Focus on what is known
Don’t start from ground zero every time you want an algorithm to detect new characteristics or learn from highly complex multi-spectral imagery. Instead, establish a stable starting point by categorizing images based on their spectral profiles using a physical model, and go from there. Introduce a stable visual alphabet as a basis for your workflows, with which to build meaning.
color33 is an automated, cloud-based processing service for deriving information from multi-spectral EO images. Without the need for training samples, it turns imagery calibrated to top-of-atmosphere reflectance into stable, comparable, sensor-independent spectral categories. Single images can be handled in near-real-time, and the service scales to large image collections.
Unlike unsupervised clustering routines, color33 is based on a physical model, producing a known set of spectral categories with known semantic associations. The service is parameter-free, fully automated and application-independent.
Sound interesting? Get in touch!
Do you analyse Earth observation imagery?
color33 is an online processing service accessible via a web-frontend or API that can fit into the processes and workflows you may already use. The resulting spectral categories can:
- automate bi-temporal change detection
- automate the calculation of normalised occurrence of spectrally similar observations over time
- stratify images, so that algorithms only consider pixels that are similar in multi-spectral space rather than processing entire images
- stratify training sample collection for machine-learning algorithms to ensure that your samples represent features of interest across the multi-spectral feature space they inhabit
- validate reflectance values from newly developed calibration or correction routines
- help you explore the spectral characteristics, spatio-temporal dynamics and heterogeneity of imagery content prior to application-specific analysis
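Once every pixel carries a stable category label, the first bullet above, bi-temporal change detection, reduces to comparing labels between dates. A minimal sketch with NumPy, assuming the service delivers category maps as integer rasters (the array shapes and category IDs here are made up for illustration):

```python
import numpy as np

# Two hypothetical color33 category maps of the same scene at two dates.
# Each value is a spectral category ID assigned per pixel.
categories_t1 = np.array([[3, 3, 7], [7, 12, 12]])
categories_t2 = np.array([[3, 7, 7], [7, 12, 3]])

# Bi-temporal change detection on categories: a pixel "changed" where
# its spectral category differs between the two acquisitions.
change_mask = categories_t1 != categories_t2
n_changed = int(change_mask.sum())  # → 2 changed pixels in this toy example
```

Because the categories are sensor-independent and comparable across dates, no per-scene recalibration or retraining is needed before the comparison.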
How can you use the colors?
Let’s look at all Sentinel-2 imagery acquired between 9 August 2019 and 24 June 2020. Processed with color33, a few of these individual images look like the following:
Instead of being faced with a bunch of reflectance values, you can start working directly with each pixel based on the color it has been assigned, which carries a semantic association:
However, while the colors (i.e. spectral categories) are actionable, there is generally no perfect, exclusive match between a semantic concept (e.g. water) and any single color. As with any other imagery analysis, some pixels representing different semantic concepts (e.g. water and deep shadow) have very similar multi-spectral profiles and require additional information to be reliably distinguished. For more advanced analysis, users can use the color categories to segment or stratify further processing. For example, instead of calculating the NDVI across an entire image and then thresholding it to identify vegetation, the colors enable users to calculate the index only on pixels that look like vegetation based on their color, removing or minimising the need for thresholding to extract meaningful information about vegetation.
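The NDVI stratification described above can be sketched as follows. This is an illustrative example only: the category IDs in VEGETATION_IDS and the toy reflectance arrays are assumptions, not part of the color33 specification.

```python
import numpy as np

# Hypothetical set of color33 category IDs whose colors suggest vegetation.
VEGETATION_IDS = {14, 15, 16}

# Toy red and NIR reflectance bands plus a per-pixel category map.
red  = np.array([[0.05, 0.20], [0.04, 0.30]])
nir  = np.array([[0.40, 0.22], [0.50, 0.31]])
cats = np.array([[14,   3],    [16,   7]])

# Stratify: compute NDVI only where the category looks like vegetation,
# leaving all other pixels as NaN instead of thresholding a full-image index.
veg = np.isin(cats, list(VEGETATION_IDS))
ndvi = np.full(cats.shape, np.nan)
ndvi[veg] = (nir[veg] - red[veg]) / (nir[veg] + red[veg])
```

Restricting the index to vegetation-colored pixels means downstream statistics summarise the feature of interest directly, rather than a mixture of vegetation, water and shadow responses.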
Are you an app developer?
color33 provides an API for processing imagery in near-real-time based on custom spatio-temporal AOIs, and offers a reliable, cloud-based approach for semantic enrichment. Relieve the pain of maintaining your own EO data infrastructure and use color33 only for what you need, when you need it.
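A request against such an API might be assembled as below. The endpoint URL, parameter names and payload structure are purely hypothetical placeholders; consult the actual service documentation for the real interface.

```python
def build_request(bbox, start, end, sensor="Sentinel-2"):
    """Assemble a hypothetical spatio-temporal AOI request body.

    bbox is assumed to be [min_lon, min_lat, max_lon, max_lat] in WGS84;
    start/end are ISO dates. All field names are illustrative.
    """
    return {
        "bbox": bbox,
        "start": start,
        "end": end,
        "sensor": sensor,
    }

# Example AOI near Salzburg over the date range discussed above.
payload = build_request([12.9, 47.7, 13.1, 47.9], "2019-08-09", "2020-06-24")
# A client would then POST this payload to the service, e.g. with
# requests.post("https://api.example.invalid/color33/process", json=payload)
```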
We want to hear from you!
Interested in learning more? We appreciate your feedback at this early stage of developing the color33 service. Do not hesitate to contact us. If you are interested in seeing how spectral categories generated by color33 are already being used, we’d love to share some existing information-generation workflows and on-top applications!
Let’s keep it simple. A friendly contact point:
University of Salzburg, Department of Geoinformatics – Z_GIS