Amazon SageMaker
Amazon SageMaker is a fully managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. With Amazon SageMaker, all the barriers and complexity that typically slow down developers who want to use machine learning are removed. The service includes modules that can be used together or independently to build, train, and deploy your machine learning models.

Degas 100M
By:
Latest Version:
1.0
A geospatial foundational model that can be fine-tuned to your specific earth observation tasks.
Product Overview
Degas 100M is a pretrained, cutting-edge geospatial foundational model designed for highly accurate earth observation. It uses an in-house spatio-temporal SwinMAE architecture to produce a robust geospatial model that adapts seamlessly to custom downstream tasks. It accepts both single-capture imagery and multi-timestamp data (up to 3 timestamps) as input, making it a reliable option for both land cover analysis and temporal change detection.
Key Data
Version
By
Type: Algorithm
Highlights
Easy to fine-tune: We offer a simple interface that automates training and deployment of the model via SageMaker. With just a Jupyter notebook you can easily fine-tune the model for your specific tasks and quickly deploy it to production.
Input versatility: Designed for straightforward integration into your existing workflows. It handles 3+ channels for both single-capture imagery and multi-timestamp captures.
High performance and cost reduction: Degas 100M demonstrates superior performance when compared to state-of-the-art geospatial foundational models. We demonstrated improvements on flood mapping, wildfire scar mapping, and land cover classification. It particularly excels at the latter, with a 10.4% accuracy improvement on the PhilEO benchmark dataset. More details are available in the technical paper, or contact us directly at sales@degasafrica.com.
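As an illustration of the input versatility described above — a minimal sketch, assuming a (bands, height, width) layout for single captures and (timestamps, bands, height, width) for multi-timestamp stacks; the band count and scene size here are hypothetical, and the model's exact convention should be confirmed in the sample notebook:

```python
import numpy as np

# Hypothetical shapes; the model's actual layout may differ.
BANDS, SIZE = 6, 224  # e.g. 6 spectral bands, 224x224 square scene

# Single-capture scene: (bands, height, width)
single = np.zeros((BANDS, SIZE, SIZE), dtype=np.float32)

# Multi-timestamp stack: up to 3 timestamps -> (timestamps, bands, height, width)
multi = np.zeros((3, BANDS, SIZE, SIZE), dtype=np.float32)

def n_timestamps(x: np.ndarray) -> int:
    """Return the number of timestamps in an input array (1 for single capture)."""
    return x.shape[0] if x.ndim == 4 else 1

print(n_timestamps(single))  # 1
print(n_timestamps(multi))   # 3
```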
Pricing Information
Use this tool to estimate the software and infrastructure costs based on your configuration choices. Your usage and costs might differ from this estimate; they will be reflected on your monthly AWS billing reports.
Contact us to request contract pricing for this product.
Estimating your costs
Choose your region and launch option to see the pricing details. Then, modify the estimated price by choosing different instance types.
Version
Region
Software Pricing
Algorithm Training: $1.50/hr
running on ml.p3.2xlarge
Model Realtime Inference: $1.00/hr
running on ml.p3.2xlarge
Model Batch Transform: $1.00/hr
running on ml.p3.2xlarge
Infrastructure Pricing
With Amazon SageMaker, you pay only for what you use. Training and inference are billed by the second, with no minimum fees and no upfront commitments. Pricing within Amazon SageMaker is broken down by on-demand ML instances, ML storage, and fees for data processing in notebooks and inference instances.
Learn more about SageMaker pricing
SageMaker Algorithm Training: $3.825/host/hr
running on ml.p3.2xlarge
SageMaker Realtime Inference: $3.825/host/hr
running on ml.p3.2xlarge
SageMaker Batch Transform: $3.825/host/hr
running on ml.p3.2xlarge
Algorithm Training
For algorithm training in Amazon SageMaker, the software is priced on an hourly basis that can vary by instance type. Additional infrastructure costs, taxes, or fees may apply.

Instance Type | Algorithm/hr
---|---
ml.p3.2xlarge (Vendor Recommended) | $1.50
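Putting the numbers above together — a rough estimate only, combining the listed software price ($1.50/hr) with the SageMaker on-demand infrastructure price for ml.p3.2xlarge ($3.825/host/hr); actual charges are billed per second and appear on your AWS bill:

```python
# Rough training-cost estimate on ml.p3.2xlarge (single host by default).
SOFTWARE_PER_HR = 1.50  # algorithm software price, $/hr
INFRA_PER_HR = 3.825    # SageMaker on-demand instance price, $/host/hr

def estimated_training_cost(hours: float, hosts: int = 1) -> float:
    """Software + infrastructure cost in USD; billing is per second in practice."""
    return (SOFTWARE_PER_HR + INFRA_PER_HR) * hours * hosts

print(round(estimated_training_cost(10), 2))  # 53.25
```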
Usage Information
Training
Training Input data
Supported types: TIFF.
For our input datasets, we adopt the same data format as used in the burn scars dataset. In this format, data is arranged as a collection of TIFF files and their corresponding masks. TIFF files should be square scenes where the first dimension represents the number of bands and the other two are the spatial dimensions. The mask associates each pixel with a label; the number of mask channels is equal to the number of classes in the scene, plus an extra band for missing data.
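A minimal sketch of the consistency checks implied by this layout, using plain NumPy arrays in place of the TIFF contents (reading the actual files would need a library such as rasterio, which is not shown here); the band count and class count below are hypothetical:

```python
import numpy as np

def check_pair(scene: np.ndarray, mask: np.ndarray, n_classes: int) -> None:
    """Validate one scene/mask pair against the layout described above."""
    bands, h, w = scene.shape  # first dimension is bands, then spatial dims
    assert h == w, "scenes must be square"
    # mask channels = number of classes + 1 extra band for missing data
    assert mask.shape == (n_classes + 1, h, w), "mask/scene shape mismatch"

# Hypothetical example: 6-band 224x224 scene, 2-class task (e.g. burn / no burn)
scene = np.zeros((6, 224, 224), dtype=np.float32)
mask = np.zeros((2 + 1, 224, 224), dtype=np.uint8)
check_pair(scene, mask, n_classes=2)
print("ok")
```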
Channel specification
Fields marked with * are required
training
*Input channel that provides training data
Input modes: File
Content types: -
Compression types: None
Hyperparameters
Fields marked with * are required
training_args
*training parameters
Type: FreeText
Tunable: No
task_args
*task parameters
Type: FreeText
Tunable: No
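Since both hyperparameters are FreeText, one plausible way to supply them is as JSON strings — a sketch with entirely hypothetical keys (the fields the algorithm actually accepts are shown in the vendor's sample notebook, not invented here):

```python
import json

# Hypothetical contents; consult the sample notebook for the real keys.
hyperparameters = {
    "training_args": json.dumps({"epochs": 50, "lr": 1e-4, "batch_size": 8}),
    "task_args": json.dumps({"task": "segmentation", "num_classes": 2}),
}

# A SageMaker Estimator would receive this dict via its hyperparameters
# argument; each FreeText value must round-trip cleanly as a string.
decoded = json.loads(hyperparameters["training_args"])
print(decoded["epochs"])  # 50
```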
Model input and output details
Input
Summary
Example inputs: example datasets are available for burn scars and multi-temporal crop classification.
Input MIME type
application/x-recordio
Output
Summary
The fine-tuned model consumes numpy data and returns a segmentation/classification mask.
Output MIME type
application/x-recordio
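To make the numpy-in / mask-out contract concrete, here is a hedged sketch of serializing an input array to bytes and reading a response back, using NumPy's `.npy` format as a stand-in for the actual application/x-recordio wire format (the sample notebook demonstrates the real request/response handling):

```python
import io
import numpy as np

def to_payload(arr: np.ndarray) -> bytes:
    """Serialize an input array to bytes (stand-in for the real request body)."""
    buf = io.BytesIO()
    np.save(buf, arr)
    return buf.getvalue()

def from_payload(body: bytes) -> np.ndarray:
    """Deserialize a response body back into a mask array."""
    return np.load(io.BytesIO(body))

# Round-trip a fake 2-class-plus-missing-data mask to show the shapes involved.
mask = np.zeros((3, 224, 224), dtype=np.uint8)
restored = from_payload(to_payload(mask))
print(restored.shape)  # (3, 224, 224)
```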
Sample notebook
Additional Resources
End User License Agreement
By subscribing to this product, you agree to the terms and conditions outlined in the product's End User License Agreement (EULA).
Support Information
AWS Infrastructure
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
Learn More
Refund Policy
Degas does not offer refunds under any circumstances.
Customer Reviews
There are currently no reviews for this product.