With Amazon SageMaker, you pay only for what you use. Building, training, and deploying ML models is billed by the second, with no minimum fees and no upfront commitments. Pricing within Amazon SageMaker is broken down by on-demand ML instances, ML storage, and fees for data processing in notebooks and hosting instances.
Try Amazon SageMaker for two months, free!
As part of the AWS Free Tier, you can get started with Amazon SageMaker for free. If you have never used Amazon SageMaker before, for your first two months you are offered a monthly free tier of 250 hours of t2.medium or t3.medium notebook usage for building your models, plus 50 hours of m4.xlarge or m5.xlarge for training, plus 125 hours of m4.xlarge or m5.xlarge for deploying your machine learning models for real-time inference and batch transform with Amazon SageMaker. Your free tier starts in the month you create your first SageMaker resource.
Included with Amazon SageMaker Training and Hosting
When you train your models in Amazon SageMaker and enable Amazon SageMaker Debugger, you can use built-in rules for debugging, write your own custom rules, or both. SageMaker Debugger provides a fully managed experience for running both built-in and custom rules as Amazon SageMaker Processing jobs. For built-in rules, there is no charge, and Amazon SageMaker Debugger automatically selects an instance type. For custom rules, you need to choose an instance type (e.g., ml.m5.xlarge), and you are charged for the duration for which the instance is in use by the Amazon SageMaker Processing job.
When you deploy your models as Amazon SageMaker endpoints for real-time inference and enable Amazon SageMaker Model Monitor, you can use built-in rules to monitor your models, write your own custom rules, or both. Model Monitor provides a fully managed experience for running both built-in and custom rules as Amazon SageMaker Processing jobs. For built-in rules with an ml.m5.xlarge instance, you get up to 30 hours of monitoring, aggregated across all endpoints, each month at no charge. Usage beyond 30 hours, or usage of other ML instance types, is charged for the duration for which the instance is in use at the Amazon SageMaker Processing on-demand rate.
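The free-tier logic described above can be sketched as follows; the hourly rate passed in is illustrative, not an official price.

```python
def model_monitor_builtin_charge(ml_m5_xlarge_hours, rate_per_hour):
    """Built-in Model Monitor rules on ml.m5.xlarge: the first 30 hours
    each month (aggregated across all endpoints) are free; anything
    beyond that is billed at the on-demand Processing rate."""
    FREE_TIER_HOURS = 30
    billable_hours = max(0.0, ml_m5_xlarge_hours - FREE_TIER_HOURS)
    return billable_hours * rate_per_hour
```

With 25 hours of monitoring the charge is zero; with 40 hours, only the 10 hours beyond the free tier are billed.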
Choice of Amazon EC2 On-demand and Spot Instances
With Amazon SageMaker, you can choose between Amazon EC2 On-Demand Instances and Amazon EC2 Spot Instances. For building, training, and deploying your models on Amazon SageMaker, on-demand ML instances let you pay for machine learning compute capacity by the second, with a one-minute minimum and no long-term commitments. This frees you from the costs and complexities of planning, purchasing, and maintaining hardware, and transforms what are commonly large fixed costs into much smaller variable costs. Pricing is per instance-hour consumed for each instance, from the time an instance becomes available for use until it is terminated or stopped. Each partial instance-hour consumed is billed per second, with a one-minute minimum.
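The per-second billing rule with a one-minute minimum can be sketched as follows (the rate is a placeholder, not a quoted price):

```python
def on_demand_instance_cost(seconds_used, rate_per_hour):
    """On-demand ML instance cost: billed per second of use,
    with a one-minute minimum per instance."""
    billable_seconds = max(seconds_used, 60)  # one-minute minimum
    return billable_seconds / 3600 * rate_per_hour
```

A 30-second run is therefore billed as a full 60 seconds, while a 2-hour run is billed for exactly 7,200 seconds.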
For training your ML models, you can use Amazon EC2 Spot Instances with Managed Spot Training. This option can reduce the cost of training your machine learning models by up to 90%. Once a Managed Spot Training job completes, you can calculate the cost savings as the percentage difference between the duration for which the training job ran and the duration for which you were billed. The cost savings are also shown in the AWS Management Console.
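The savings calculation described above amounts to comparing run time with billed time, for example:

```python
def managed_spot_savings_percent(run_seconds, billed_seconds):
    """Managed Spot Training savings: the percentage difference between
    the duration the training job ran and the duration you were billed for."""
    return (1 - billed_seconds / run_seconds) * 100
```

A job that ran for 100 minutes but was billed for only 30 minutes saved 70%.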
When building and deploying your model, you have the option to attach fractional GPU compute capacity to your Amazon SageMaker endpoint using Amazon Elastic Inference in select AWS Regions. If you choose to add an Amazon Elastic Inference accelerator, you will be billed for the accelerator hours. For more details on inference acceleration with Amazon SageMaker, see the Amazon Elastic Inference website. For regional availability of Amazon Elastic Inference, see the Region table.
ML General Purpose storage
For model training, Amazon SageMaker lets you select up to 6 TB of associated General Purpose (SSD) storage capacity for your training data. For notebooks, model training, and model hosting, General Purpose (SSD) storage capacity is also added for temporary data storage. You are charged for this General Purpose (SSD) storage, but not for the I/O operations consumed.
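Judging from the worked examples later on this page, storage appears to be prorated by the time a volume is in use. A rough sketch, assuming a 720-hour billing month and an illustrative $0.14 per GB-month rate:

```python
def prorated_storage_cost(gb, hours_in_use, price_per_gb_month=0.14):
    """Approximate prorated cost of General Purpose (SSD) ML storage.
    Assumes a 720-hour billing month and an illustrative per-GB-month
    rate; I/O operations are not charged."""
    HOURS_PER_MONTH = 720
    return gb * price_per_gb_month * hours_in_use / HOURS_PER_MONTH
```

For example, 100 GB attached for a 10-minute job would cost roughly $0.0032 under these assumptions.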
While you build and host your models, data processed by Amazon SageMaker is pulled into and out of notebook instances and model hosting instances.
US East (N. Virginia)
US East (Ohio)
US West (Oregon)
US West (N. California)
AWS GovCloud (US)
Canada Central (Montreal)
Asia Pacific (Hong Kong)
Asia Pacific (Mumbai)
Asia Pacific (Seoul)
Asia Pacific (Singapore)
Asia Pacific (Sydney)
Asia Pacific (Tokyo)
South America (Sao Paulo)
Middle East (Bahrain)
Pricing Example #1
A data scientist has spent a week working on a model for a new idea. She trains the model 4 times on an ml.m4.4xlarge instance for 30 minutes per training run, with Amazon SageMaker Debugger enabled using 2 built-in rules and 1 custom rule that she wrote. For the custom rule, she specified an ml.m5.xlarge instance. She trains using 3 GB of training data in Amazon S3 and pushes 1 GB of model output into Amazon S3. SageMaker creates a General Purpose SSD (gp2) volume for each training instance, and also a General Purpose SSD (gp2) volume for each rule specified. In this example, a total of 4 General Purpose SSD (gp2) volumes are created. SageMaker Debugger emits 1 GB of debug data to the customer's Amazon S3 bucket.
|Hours|Training instance|Debug instance|Cost per hour|Sub-total|
|---|---|---|---|---|
|4 * 0.5 = 2|ml.m4.4xlarge||$1.12|$2.24|
|4 * 0.5 * 2 = 4||Built-in rules (instance selected automatically)|No additional charge for built-in rule instances|$0|
|4 * 0.5 = 2||ml.m5.xlarge (custom rule)|$0.269|$0.538|

|Storage volume|Cost per GB-month|Sub-total|
|---|---|---|
|General Purpose (SSD) storage for training|$0.14|$0.00083|
|General Purpose (SSD) storage for Debugger built-in rules|No additional charge for built-in rule storage volumes|$0|
|General Purpose (SSD) storage for Debugger custom rules|$0.14|—|
The total charges for training and debugging in this example are $2.7811. The compute instances and general purpose storage volumes used by SageMaker Debugger built-in rules do not incur additional charges.
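The compute portion of this example can be rechecked as follows. The $1.12 ml.m4.4xlarge rate appears elsewhere on this page; the $0.269 ml.m5.xlarge Processing rate is an assumption, and the small storage charge is left out.

```python
runs, hours_per_run = 4, 0.5

training_hours = runs * hours_per_run          # 2 hours on ml.m4.4xlarge
builtin_rule_hours = runs * hours_per_run * 2  # 4 hours, no charge
custom_rule_hours = runs * hours_per_run       # 2 hours on ml.m5.xlarge

training_cost = training_hours * 1.12          # ml.m4.4xlarge training rate
builtin_rule_cost = 0.0                        # built-in rules are free
custom_rule_cost = custom_rule_hours * 0.269   # assumed ml.m5.xlarge rate

compute_total = training_cost + builtin_rule_cost + custom_rule_cost  # ~$2.778
```

Adding the small prorated storage charge brings the total to the $2.7811 quoted above.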
Pricing Example #2
The model in Example #1 is then deployed to production on two (2) ml.c5.xlarge instances for reliable multi-AZ hosting. Amazon SageMaker Model Monitor is enabled with one (1) ml.m5.4xlarge instance, and monitoring jobs are scheduled once per day. Each monitoring job takes 5 minutes to complete. The model receives 100 MB of data per day, and inferences are 1/10 the size of the input data.
|Hours per month|Hosting instance|Model Monitor instance|Cost per hour|Total|
|---|---|---|---|---|
|24 * 31 * 2 = 1488|ml.c5.xlarge||$0.238|$354.144|
|31 * 0.08 = 2.5||ml.m5.4xlarge|$1.075|$2.688|
|Data In per month - Hosting|Data Out per month - Hosting|Cost per GB In or Out|Total|
|---|---|---|---|
|100 MB * 31 = 3100 MB||$0.02|$0.05|
||10 MB * 31 = 310 MB|$0.02|$0.006|
The sub-total for hosting and monitoring is $356.832, and the sub-total for 3100 MB of data processed in and 310 MB of data processed out for hosting per month is $0.056. The total for this workflow example would be $356.887 per month.
Note: for built-in rules with an ml.m5.xlarge instance, you get up to 30 hours of monitoring, aggregated across all endpoints, each month at no charge.
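As a rough recheck of this example (hourly rates taken from the table above, hours rounded the way the example rounds them):

```python
hosting_hours = 24 * 31 * 2              # two ml.c5.xlarge endpoints, full month
hosting_cost = hosting_hours * 0.238     # ~$354.144

monitor_hours = round(31 * 0.08, 1)      # one 5-minute job per day, ~2.5 hours
monitor_cost = monitor_hours * 1.075     # ml.m5.4xlarge; free tier does not apply

data_cost = 0.056                        # data in/out sub-total from above

total = hosting_cost + monitor_cost + data_cost  # ~$356.89 per month
```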
Pricing Example #3
Amazon SageMaker Batch Transform only charges you for the instances used while your jobs are executing. If your data is already in Amazon S3, there is no cost for reading input data from S3 or writing output data to S3.
The model in Example #1 is used to run Batch Transform jobs. The data scientist runs four separate Batch Transform jobs on three ml.m4.4xlarge instances for 15 minutes per job run. For each run she uploads an evaluation dataset of 1 GB to S3; inferences are 1/10 the size of the input data and are stored back in S3.
|Hours|Batch Transform instance|Cost per hour|Total|
|---|---|---|---|
|3 * 0.25 * 4 = 3|ml.m4.4xlarge|$1.12|$3.36|

|GB Data In - Batch Transform|GB Data Out - Batch Transform|Cost per GB In or Out|Total|
|---|---|---|---|
|4 * 1 GB = 4 GB|4 * 0.1 GB = 0.4 GB|$0|$0|
The sub-total for the Batch Transform jobs is $3.36, and the sub-total for 4.4 GB of data into and out of Amazon S3 is $0. The total for this workflow example would be $3.36.
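The arithmetic for this example, as a quick sketch:

```python
instances, hours_per_job, jobs = 3, 0.25, 4

transform_hours = instances * hours_per_job * jobs  # 3 instance-hours
transform_cost = transform_hours * 1.12             # ml.m4.4xlarge at $1.12/hour
s3_cost = 0.0   # no charge for Batch Transform input/output already in S3

total = transform_cost + s3_cost  # ~$3.36
```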
Pricing Example #4
Amazon SageMaker Processing only charges you for the instances used while your jobs are executing. When you provide the input data for processing in Amazon S3, Amazon SageMaker downloads the data from Amazon S3 to local file storage at the start of a processing job.
A data analyst runs a Processing job to preprocess and validate data on two ml.m5.4xlarge instances, with a job duration of 10 minutes. She uploads a dataset of 100 GB to S3 as input for the processing job, and the output data, which is roughly the same size, is stored back in S3.
|Hours|Processing instance|Cost per hour|Total|
|---|---|---|---|
|1 * 2 * 0.167 = 0.334|ml.m5.4xlarge|$1.075|$0.358|

|General Purpose (SSD) storage (GB)|Cost per GB-month|Total|
|---|---|---|
|100 GB * 2 = 200|$0.14|$0.0032|
The sub-total for the Amazon SageMaker Processing job is $0.358, and the sub-total for 200 GB of General Purpose SSD storage is $0.0032. The total price for this example would be $0.3612.
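The same total can be reproduced as follows; the processing cost uses the exact 10-minute duration, and the storage sub-total is taken as given above:

```python
instances = 2
job_hours = 10 / 60                              # 10-minute Processing job
processing_cost = instances * job_hours * 1.075  # ml.m5.4xlarge, ~$0.358

storage_cost = 0.0032                            # prorated gp2 storage, as given

total = processing_cost + storage_cost           # ~$0.3612
```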