With Amazon SageMaker, you pay only for what you use. Building, training, and hosting are billed by the second, with no minimum fees and no upfront commitments. Pricing within Amazon SageMaker is broken down by on-demand ML instances, ML storage, and fees for data processing in notebooks and hosting instances.
As part of the AWS Free Tier, you can get started with Amazon SageMaker for free. For the first two months after sign-up, you are offered a monthly free tier of 250 hours of t2.medium notebook usage for building your models, plus 50 hours of m4.xlarge for training, and a combined total of 125 hours of m4.xlarge for deploying your machine learning models for real-time inferencing and batch transform with Amazon SageMaker.
On-Demand ML instances
For building, training, and deploying your models on Amazon SageMaker, on-demand ML instances let you pay for machine learning compute capacity by the second, with a one-minute minimum and no long-term commitments. This frees you from the costs and complexities of planning, purchasing, and maintaining hardware, and transforms what are commonly large fixed costs into much smaller variable costs. Pricing is per instance-hour consumed for each instance, from the time an instance is available for use until it is terminated or stopped. Each partial instance-hour consumed is billed per second, with a one-minute minimum.
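The following is a minimal sketch, not official AWS billing code, of how per-second billing with a one-minute minimum works out in practice. The hourly rate used is the ml.m4.4xlarge rate from the examples below and should be treated as illustrative.

```python
def instance_cost(runtime_seconds: float, hourly_rate: float) -> float:
    """Cost of one ML instance run, billed per second with a 60-second minimum."""
    billable_seconds = max(runtime_seconds, 60)     # one-minute minimum
    return billable_seconds / 3600 * hourly_rate    # prorate the hourly rate

# Example: a 30-minute training run on an ml.m4.4xlarge at $1.12 per hour
print(round(instance_cost(30 * 60, 1.12), 4))       # 0.56
```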
ML General Purpose storage
For model training, Amazon SageMaker provides you with the ability to select up to 6 TB of associated General Purpose (SSD) storage capacity for your training data. For notebook, model training, and model hosting, General Purpose (SSD) storage capacity is also added for temporary data storage. With General Purpose (SSD), you will be charged for this storage. However, you will not be charged for the I/Os consumed.
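As a rough sketch of how the storage charge accrues, the example below assumes an illustrative rate of $0.14 per GB-month (this rate is not stated on this page) and a ~730-hour month; only the provisioned capacity is charged, never the I/Os.

```python
# Assumed illustrative rate; not taken from this page.
ML_STORAGE_RATE_PER_GB_MONTH = 0.14

def storage_cost(gb_provisioned: float, hours_attached: float) -> float:
    """General Purpose (SSD) ML storage is billed on provisioned capacity only;
    I/Os consumed are not charged."""
    hours_per_month = 730    # assume ~730 hours per month
    return gb_provisioned * ML_STORAGE_RATE_PER_GB_MONTH * hours_attached / hours_per_month

# Example: 50 GB attached to a training job for 2 hours
print(round(storage_cost(50, 2), 4))   # ~0.0192
```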
While you build and host your models, data processed by Amazon SageMaker is pulled into and out of notebook instances and model hosting instances, and this data processing is charged per GB in and out.
US East (N. Virginia)
US East (Ohio)
US West (Oregon)
Asia Pacific (Seoul)
Asia Pacific (Sydney)
Asia Pacific (Tokyo)
AWS GovCloud (US)
Pricing Example #1
A data scientist has spent a week working on a model for a new idea. She uses an ml.t2.medium Jupyter notebook for 105 hours, trains four times on an ml.m4.4xlarge for 30 minutes per training run, and deploys the model to an ml.t2.medium for 10 minutes for each of the four evaluations. She pulls 3 GB of data into the notebook for preparation and pushes 2 GB back into Amazon S3. The evaluation dataset is 1 GB of data, and inferences are 1/10 the size of the input data.
| Hours | Notebook Instance | Training Instance | Hosting Instance | Cost per hour | Total |
|---|---|---|---|---|---|
| 105 | ml.t2.medium | | | $0.0464 | $4.872 |
| 4 * 0.5 = 2 | | ml.m4.4xlarge | | $1.12 | $2.24 |
| 4 * 10/60 = 0.67 | | | ml.t2.medium | $0.065 | $0.0435 |

| GB processed In - Notebooks | GB processed Out - Notebooks | GB Data In - Hosting | GB Data Out - Hosting | Cost per GB In or Out | Total |
|---|---|---|---|---|---|
| 4 * 0.1 = 0.4 GB | 4 * 1 = 4 GB | 4 * 1 = 4 GB | 4 * 0.1 = 0.4 GB | $0.016 | $0.1408 |
The sub-total for authoring, training, and hosting = $7.1555.
The sub-total for pulling 3 GB of training data into notebooks (3 * $0.016 = $0.048) and moving 2 GB into Amazon S3 (2 * $0.016 = $0.032) = $0.08.
The sub-total for Data Processed In and Out of Notebooks and Hosting = $0.1408.
The Total for this workflow example is $7.3763.
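The short sketch below reproduces the Example #1 arithmetic using the rates shown in the tables above; treat the rates as illustrative rather than current pricing.

```python
notebook = 105 * 0.0464          # 105 hours of ml.t2.medium notebook usage
training = 4 * 0.5 * 1.12        # 4 training runs x 30 min on ml.m4.4xlarge
hosting  = 0.67 * 0.065          # 4 evaluations x 10 min on ml.t2.medium (~0.67 hours)
compute  = notebook + training + hosting             # ~= $7.1555

notebook_data = (3 + 2) * 0.016                      # 3 GB in + 2 GB out of notebooks
eval_data     = (0.4 + 4 + 4 + 0.4) * 0.016          # notebook and hosting, in and out

total = compute + notebook_data + eval_data          # ~= $7.3763
print(round(total, 2))                               # 7.38
```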
Pricing Example #2
The model in Example #1 is then deployed to production on three (3) ml.t2.medium instances for reliable multi-AZ hosting, and is retrained once per week. The model receives 100 MB of data per day, and inferences are 1/10 the size of the input data.
| Hours per month | Training Instances | Hosting Instances | Cost per hour | Total |
|---|---|---|---|---|
| 4 | ml.m4.4xlarge | | $1.12 | $4.48 |
| 24 * 31 * 3 = 2232 | | ml.t2.medium | $0.065 | $145.08 |
| Data In per Month - Hosting | Data Out per Month - Hosting | Cost per GB In or Out | Total |
|---|---|---|---|
| 100MB * 31 = 3,100MB | | $0.016 | $0.0496 |
| | 10MB * 31 = 310MB | $0.016 | $0.00496 |
The sub-total for training and hosting = $149.56. The sub-total for 3,100 MB of data processed in and 310 MB of data processed out of hosting per month = $0.05456. The total for this workflow example would be $149.61 per month.
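As a quick check, the sketch below reproduces the Example #2 monthly arithmetic with the same illustrative rates as above (ml.t2.medium hosting at $0.065/hour, ml.m4.4xlarge training at $1.12/hour, $0.016 per GB processed).

```python
hosting  = 24 * 31 * 3 * 0.065        # three ml.t2.medium instances, all month
training = 4 * 1.12                   # weekly retraining, ~4 instance-hours per month
data_in  = (100 / 1000) * 31 * 0.016  # 100 MB of inference requests per day
data_out = (10 / 1000) * 31 * 0.016   # responses are 1/10 the input size

total = hosting + training + data_in + data_out
print(round(total, 2))                # ~149.61
```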
Pricing Example #3
Amazon SageMaker Batch Transform only charges you for the instances used while your jobs are executing. If your data is already in Amazon S3, then there is no cost for reading input data from S3 and writing output data to S3.
The model in Example #1 is used to run Batch Transform. The data scientist runs four separate Batch Transform jobs on 3 ml.m4.4xlarge instances for 15 minutes per job run. She uploads a 1 GB evaluation dataset to S3 for each run; the inferences are 1/10 the size of the input data and are stored back in S3.
| Hours | Batch Transform Instances | Cost per hour | Total |
|---|---|---|---|
| 3 * 0.25 * 4 = 3 hours | ml.m4.4xlarge | $1.12 | $3.36 |
| GB Data In - Batch Transform | GB Data Out - Batch Transform | Cost per GB In or Out | Total |
|---|---|---|---|
| 4 * 1 = 4 GB | 4 * 0.1 = 0.4 GB | $0 | $0 |
The sub-total for the Batch Transform jobs = $3.36. The sub-total for 4.4 GB of data in and out of Amazon S3 = $0. The total for this workflow example would be $3.36.
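The sketch below reproduces the Example #3 arithmetic: Batch Transform is billed only for instance time while the jobs run, and reading from and writing to Amazon S3 adds no data processing charge in this scenario.

```python
instance_hours = 3 * 0.25 * 4        # 3 instances x 15 min x 4 jobs = 3 hours
compute = instance_hours * 1.12      # ml.m4.4xlarge at $1.12 per hour

s3_data = (4 * 1) + (4 * 0.1)        # 4 GB in + 0.4 GB out of S3
data_cost = s3_data * 0.0            # charged at $0 per GB for Batch Transform

print(round(compute + data_cost, 2)) # 3.36
```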