Amazon SageMaker

Machine learning for every developer and data scientist.

Amazon SageMaker provides every developer and data scientist with the ability to build, train, and deploy machine learning models quickly. Amazon SageMaker is a fully managed service that covers the entire machine learning workflow: label and prepare your data, choose an algorithm, train the model, tune and optimize it for deployment, make predictions, and take action. Your models get to production faster with much less effort and lower cost.

BUILD

Collect & prepare training data

Data labeling & pre-built notebooks for common problems

Choose & optimize your ML algorithm

Built-in, high-performance algorithms, plus hundreds of ready-to-use algorithms in AWS Marketplace

TRAIN

Set up & manage environments for training

One-click training on the highest-performing infrastructure

Train & tune model

Train once, run anywhere & model optimization

DEPLOY

Deploy model in production

One-click deployment

Scale & manage the production environment

Fully managed with auto-scaling, at up to 75% lower inference cost

Collect and prepare training data

Label training data fast

Amazon SageMaker Ground Truth helps you build and manage highly accurate training datasets quickly. Ground Truth offers easy access to public and private human labelers and provides them with pre-built workflows and interfaces for common labeling tasks. Additionally, Ground Truth will learn from human labels to make high quality, automatic annotations to significantly lower labeling costs.
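The cost saving comes from automated labeling: confident machine labels are kept, and only uncertain items go to human labelers. This is an illustrative pure-Python sketch of that routing loop, not the Ground Truth API; the threshold and toy model are made up.

```python
# Illustrative sketch, not the Ground Truth API: automated labeling keeps
# high-confidence machine labels and routes only uncertain items to human
# labelers, which is where the cost reduction comes from.

CONFIDENCE_THRESHOLD = 0.95  # hypothetical cutoff for trusting a machine label

def auto_label(items, model):
    """Split items into machine-labeled pairs and a human-review queue."""
    machine_labeled, needs_human = [], []
    for item in items:
        label, confidence = model(item)
        if confidence >= CONFIDENCE_THRESHOLD:
            machine_labeled.append((item, label))
        else:
            needs_human.append(item)
    return machine_labeled, needs_human

# Toy stand-in model: confident on even numbers, uncertain on odd ones.
def toy_model(x):
    return ("even" if x % 2 == 0 else "odd", 0.99 if x % 2 == 0 else 0.60)

auto, human = auto_label(range(10), toy_model)
print(len(auto), len(human))  # 5 auto-labeled, 5 sent to human labelers
```

As the model improves over successive batches, more items clear the threshold and the human-labeling bill shrinks.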


Hosted notebooks

Fully managed Jupyter notebooks that you can use in the cloud or on your local machine to explore and visualize your data and develop your model. In addition to starting from scratch, you can choose from dozens of pre-built notebooks that you can use as-is, or modify to suit your specific needs, to make it easy to explore and visualize your training data quickly. Solutions are available for many common problems such as recommendations and personalization, fraud detection, forecasting, image classification, churn prediction, customer targeting, log processing and anomaly detection, and speech-to-text.

 


Choose and optimize your machine learning algorithm

Amazon SageMaker automatically configures and optimizes TensorFlow, Apache MXNet, PyTorch, Chainer, Scikit-learn, SparkML, Horovod, Keras, and Gluon. Commonly used machine learning algorithms are built in and tuned for scale, speed, and accuracy, with over 200 additional pre-trained models and algorithms available in AWS Marketplace. You can also bring any other algorithm or framework by packaging it in a Docker container.
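When you bring your own container, SageMaker invokes it against a documented directory contract: hyperparameters arrive in /opt/ml/input/config/hyperparameters.json, input channels are mounted under /opt/ml/input/data, and the container must leave model artifacts in /opt/ml/model. A minimal sketch of a training entrypoint honoring that contract, with the base path parameterized so it can run outside a container:

```python
import json
import pathlib
import tempfile

# Sketch of the documented SageMaker training-container contract: the
# container's training entrypoint reads hyperparameters and data mounted
# under /opt/ml and must write model artifacts to /opt/ml/model.
# The base path is a parameter here so the sketch can run outside a container.

def train(base="/opt/ml"):
    base = pathlib.Path(base)
    hp_file = base / "input" / "config" / "hyperparameters.json"
    hyperparams = json.loads(hp_file.read_text()) if hp_file.exists() else {}
    # A real entrypoint would also read channels from base / "input" / "data".
    model_dir = base / "model"
    model_dir.mkdir(parents=True, exist_ok=True)
    artifact = model_dir / "model.json"
    artifact.write_text(json.dumps({"hyperparams": hyperparams}))
    return artifact

# Demo against a temporary directory instead of /opt/ml.
with tempfile.TemporaryDirectory() as tmp:
    cfg = pathlib.Path(tmp) / "input" / "config"
    cfg.mkdir(parents=True)
    (cfg / "hyperparameters.json").write_text('{"learning_rate": "0.1"}')
    print(train(tmp).read_text())  # {"hyperparams": {"learning_rate": "0.1"}}
```

Anything written to the model directory is what SageMaker packages up as the trained model artifact.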


Set up and manage training environments

One-click training

Begin training your model with a single click. Amazon SageMaker handles all of the underlying infrastructure, scaling easily to petabyte-scale datasets.
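Part of what makes large datasets tractable is how a training job's S3 input is handed to the instances. SageMaker's documented distribution types are "FullyReplicated" (every instance receives all objects) and "ShardedByS3Key" (objects are partitioned across instances). A small pure-Python sketch of the sharding idea, not the service's internal implementation:

```python
# Sketch of how a managed training job can spread a large dataset across
# instances. SageMaker S3 inputs support two documented distribution types:
# "FullyReplicated" (each instance gets every object) and "ShardedByS3Key"
# (objects are partitioned across instances, so no single machine has to
# hold a petabyte-scale dataset).

def distribute(s3_keys, num_instances, mode="ShardedByS3Key"):
    if mode == "FullyReplicated":
        return [list(s3_keys) for _ in range(num_instances)]
    # Round-robin partition by key: one disjoint shard per training instance.
    shards = [[] for _ in range(num_instances)]
    for i, key in enumerate(s3_keys):
        shards[i % num_instances].append(key)
    return shards

keys = [f"s3://bucket/part-{i:05d}" for i in range(10)]
print(distribute(keys, 4))                              # 4 disjoint shards
print(len(distribute(keys, 4, "FullyReplicated")[0]))   # 10: full copy each
```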

The fastest distributed machine learning in the cloud

Amazon EC2 P3 instances provide up to 8 NVIDIA Tesla V100 GPUs:

64 scalable vCPUs (Intel Xeon Skylake with AVX-512)
25 Gbps networking throughput
16 GB of memory per GPU

The best place to run TensorFlow

AWS’ TensorFlow optimizations provide near-linear scaling efficiency across hundreds of GPUs, letting you operate at cloud scale without significant processing overhead and train more accurate, more sophisticated models in much less time.

Fully managed training and hosting

Near-linear scaling across hundreds of GPUs

75% lower inference costs

 

[Chart: scaling efficiency with 256 GPUs — stock TensorFlow: 65%; AWS-optimized TensorFlow: 90%]
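To make the efficiency figures above concrete: scaling efficiency is measured speedup divided by the ideal linear speedup, so the effective throughput of an N-GPU cluster is N times the efficiency.

```python
# Scaling efficiency = measured speedup / ideal (linear) speedup.
# At 90% efficiency, 256 GPUs deliver roughly 230x single-GPU throughput;
# at 65%, roughly 166x.

def effective_speedup(num_gpus, efficiency):
    return num_gpus * efficiency

print(effective_speedup(256, 0.90))  # 230.4
print(effective_speedup(256, 0.65))  # 166.4
```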

Tune and optimize your model

Automatically tune your model

Automatic Model Tuning uses machine learning to quickly tune your model to be as accurate as possible. This capability lets you skip the tedious trial-and-error process of manually adjusting model parameters. Instead, over multiple training runs, Automatic Model Tuning performs hyperparameter optimization, learning from each run how different hyperparameter combinations affect model accuracy and converging on the best-performing ones. You save days, or even weeks, of time while maximizing the quality of your trained model.
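The shape of the search loop can be sketched in a few lines. This is a hedged illustration only: it uses random search over a made-up objective for brevity, whereas Automatic Model Tuning uses Bayesian optimization to choose each next hyperparameter combination based on the results of earlier training runs.

```python
import random

# Illustrative hyperparameter search. Random search is used here for brevity;
# SageMaker's Automatic Model Tuning uses Bayesian optimization, picking each
# next combination based on the outcomes of previous training runs.

random.seed(0)

def validation_score(learning_rate, num_layers):
    # Stand-in for a full training run returning a validation metric;
    # this toy objective peaks near learning_rate=0.1, num_layers=3.
    return 1.0 - abs(learning_rate - 0.1) - 0.05 * abs(num_layers - 3)

best = None
for _ in range(20):  # 20 hypothetical training jobs
    candidate = {
        "learning_rate": random.uniform(0.001, 1.0),
        "num_layers": random.randint(1, 8),
    }
    score = validation_score(**candidate)
    if best is None or score > best[0]:
        best = (score, candidate)

print(best)  # best (score, hyperparameters) found across the 20 jobs
```

Swapping the random sampler for a model of the score surface is what turns this from trial-and-error into guided optimization.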

Train once, run anywhere

Amazon SageMaker Neo lets you train a model once, and deploy it anywhere. Using machine learning, SageMaker Neo will automatically optimize any trained model built with a popular framework for the hardware platform you specify with no loss in accuracy. You can then deploy your model to EC2 instances and SageMaker instances, or any device at the edge that includes the Neo runtime, including AWS IoT Greengrass devices.

Both the Neo compiler and runtime are also available as open source software.


Deploy and manage models in production

One-click deploy to production

Amazon SageMaker makes it easy to deploy your trained model in production with a single click so that you can start generating predictions (a process called inference) for real-time or batch data. Your model runs on auto-scaling clusters of Amazon SageMaker instances that are spread across multiple Availability Zones to deliver both high performance and high availability. Amazon SageMaker also includes built-in A/B testing capabilities to help you test your model and experiment with different versions to achieve the best results.
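The A/B testing works by hosting multiple production variants behind one endpoint and splitting traffic between them by weight. A toy pure-Python router illustrating that weighted split (the variant names and 90/10 weights are hypothetical, not a SageMaker default):

```python
import random

# Sketch of endpoint A/B testing: an endpoint hosts multiple production
# variants and splits traffic between them by weight. This toy router sends
# about 90% of requests to "model-a" and 10% to "model-b" (weights are
# hypothetical, chosen for illustration).

VARIANTS = [("model-a", 0.9), ("model-b", 0.1)]

def route(rng):
    names, weights = zip(*VARIANTS)
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(42)
counts = {"model-a": 0, "model-b": 0}
for _ in range(10_000):
    counts[route(rng)] += 1
print(counts)  # roughly 9000 vs 1000
```

Shifting the weights gradually toward the better-performing variant is the usual way to promote a new model version without a hard cutover.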

Lower inference costs by up to 75%

Reduce your deep learning inference costs by up to 75% using Amazon Elastic Inference to attach elastic GPU acceleration to your Amazon SageMaker instances easily. For most models, a full GPU instance is over-sized for inference. Also, it can be difficult to optimize the GPU, CPU, and memory needs of your deep learning application with a single instance type. Elastic Inference allows you to choose the instance type that is best suited to the overall CPU and memory needs of your application, and then separately configure the right amount of GPU acceleration required for inference.
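The savings are simple arithmetic: a right-sized CPU instance plus a small accelerator can cost a fraction of an always-on GPU instance. The hourly rates below are made up for illustration and are not real AWS pricing.

```python
# Illustrative cost comparison with made-up hourly rates (not real AWS
# pricing): a full GPU instance vs. a right-sized CPU instance plus an
# Elastic Inference accelerator sized just for the inference workload.

FULL_GPU_INSTANCE = 3.06    # hypothetical $/hour for a standalone GPU instance
CPU_INSTANCE = 0.40         # hypothetical $/hour for a CPU instance
ELASTIC_ACCELERATOR = 0.30  # hypothetical $/hour for the attached accelerator

combined = CPU_INSTANCE + ELASTIC_ACCELERATOR
savings = 1 - combined / FULL_GPU_INSTANCE
print(f"{savings:.0%}")  # 77% cheaper under these hypothetical rates
```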


Run models at the edge

AWS IoT Greengrass makes it easy to deploy models trained with Amazon SageMaker onto edge devices to run inference. With AWS IoT Greengrass, connected devices can run AWS Lambda functions, keep device data in sync, and communicate with other devices securely, even when not connected to the internet.

Supports TensorFlow and MXNet.