Amazon SageMaker

Machine learning for every developer and data scientist.

Amazon SageMaker provides every developer and data scientist with the ability to build, train, and deploy machine learning models quickly. Amazon SageMaker is a fully managed service that covers the entire machine learning workflow: label and prepare your data, choose an algorithm, train the model, tune and optimize it for deployment, make predictions, and take action. Your models get to production faster with much less effort and lower cost.

BUILD

Collect & prepare training data

Data labeling & pre-built notebooks for common problems

Choose & optimize your ML algorithm

Model & algorithm marketplace & built-in, high-performance algorithms

TRAIN

Set up & manage environments for training

One-click training on the highest performing infrastructure

Train & tune model

Train once, run anywhere & model optimization

DEPLOY

Deploy model in production

One-click deployment

Scale & manage the production environment

Fully managed with auto-scaling for 75% less

Featured customers

Sony, Intuit, PlayStation, NFL, State Farm, Liberty Mutual, Coinbase, Roche, Convoy, and Korean Air

Collect and prepare training data

Label training data fast

Amazon SageMaker Ground Truth helps you build and manage highly accurate training datasets quickly. Ground Truth offers easy access to public and private human labelers and provides them with pre-built workflows and interfaces for common labeling tasks. Additionally, Ground Truth learns from human labels to make high-quality automatic annotations that significantly lower labeling costs.

Learn more »
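As an illustration only, the sketch below starts a labeling job with the boto3 SageMaker client. Every ARN, S3 path, and task setting shown is a placeholder; a real job needs values from your own account, workforce, and task type.

```python
import boto3

sm = boto3.client("sagemaker")

# Minimal sketch of starting a Ground Truth labeling job; all ARNs, S3 paths,
# and task settings below are placeholders, not working values.
sm.create_labeling_job(
    LabelingJobName="my-image-labeling-job",
    LabelAttributeName="label",
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    InputConfig={
        "DataSource": {
            "S3DataSource": {"ManifestS3Uri": "s3://my-bucket/manifests/input.manifest"}
        }
    },
    OutputConfig={"S3OutputPath": "s3://my-bucket/labeling-output/"},
    HumanTaskConfig={
        "WorkteamArn": "arn:aws:sagemaker:us-east-1:123456789012:workteam/private-crowd/my-team",
        "UiConfig": {"UiTemplateS3Uri": "s3://my-bucket/templates/image-classification.liquid"},
        "PreHumanTaskLambdaArn": "arn:aws:lambda:us-east-1:123456789012:function:pre-labeling",
        "AnnotationConsolidationConfig": {
            "AnnotationConsolidationLambdaArn": "arn:aws:lambda:us-east-1:123456789012:function:consolidate-labels"
        },
        "TaskTitle": "Classify images",
        "TaskDescription": "Choose the label that best describes each image",
        "NumberOfHumanWorkersPerDataObject": 3,
        "TaskTimeLimitInSeconds": 300,
    },
)
```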
Hosted notebooks

Fully-managed Jupyter notebooks with dozens of pre-built workflows and examples make it easy to explore and visualize your training data.
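As a small illustrative sketch, a notebook instance can also be provisioned programmatically with the boto3 SageMaker client; the name, instance size, and role below are placeholders.

```python
import boto3

sm = boto3.client("sagemaker")

# Provision a managed Jupyter notebook instance (name, size, and role are placeholders).
sm.create_notebook_instance(
    NotebookInstanceName="my-exploration-notebook",
    InstanceType="ml.t3.medium",
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    VolumeSizeInGB=20,
)
```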

Choose and optimize your machine learning algorithm

Amazon SageMaker automatically configures and optimizes TensorFlow, Apache MXNet, PyTorch, Chainer, Scikit-learn, SparkML, Horovod, Keras, and Gluon. Commonly used machine learning algorithms are built-in and tuned for scale, speed, and accuracy with over a hundred additional pre-trained models and algorithms available in AWS Marketplace. You can also bring any other algorithm or framework by building it into a Docker container.
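A rough sketch with the SageMaker Python SDK follows; the role ARN, bucket, and algorithm version are placeholders. A built-in algorithm and a custom container go through the same Estimator interface.

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

# Resolve the region-specific container image for a built-in algorithm (XGBoost here);
# a custom algorithm would instead pass the URI of your own image in Amazon ECR.
image_uri = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.5-1")

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/model-output/",  # placeholder bucket
    sagemaker_session=session,
)
```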


Set up and manage training environments


One-click training

Begin training your model with a single click. Amazon SageMaker handles all of the underlying infrastructure and scales easily to petabyte-scale datasets.
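Continuing the hypothetical estimator sketch above, a training job amounts to pointing the estimator at your data in Amazon S3; the paths and hyperparameters here are illustrative.

```python
from sagemaker.inputs import TrainingInput

# 'estimator' is the XGBoost Estimator configured in the earlier sketch.
estimator.set_hyperparameters(objective="reg:squarederror", num_round=100)

# SageMaker provisions the training cluster, runs the job, and tears it down.
estimator.fit({
    "train": TrainingInput("s3://my-bucket/train/", content_type="text/csv"),
    "validation": TrainingInput("s3://my-bucket/validation/", content_type="text/csv"),
})
```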

Amazon EC2 P3 instances provide 8 NVIDIA Tesla V100 GPUs optimized for the fastest distributed machine learning in the cloud: the highest-performing GPU instance in the cloud, with 25 Gbps networking throughput, 64 scalable vCPUs (Intel® Xeon® Skylake with AVX-512), and 16 GB of memory per GPU.

The best place to run TensorFlow

AWS optimizations to TensorFlow provide near-linear scaling efficiency across hundreds of GPUs, so you can operate at cloud scale with little processing overhead and train more accurate, more sophisticated models in much less time.

Scaling efficiency with 256 GPUs: approximately 65% with stock TensorFlow versus approximately 90% with AWS-optimized TensorFlow.

Amazon SageMaker is the best place to run TensorFlow in the cloud: fully managed training and hosting, near-linear scaling across hundreds of GPUs, and 75% lower inference costs with Amazon Elastic Inference.
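A minimal sketch with the SDK's TensorFlow estimator, assuming a training script of your own; the entry point, versions, and instance settings are illustrative.

```python
from sagemaker.tensorflow import TensorFlow

tf_estimator = TensorFlow(
    entry_point="train.py",          # your TensorFlow training script (placeholder)
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_count=4,                # scale out across multiple GPU instances
    instance_type="ml.p3.16xlarge",  # 8 NVIDIA V100 GPUs per instance
    framework_version="2.11",        # illustrative version
    py_version="py39",
)
tf_estimator.fit("s3://my-bucket/tf-training-data/")
```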

Tune and optimize your model

Automatically tune your model

Automatic Model Tuning uses machine learning to quickly tune your model to be as accurate as possible. This capability lets you skip the tedious trial-and-error process of manually adjusting model parameters. Instead, over multiple training runs, Automatic Model Tuning performs hyperparameter optimization by discovering interesting features in your data and learning how those features interact to affect accuracy. You save days, or even weeks, of time spent maximizing the quality of your trained model.
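A sketch of how this looks with the SDK's HyperparameterTuner, reusing the hypothetical XGBoost estimator from the training sketch above; the metric name, ranges, and job counts are illustrative.

```python
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

tuner = HyperparameterTuner(
    estimator=estimator,                      # any configured Estimator
    objective_metric_name="validation:rmse",  # metric the tuner optimizes
    objective_type="Minimize",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    max_jobs=30,          # total training runs to explore
    max_parallel_jobs=3,  # runs launched concurrently
)
tuner.fit({
    "train": "s3://my-bucket/train/",
    "validation": "s3://my-bucket/validation/",
})
```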

Train once, run anywhere

Amazon SageMaker Neo lets you train a model once, and deploy it anywhere. Using machine learning, SageMaker Neo will automatically optimize any trained model built with a popular framework for the hardware platform you specify with no loss in accuracy. You can then deploy your model to EC2 instances and SageMaker instances, or any device at the edge that includes the Neo runtime, including AWS Greengrass devices.

Learn more »
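As a hedged sketch, the Python SDK exposes Neo through compile_model on a trained estimator (here the hypothetical tf_estimator from the earlier sketch); the compilation target, input shape, framework, and S3 path are placeholders.

```python
# 'tf_estimator' is the trained TensorFlow estimator from the earlier sketch;
# the target, input shape, and versions below are placeholders.
compiled_model = tf_estimator.compile_model(
    target_instance_family="ml_c5",            # or an edge target such as "jetson_nano"
    input_shape={"inputs": [1, 224, 224, 3]},
    output_path="s3://my-bucket/compiled/",
    framework="tensorflow",
    framework_version="2.11",
)
```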


Deploy and manage models in production

One-click deploy to production

Amazon SageMaker makes it easy to deploy your trained model in production with a single click so that you can start generating predictions (a process called inference) for real-time or batch data. Your model runs on auto-scaling clusters of Amazon SageMaker instances that are spread across multiple availability zones to deliver both high performance and high availability. Amazon SageMaker also includes built-in A/B testing capabilities to help you test your model and experiment with different versions to achieve the best results.
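In the Python SDK this is a single deploy call on the hypothetical estimator from the earlier sketches, and traffic splitting between model versions can be configured through production variants with the boto3 client; the names, instance types, and weights below are placeholders.

```python
# Deploy the trained estimator behind a real-time HTTPS endpoint.
predictor = estimator.deploy(
    initial_instance_count=2,
    instance_type="ml.m5.xlarge",
    endpoint_name="my-model-endpoint",  # placeholder
)

# For A/B testing, an endpoint config can route weighted traffic to two model versions.
import boto3

sm = boto3.client("sagemaker")
sm.create_endpoint_config(
    EndpointConfigName="my-ab-test-config",
    ProductionVariants=[
        {"VariantName": "model-a", "ModelName": "my-model-a", "InitialInstanceCount": 1,
         "InstanceType": "ml.m5.xlarge", "InitialVariantWeight": 0.9},
        {"VariantName": "model-b", "ModelName": "my-model-b", "InitialInstanceCount": 1,
         "InstanceType": "ml.m5.xlarge", "InitialVariantWeight": 0.1},
    ],
)
```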

Run models at the edge

AWS Greengrass makes it easy to deploy models trained with Amazon SageMaker onto edge devices to run inference. With AWS Greengrass, connected devices can run AWS Lambda functions, keep device data in sync, and communicate with other devices securely, even when not connected to the internet.


Reduce your deep learning inference costs by up to 75% by using Amazon Elastic Inference to attach elastic GPU acceleration to your Amazon SageMaker instances. For most models, a full GPU instance is over-sized for inference, and it can be difficult to optimize the GPU, CPU, and memory needs of your deep learning application with a single instance type. Elastic Inference lets you choose the instance type that is best suited to the overall CPU and memory needs of your application, and then separately configure the right amount of GPU acceleration required for inference.

Learn more »
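In the SDK, the accelerator is attached at deploy time; the sketch below reuses the hypothetical estimator from earlier, and the instance and accelerator sizes are placeholders.

```python
# Attach a right-sized Elastic Inference accelerator to a CPU-backed endpoint.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",        # sized for the application's CPU and memory needs
    accelerator_type="ml.eia2.medium",  # fractional GPU acceleration for inference
)
```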


Supports TensorFlow and Apache MXNet.


Build what’s next with fully-managed reinforcement learning


Use reinforcement learning (RL) to build sophisticated models that can achieve specific outcomes without the need for pre-labeled training data. RL is useful for situations where there isn't a "right" answer to learn from, but there is an optimal outcome, such as learning to drive a car or making profitable financial trades. Rather than looking at historical data, RL algorithms learn by taking actions in a simulator where rewards and penalties help direct the model toward the desired behavior.

Amazon SageMaker RL includes built-in, fully-managed RL algorithms. SageMaker supports RL in multiple frameworks, including TensorFlow and MXNet, as well as frameworks designed from the ground up for reinforcement learning, such as Intel Coach and Ray RLlib.

Amazon SageMaker RL also supports multiple RL environments, including full 2D and 3D physics environments, commercial simulation environments such as MATLAB and Simulink, and anything that supports the open-source OpenAI Gym interface, including custom-developed environments. Additionally, SageMaker RL lets you train using virtual 3D environments built in Amazon Sumerian and AWS RoboMaker. This means you can model everything from advertising and financial systems to industrial controls, robotics, and autonomous vehicles.
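A minimal sketch using the SDK's RLEstimator with the Coach toolkit on TensorFlow; the training script, toolkit version, and instance settings are illustrative placeholders.

```python
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework

rl_estimator = RLEstimator(
    entry_point="train_coach.py",       # your RL training script (placeholder)
    toolkit=RLToolkit.COACH,
    toolkit_version="0.11.1",           # illustrative version
    framework=RLFramework.TENSORFLOW,
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
)
rl_estimator.fit()
```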

Open and flexible

Machine learning your way

Machine learning technology moves fast, and you should stay flexible with access to a broad set of frameworks and tools. With Amazon SageMaker, you can use the built-in containers for any popular framework or bring your preferred framework. Either way, Amazon SageMaker will fully manage the underlying infrastructure required to build, train, and deploy your models.

Better edge performance

The capabilities of SageMaker Neo are also available for every developer through the open source Neo project. We believe that making it possible for anyone to run models anywhere is a necessary step to allow machine learning to realize its full potential. By contributing to the open source effort, hardware vendors can improve Neo with new optimizations and advance the overall hardware ecosystem for machine learning.

SageMaker fits your workflow

Under the hood, Amazon SageMaker is made up of separate components: Ground Truth, Notebooks, Training, Neo, and Hosting. These components are designed to work together to provide an end-to-end machine learning service. However, they can also be used independently to supplement existing machine learning workflows or to support models that run in your data center or at the edge.

Learn and accelerate


AWS DeepRacer

A fully autonomous, 1/18th scale race car, packed full of everything you need to learn about reinforcement learning through autonomous driving.

Learn more »

AWS DeepLens

Learn computer vision through projects, tutorials, and real-world, hands-on exploration with the world’s first deep learning enabled video camera for developers.

Learn more »

AWS Machine Learning Training & Certification

Structured machine learning courses from AWS Machine Learning University, based on the same material used to train Amazon's own developers, combining foundational knowledge with real-world application.

Learn more »

Amazon ML Solutions Lab

Amazon ML Solutions Lab pairs your team with machine learning experts from Amazon. It combines hands-on educational workshops with brainstorming sessions and professional advisory services to help you ‘work backwards’ from business challenges, and then go step-by-step through the process of getting a model into production. Afterward, you will be able to take what you have learned and use it elsewhere in your organization to pursue additional opportunities.

Learn more »