Machine learning for every developer and data scientist.
Amazon SageMaker gives every developer and data scientist the ability to build, train, and deploy machine learning models quickly. It is a fully managed service that covers the entire machine learning workflow: label and prepare your data, choose an algorithm, train the model, tune and optimize it for deployment, make predictions, and take action. Your models get to production faster with much less effort and at lower cost.
Collect and prepare training data
Label training data fast
Amazon SageMaker Ground Truth helps you build and manage highly accurate training datasets quickly. Ground Truth offers easy access to public and private human labelers and provides them with pre-built workflows and interfaces for common labeling tasks. Additionally, Ground Truth will learn from human labels to make high quality, automatic annotations to significantly lower labeling costs.
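The auto-labeling behavior described above is essentially an active-learning loop: a model trained on human-provided labels annotates the examples it is confident about and routes the rest back to human labelers. This is a simplified, hypothetical sketch of that routing step (the function name, threshold, and data are illustrative, not Ground Truth's actual API):

```python
def route_for_labeling(predictions, confidence_threshold=0.95):
    """Split model predictions into auto-labeled and human-review queues.

    predictions: list of (item_id, label, confidence) tuples produced by
    a model trained on the labels humans have supplied so far.
    """
    auto_labeled, needs_human = [], []
    for item_id, label, confidence in predictions:
        if confidence >= confidence_threshold:
            # High confidence: accept the machine annotation automatically.
            auto_labeled.append((item_id, label))
        else:
            # Low confidence: send to a human labeler; the resulting label
            # feeds back into the next round of model training.
            needs_human.append(item_id)
    return auto_labeled, needs_human

predictions = [
    ("img-001", "cat", 0.99),
    ("img-002", "dog", 0.97),
    ("img-003", "cat", 0.60),
]
auto, human = route_for_labeling(predictions)
```

Each human-reviewed label improves the model, which in turn raises the share of items that clear the confidence threshold, which is how labeling costs fall over time.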
Amazon SageMaker provides fully managed Jupyter notebooks that you can use in the cloud or on your local machine to explore and visualize your data and develop your model. In addition to starting from scratch, you can choose from dozens of pre-built notebooks that you can use as-is, or modify to suit your needs, making it easy to explore and visualize your training data quickly. Solutions are available for many common problems, such as recommendations and personalization, fraud detection, forecasting, image classification, churn prediction, customer targeting, log processing and anomaly detection, and speech-to-text.
Choose and optimize your machine learning algorithm
Amazon SageMaker automatically configures and optimizes TensorFlow, Apache MXNet, PyTorch, Chainer, Scikit-learn, SparkML, Horovod, Keras, and Gluon. Commonly used machine learning algorithms are built-in and tuned for scale, speed, and accuracy with over 200 additional pre-trained models and algorithms available in AWS Marketplace. You can also bring any other algorithm or framework by building it into a Docker container.
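Bringing your own algorithm in a Docker container follows SageMaker's container contract: the service starts the image with the argument `train` for a training job and `serve` for hosting, with input data and model artifacts exchanged under `/opt/ml`. A minimal sketch of such a container's entrypoint might look like this (the `train`/`serve` bodies are placeholders for your own code):

```python
import sys

# Paths follow the SageMaker container convention: input channels appear
# under /opt/ml/input/data/, and anything written to /opt/ml/model is
# uploaded to S3 as the training job's model artifact.
TRAINING_DATA_DIR = "/opt/ml/input/data/training"
MODEL_DIR = "/opt/ml/model"

def train():
    # Your framework-specific training code: read data from
    # TRAINING_DATA_DIR, fit a model, write artifacts to MODEL_DIR.
    return "trained"

def serve():
    # Start your inference server here; for hosting, it must answer
    # POST /invocations and GET /ping on port 8080.
    return "serving"

def dispatch(mode):
    """Route the container's first argument to the right behavior."""
    if mode == "train":
        return train()
    if mode == "serve":
        return serve()
    raise ValueError(f"unknown mode: {mode!r}")

if __name__ == "__main__":
    dispatch(sys.argv[1])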
Set up and manage training environments
Begin training your model with a single click. Amazon SageMaker handles all of the underlying infrastructure, scaling easily to petabyte-sized datasets.
The fastest distributed machine learning in the cloud.
Amazon EC2 P3 instances provide up to 8 NVIDIA Tesla V100 GPUs.
The best place to run TensorFlow
AWS’ TensorFlow optimizations provide near-linear scaling efficiency across hundreds of GPUs, so you can train more accurate, more sophisticated models in much less time without significant processing overhead.
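"Near-linear scaling efficiency" has a precise meaning: the fraction of the ideal N-times speedup that N GPUs actually deliver. A short sketch of the arithmetic, using made-up throughput numbers for illustration:

```python
def scaling_efficiency(measured_rate, n_gpus, single_gpu_rate):
    """Fraction of ideal linear speedup actually achieved.

    measured_rate: observed throughput (e.g. images/sec) on n_gpus.
    single_gpu_rate: throughput of one GPU running alone.
    """
    ideal_rate = n_gpus * single_gpu_rate  # perfect linear scaling
    return measured_rate / ideal_rate

# Illustrative (made-up) throughput numbers:
single = 200.0        # images/sec on 1 GPU
measured = 46_080.0   # images/sec measured on 256 GPUs
eff = scaling_efficiency(measured, 256, single)
# Near-linear scaling means eff stays close to 1.0 as n_gpus grows;
# communication and synchronization overhead are what pull it below 1.0.
```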
Tune and optimize your model
Automatically tune your model
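Automatic model tuning searches the space of hyperparameter values for the combination that optimizes a chosen objective metric. SageMaker's tuner is more sophisticated than this, but as a rough illustration of the idea, here is a minimal random-search sketch (the function names and the toy objective are hypothetical):

```python
import random

def tune(objective, search_space, n_trials=20, seed=0):
    """Find hyperparameters that minimize `objective` by random search.

    search_space: dict mapping a hyperparameter name to a (low, high)
    range to sample from uniformly.
    """
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_trials):
        # Sample one candidate configuration from the search space.
        params = {name: rng.uniform(low, high)
                  for name, (low, high) in search_space.items()}
        score = objective(params)  # e.g. validation loss of a training run
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective: pretend validation loss is minimized at learning_rate=0.1.
space = {"learning_rate": (0.001, 1.0)}
params, score = tune(lambda p: (p["learning_rate"] - 0.1) ** 2, space)
```

In a real tuning job, each trial is a full training run, so smarter search strategies that use earlier results to pick the next candidate pay off quickly.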
Train once, run anywhere
Amazon SageMaker Neo lets you train a model once and deploy it anywhere. Using machine learning, SageMaker Neo automatically optimizes any trained model built with a popular framework for the hardware platform you specify, with no loss in accuracy. You can then deploy your model to EC2 instances, SageMaker instances, or any edge device that runs the Neo runtime, including AWS IoT Greengrass devices.
Both the Neo compiler and runtime are also available as open source software.
Deploy and manage models in production
One-click deploy to production
Amazon SageMaker makes it easy to deploy your trained model in production with a single click so that you can start generating predictions (a process called inference) for real-time or batch data. Your model runs on auto-scaling clusters of Amazon SageMaker instances that are spread across multiple Availability Zones to deliver both high performance and high availability. Amazon SageMaker also includes built-in A/B testing capabilities to help you test your model and experiment with different versions to achieve the best results.
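A/B testing between model versions works by splitting endpoint traffic across production variants according to assigned weights. A simplified sketch of that weighted routing (variant names and weights here are illustrative):

```python
import random

def pick_variant(variants, rng=random):
    """Route one request to a production variant by traffic weight.

    variants: dict mapping variant name to its share of traffic.
    """
    names = list(variants)
    weights = [variants[name] for name in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Send 90% of traffic to the current model and 10% to the challenger,
# then compare metrics before shifting more weight to the winner.
variants = {"model-v1": 0.9, "model-v2": 0.1}
counts = {"model-v1": 0, "model-v2": 0}
rng = random.Random(0)
for _ in range(10_000):
    counts[pick_variant(variants, rng)] += 1
```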
Amazon Elastic Inference lets you attach elastic GPU acceleration to your Amazon SageMaker instances, reducing your deep learning inference costs by up to 75%. For most models, a full GPU instance is over-sized for inference, and it can be difficult to balance the GPU, CPU, and memory needs of your deep learning application with a single instance type. Elastic Inference allows you to choose the instance type that is best suited to the overall CPU and memory needs of your application, and then separately configure the right amount of GPU acceleration required for inference.
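The savings come from simple arithmetic: instead of paying for a full GPU instance, you pay for a smaller CPU instance plus just enough attached acceleration. A sketch with illustrative hourly rates (these numbers are made up for the example, not actual AWS pricing):

```python
def hourly_inference_cost(instance_rate, accelerator_rate=0.0):
    """Total hourly cost: the host instance plus any attached accelerator."""
    return instance_rate + accelerator_rate

# Illustrative (not actual) hourly rates:
full_gpu = hourly_inference_cost(3.06)         # dedicated GPU instance
elastic = hourly_inference_cost(0.10, 0.65)    # small CPU instance + accelerator
savings = 1 - elastic / full_gpu               # fraction saved per hour
```

The actual savings depend on the instance and accelerator sizes you choose, which is why the figure is stated as "up to 75%".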
Run models at the edge
AWS IoT Greengrass makes it easy to deploy models trained with Amazon SageMaker onto edge devices to run inference. With AWS IoT Greengrass, connected devices can run AWS Lambda functions, keep device data in sync, and communicate with other devices securely, even when not connected to the internet.