Amazon SageMaker Features
Machine learning for every data scientist and developer
Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to prepare, build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models. SageMaker provides all of the components used for machine learning in a single toolset, so models get to production faster with much less effort and at lower cost.
Collect and Prepare Training Data
Using Amazon SageMaker Data Wrangler, you can quickly and easily prepare data and create model features. You can connect to data sources and use built-in data transformations to engineer model features.
Amazon SageMaker Clarify provides data to improve model quality through bias detection during data preparation and after training. SageMaker Clarify also provides model explainability reports so stakeholders can see how and why models make predictions.
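As an illustration, here is a minimal sketch of a pre-training bias check with the SageMaker Python SDK; the bucket, column names, and facet are placeholders for your own dataset.
```python
from sagemaker import clarify, Session, get_execution_role

session = Session()
role = get_execution_role()  # assumes this runs from a SageMaker notebook

clarify_processor = clarify.SageMakerClarifyProcessor(
    role=role, instance_count=1, instance_type="ml.m5.xlarge", sagemaker_session=session
)

# Placeholder S3 paths and column names -- replace with your own dataset.
data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train.csv",
    s3_output_path="s3://my-bucket/clarify-output",
    label="target",
    headers=["age", "income", "gender", "target"],
    dataset_type="text/csv",
)
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],   # positive outcome value
    facet_name="gender",             # attribute to check for bias
)

# Computes pre-training bias metrics and writes a report to S3.
clarify_processor.run_pre_training_bias(
    data_config=data_config, data_bias_config=bias_config, methods="all"
)
```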
Amazon SageMaker allows you to operate in a fully secure ML environment from day one. You can use a comprehensive set of security features to help support a broad range of industry regulations.
Amazon SageMaker Ground Truth makes it easy to build highly accurate training datasets for machine learning. Get started with labeling your data in minutes through the SageMaker Ground Truth console using custom or built-in data labeling workflows including 3D point clouds, video, images, and text.
Amazon SageMaker Feature Store is a purpose-built feature store for ML serving features in both real-time and in batch. You can securely store, discover, and share features so you get the same features consistently both during training and during inference, saving months of development effort.
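A minimal sketch of creating and populating a feature group with the SageMaker Python SDK, assuming a small pandas DataFrame; the group name and S3 location are placeholders.
```python
import time
import pandas as pd
from sagemaker import Session, get_execution_role
from sagemaker.feature_store.feature_group import FeatureGroup

session = Session()
role = get_execution_role()

# Placeholder DataFrame: one row per customer, with a required event-time column.
df = pd.DataFrame({
    "customer_id": [1, 2],
    "total_spend": [120.5, 87.0],
    "event_time": [time.time()] * 2,
})

feature_group = FeatureGroup(name="customers", sagemaker_session=session)
feature_group.load_feature_definitions(data_frame=df)   # infer feature types from dtypes
feature_group.create(
    s3_uri="s3://my-bucket/feature-store",   # offline store location (placeholder)
    record_identifier_name="customer_id",
    event_time_feature_name="event_time",
    role_arn=role,
    enable_online_store=True,                # also serve features in real time
)
feature_group.ingest(data_frame=df, max_workers=3, wait=True)
```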
Amazon SageMaker Processing extends the ease, scalability, and reliability of SageMaker to running data processing workloads. SageMaker Processing allows you to connect to existing storage, spin up the resources required to run your job, save the output to persistent storage, and provides logs and metrics.
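For example, a processing job might look like the following sketch, where preprocess.py is your own script and the S3 paths are placeholders.
```python
from sagemaker import get_execution_role
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.processing import ProcessingInput, ProcessingOutput

role = get_execution_role()

processor = SKLearnProcessor(
    framework_version="0.23-1",
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
)

# preprocess.py is your own script; SageMaker spins up the instance, runs it,
# copies the output back to S3, and tears the resources down.
processor.run(
    code="preprocess.py",
    inputs=[ProcessingInput(source="s3://my-bucket/raw",
                            destination="/opt/ml/processing/input")],
    outputs=[ProcessingOutput(source="/opt/ml/processing/output",
                              destination="s3://my-bucket/processed")],
)
```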
Build Models
Amazon SageMaker Studio Notebooks are one-click Jupyter notebooks and the underlying compute resources are fully elastic, so you can easily dial up or down the available resources. Notebooks are shared with a single click so colleagues get the same notebook, saved in the same place.
Amazon SageMaker also offers over 15 built-in algorithms available in pre-built container images that you can use to quickly train models and run inference.
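A minimal sketch of training with the built-in XGBoost algorithm via the SageMaker Python SDK; the bucket paths and hyperparameters are illustrative.
```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = sagemaker.get_execution_role()

# Look up the pre-built container image for the XGBoost built-in algorithm.
image_uri = sagemaker.image_uris.retrieve(
    framework="xgboost", region=session.boto_region_name, version="1.2-1"
)

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/output",   # placeholder
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=100)
estimator.fit({"train": TrainingInput("s3://my-bucket/train.csv", content_type="csv")})
```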
Amazon SageMaker JumpStart helps you quickly get started with ML using pre-built solutions that can be deployed with just a few clicks. SageMaker JumpStart also supports one-click deployment and fine-tuning of more than 150 popular open source models.
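Assuming a recent version of the SageMaker Python SDK that includes the JumpStartModel class, deployment can be sketched as follows; the model ID is a placeholder for any ID from the JumpStart catalog.
```python
from sagemaker.jumpstart.model import JumpStartModel

# model_id is a placeholder -- pick any model ID from the JumpStart catalog in Studio.
model = JumpStartModel(model_id="<jumpstart-model-id>")
predictor = model.deploy()   # creates a real-time endpoint for the pre-trained model
```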
Amazon SageMaker Autopilot automatically builds, trains, and tunes the best machine learning models based on your data, while allowing you to maintain full control and visibility. You can then deploy the model to production with just one click, or iterate to improve model quality.
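A hedged sketch of launching an Autopilot job from the SageMaker Python SDK; the target column and S3 path are placeholders.
```python
from sagemaker import get_execution_role
from sagemaker.automl.automl import AutoML

role = get_execution_role()

automl = AutoML(
    role=role,
    target_attribute_name="target",   # column to predict (placeholder)
    max_candidates=10,                # limit how many candidate pipelines Autopilot explores
)
automl.fit(inputs="s3://my-bucket/train.csv", wait=True, logs=False)

best = automl.best_candidate()
print(best["CandidateName"])

# Deploy the best candidate behind a real-time endpoint.
predictor = automl.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```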
Amazon SageMaker is optimized for many popular deep learning frameworks such as TensorFlow, Apache MXNet, PyTorch, and more. Frameworks are always up to date with the latest version and are optimized for performance on AWS. You don't need to manually set up these frameworks and can use them within the built-in containers.
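For example, a framework training job with the pre-built PyTorch container might look like the sketch below; train.py is your own script and the version strings are illustrative, so use any version supported by the pre-built containers.
```python
from sagemaker import get_execution_role
from sagemaker.pytorch import PyTorch

role = get_execution_role()

estimator = PyTorch(
    entry_point="train.py",        # your own training script
    role=role,
    framework_version="1.8.1",     # illustrative version
    py_version="py36",
    instance_count=1,
    instance_type="ml.p3.2xlarge",
)
estimator.fit({"training": "s3://my-bucket/train"})
```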
Amazon SageMaker enables you to test and prototype locally. The Apache MXNet and TensorFlow Docker containers used in SageMaker are available on GitHub. You can download these containers and use the Python SDK to test scripts before deploying to training or hosting.
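Building on the PyTorch sketch above, local mode swaps the managed instance for your own machine (Docker required); the role ARN and data path are placeholders.
```python
from sagemaker.pytorch import PyTorch

# Local mode runs the same container on your machine, which makes it easy
# to debug train.py before launching managed training or hosting.
local_estimator = PyTorch(
    entry_point="train.py",
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role ARN
    framework_version="1.8.1",
    py_version="py36",
    instance_count=1,
    instance_type="local",          # use 'local_gpu' if a GPU is available
)
local_estimator.fit({"training": "file://./data"})   # local files instead of S3
```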
Amazon SageMaker supports reinforcement learning in addition to traditional supervised and unsupervised learning. SageMaker has built-in, fully-managed reinforcement learning algorithms, including some of the newest and best performing in the academic literature.
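A rough sketch of an RL training job with the SageMaker Python SDK's RLEstimator; the script name and toolkit version are illustrative, so check the RL container versions available in your Region.
```python
from sagemaker import get_execution_role
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework

role = get_execution_role()

# train_cartpole.py is your own RL training script; the toolkit version string
# is illustrative -- verify it against the RL images available to you.
rl_estimator = RLEstimator(
    entry_point="train_cartpole.py",
    toolkit=RLToolkit.COACH,
    toolkit_version="1.0.0",
    framework=RLFramework.TENSORFLOW,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
)
rl_estimator.fit()
```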
Train and Tune Models
Amazon SageMaker Experiments helps you track iterations to ML models by capturing the input parameters, configurations, and results, and storing them as ‘experiments’. In SageMaker Studio you can browse active experiments, search for previous experiments, review them along with their results, and compare experiment results.
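A minimal sketch using the sagemaker-experiments package, assuming an estimator such as the one in the built-in algorithm sketch above; all names are placeholders.
```python
from smexperiments.experiment import Experiment
from smexperiments.trial import Trial

# Requires the sagemaker-experiments package (pip install sagemaker-experiments).
experiment = Experiment.create(
    experiment_name="churn-prediction",
    description="Trying different feature sets and hyperparameters",
)
trial = Trial.create(trial_name="xgboost-baseline",
                     experiment_name=experiment.experiment_name)

# Associate a training job with the trial so its parameters and metrics are tracked.
estimator.fit(
    {"train": "s3://my-bucket/train.csv"},
    experiment_config={
        "ExperimentName": experiment.experiment_name,
        "TrialName": trial.trial_name,
        "TrialComponentDisplayName": "Training",
    },
)
```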
Amazon SageMaker Debugger captures metrics and profiles training jobs in real-time so you can correct performance problems quickly before the model is deployed to production.
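For example, built-in Debugger and Profiler rules can be attached to a training job as sketched below, reusing the image_uri and role from the earlier sketches.
```python
from sagemaker.estimator import Estimator
from sagemaker.debugger import Rule, ProfilerRule, rule_configs

# Built-in rules watch training tensors and system metrics while the job runs.
rules = [
    Rule.sagemaker(rule_configs.loss_not_decreasing()),
    ProfilerRule.sagemaker(rule_configs.ProfilerReport()),
]

# image_uri and role are assumed from the earlier sketches; findings surface
# in SageMaker Studio and in the job's Debugger output.
estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    rules=rules,
)
```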
Amazon SageMaker provides Managed Spot Training to help you reduce training costs by up to 90%. Training jobs are automatically run when compute capacity becomes available and are made resilient to interruptions caused by changes in capacity.
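A sketch of enabling Managed Spot Training on an estimator; the image, role, and checkpoint location are assumed placeholders.
```python
from sagemaker.estimator import Estimator

# Spot capacity can be interrupted, so enable checkpointing and allow extra
# wall-clock time (max_wait >= max_run) for SageMaker to resume the job.
estimator = Estimator(
    image_uri=image_uri,                      # pre-built algorithm or framework image
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    use_spot_instances=True,
    max_run=3600,                             # seconds of actual training time
    max_wait=7200,                            # total time including waits for capacity
    checkpoint_s3_uri="s3://my-bucket/checkpoints",   # placeholder
)
```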
Amazon SageMaker can automatically tune your model by adjusting thousands of combinations of algorithm parameters to arrive at the most accurate predictions the model is capable of producing, saving weeks of effort. Automatic model tuning uses machine learning to quickly tune your model.
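A minimal tuning sketch, assuming the XGBoost estimator from the earlier example; the metric name and ranges must match what your algorithm actually emits, and the S3 paths are placeholders.
```python
from sagemaker.inputs import TrainingInput
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

tuner = HyperparameterTuner(
    estimator=estimator,                      # assumed from the earlier sketch
    objective_metric_name="validation:auc",   # metric reported by the algorithm
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    max_jobs=20,            # total training jobs to launch
    max_parallel_jobs=3,    # how many run at once
)
tuner.fit({
    "train": TrainingInput("s3://my-bucket/train.csv", content_type="csv"),
    "validation": TrainingInput("s3://my-bucket/validation.csv", content_type="csv"),
})
print(tuner.best_training_job())
```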
When it's time to train, specify the location of your data, indicate the type of SageMaker instances, and get started with a single click. SageMaker sets up a distributed compute cluster, performs the training, outputs results to Amazon S3, and tears down the cluster.
Amazon SageMaker makes it faster to perform distributed training. SageMaker helps split your data across multiple GPUs to achieve near-linear scaling efficiency, and helps split your model across multiple GPUs by automatically profiling and partitioning it with fewer than 10 lines of code.
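A sketch of enabling the SageMaker data parallelism library on a PyTorch estimator; the script, versions, and role are placeholders, and a supported multi-GPU instance type is required.
```python
from sagemaker import get_execution_role
from sagemaker.pytorch import PyTorch

role = get_execution_role()

# The data-parallel library shards each batch across GPUs; it requires a
# supported multi-GPU instance type such as ml.p3.16xlarge.
estimator = PyTorch(
    entry_point="train.py",
    role=role,
    framework_version="1.8.1",
    py_version="py36",
    instance_count=2,
    instance_type="ml.p3.16xlarge",
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)
estimator.fit({"training": "s3://my-bucket/train"})
```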
Deploy Models to Production
Build fully automated workflows for the complete machine learning (ML) lifecycle spanning data preparation, model training, and model deployment with Amazon SageMaker Pipelines.
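A minimal two-step pipeline sketch, assuming the processor and estimator objects from the earlier examples; the step names, script, and S3 paths are placeholders.
```python
from sagemaker.inputs import TrainingInput
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import ProcessingStep, TrainingStep

# processor and estimator are assumed to be defined as in the earlier sketches.
prep_step = ProcessingStep(
    name="PrepareData",
    processor=processor,
    code="preprocess.py",
    inputs=[ProcessingInput(source="s3://my-bucket/raw",
                            destination="/opt/ml/processing/input")],
    outputs=[ProcessingOutput(output_name="train",
                              source="/opt/ml/processing/output")],
)
train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput(
        prep_step.properties.ProcessingOutputConfig.Outputs["train"].S3Output.S3Uri,
        content_type="csv",
    )},
)

pipeline = Pipeline(name="churn-pipeline", steps=[prep_step, train_step])
pipeline.upsert(role_arn=role)   # create or update the pipeline definition
pipeline.start()
```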
Amazon SageMaker Model Monitor automatically detects concept drift in deployed models and provides detailed alerts that help identify the problem so you can improve model quality over time. All models trained in SageMaker automatically emit key metrics that can be collected and viewed in SageMaker Studio.
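A sketch of setting up data-quality monitoring for an existing endpoint; the endpoint name, S3 paths, and role are placeholders.
```python
from sagemaker.model_monitor import DefaultModelMonitor, CronExpressionGenerator
from sagemaker.model_monitor.dataset_format import DatasetFormat

monitor = DefaultModelMonitor(
    role=role,                      # assumed from the earlier sketches
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

# Baseline statistics and constraints are computed from the training data.
monitor.suggest_baseline(
    baseline_dataset="s3://my-bucket/train.csv",       # placeholder
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-bucket/monitor/baseline",
)

# Compare captured endpoint traffic against the baseline every hour.
monitor.create_monitoring_schedule(
    monitor_schedule_name="churn-data-quality",
    endpoint_input="my-endpoint-name",                 # placeholder endpoint
    output_s3_uri="s3://my-bucket/monitor/reports",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```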
Many machine learning applications require humans to review low confidence predictions to ensure the results are correct. Amazon Augmented AI provides built-in human review workflows for common machine learning use cases.
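A minimal sketch of starting a human review loop with the boto3 A2I runtime client; the flow definition ARN and input payload are placeholders, and the human review workflow itself is created beforehand in the Amazon A2I console.
```python
import json
import boto3

a2i = boto3.client("sagemaker-a2i-runtime")

# Send a low-confidence prediction to the human review workflow (placeholders).
response = a2i.start_human_loop(
    HumanLoopName="review-low-confidence-123",
    FlowDefinitionArn="arn:aws:sagemaker:us-east-1:123456789012:flow-definition/my-review-flow",
    HumanLoopInput={
        "InputContent": json.dumps({
            "prediction": "approved",
            "confidence": 0.42,          # low-confidence result to be reviewed
        })
    },
)
print(response["HumanLoopArn"])
```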
Amazon SageMaker Batch Transform eliminates the need to resize large datasets for batch processing jobs. Batch Transform allows you to run predictions on large or small batch datasets using a simple API.
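For example, a batch transform job can be launched from a trained estimator as sketched below; the S3 paths are placeholders and the estimator is assumed from the earlier examples.
```python
# estimator is any trained SageMaker estimator, e.g. from the earlier sketches.
transformer = estimator.transformer(
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/batch-output",   # placeholder
)
transformer.transform(
    data="s3://my-bucket/batch-input",           # placeholder
    content_type="text/csv",
    split_type="Line",       # send one CSV line per request
)
transformer.wait()
```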
You can use Amazon SageMaker alongside Kubernetes and Kubeflow for orchestration and pipeline management. SageMaker Operators for Kubernetes let you train and deploy models in SageMaker from Kubernetes, and SageMaker Components for Kubeflow Pipelines enable you to use SageMaker in your ML pipelines without needing to manage Kubernetes infrastructure for ML.
Amazon Elastic Inference allows you to attach just the right amount of GPU-powered inference acceleration to any Amazon SageMaker instance type with no code changes.
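A short sketch, assuming an existing SageMaker Model object; the accelerator size is illustrative.
```python
# accelerator_type attaches an Elastic Inference accelerator to the endpoint;
# model is any existing SageMaker Model object (assumption).
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    accelerator_type="ml.eia2.medium",
)
```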
Amazon SageMaker makes it easy to deploy your trained model into production with a single click so that you can start generating predictions for real-time or batch data. You can one-click deploy your model onto auto-scaling instances across multiple availability zones for high redundancy.
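A deployment sketch, assuming the trained estimator from the earlier examples; the payload format depends on your algorithm (CSV here).
```python
from sagemaker.serializers import CSVSerializer

# Deploy the trained estimator to a real-time endpoint; with more than one
# instance, SageMaker spreads them across Availability Zones.
predictor = estimator.deploy(
    initial_instance_count=2,
    instance_type="ml.m5.xlarge",
)

# Invoke the endpoint (payload values are placeholders).
predictor.serializer = CSVSerializer()
print(predictor.predict([[42, 1, 0.5]]))

# Clean up when finished.
predictor.delete_endpoint()
```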
Amazon SageMaker provides a scalable and cost-effective way to deploy large numbers of custom machine learning models. SageMaker Multi-Model endpoints enable you to deploy multiple models with a single click on a single endpoint and serve them using a single serving container.
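A sketch of a multi-model endpoint, assuming an existing Model object and session; the S3 prefix, artifact names, and payload are placeholders.
```python
from sagemaker.multidatamodel import MultiDataModel

# model provides the shared serving container; artifacts for every hosted model
# live under the same S3 prefix (placeholders).
mme = MultiDataModel(
    name="my-multi-model",
    model_data_prefix="s3://my-bucket/models/",
    model=model,
    sagemaker_session=session,
)
predictor = mme.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")

# Add another model to the same endpoint without redeploying it.
mme.add_model(model_data_source="s3://my-bucket/staging/model-a.tar.gz",
              model_data_path="model-a.tar.gz")

# Route a request to a specific model on the shared endpoint (payload is a placeholder).
prediction = predictor.predict(data=payload, target_model="model-a.tar.gz")
```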
Amazon SageMaker enables you to deploy Inference Pipelines so you can pass raw input data and execute pre-processing, predictions, and post-processing on real-time and batch inference. You can build feature data processing and feature engineering pipelines, and deploy these as part of the Inference Pipelines.
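A sketch of an inference pipeline, assuming two existing Model objects (for example, a preprocessing SKLearn model followed by an XGBoost model); the names are placeholders.
```python
from sagemaker import get_execution_role
from sagemaker.pipeline import PipelineModel

role = get_execution_role()

# preprocess_model and xgb_model are assumed SageMaker Model objects: one that
# transforms raw features and one that makes the prediction.
pipeline_model = PipelineModel(
    name="preprocess-then-predict",
    role=role,
    models=[preprocess_model, xgb_model],
)

# A single endpoint runs the containers in sequence for every request.
predictor = pipeline_model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```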
Machine Learning at the Edge
With Amazon SageMaker Neo, you can train your ML models once and deploy them in the cloud or at the edge. SageMaker Neo uses ML to optimize a trained model to run up to twice as fast while consuming less than one-tenth of the memory.
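A compilation sketch, assuming the trained estimator from the earlier examples; the target, input shape, and framework version are illustrative.
```python
# Compile a trained estimator's model for a specific target with SageMaker Neo.
compiled_model = estimator.compile_model(
    target_instance_family="ml_c5",             # or an edge target such as 'jetson_nano'
    input_shape={"data": [1, 3, 224, 224]},     # name and shape of the model input
    output_path="s3://my-bucket/compiled",      # placeholder
    framework="mxnet",                          # framework and version are illustrative
    framework_version="1.8",
)
predictor = compiled_model.deploy(initial_instance_count=1, instance_type="ml.c5.xlarge")
```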
Amazon SageMaker Edge Manager makes it easy to monitor and manage models running on edge devices. SageMaker Edge Manager automatically samples data from devices and sends it securely to the cloud for monitoring, labeling, and retraining so you can continuously improve model quality.
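A hedged sketch of packaging a Neo-compiled model for edge deployment with boto3; every name, version, and ARN below is a placeholder.
```python
import boto3

sm = boto3.client("sagemaker")

# Package a Neo-compiled model for deployment to edge devices (all values are placeholders).
sm.create_edge_packaging_job(
    EdgePackagingJobName="my-edge-packaging-job",
    CompilationJobName="my-neo-compilation-job",
    ModelName="my-model",
    ModelVersion="1.0",
    RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",
    OutputConfig={"S3OutputLocation": "s3://my-bucket/edge-packages"},
)
```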