Accelerate model development

Provision standardized data science environments

Standardizing ML development environments increases data scientist productivity and ultimately the pace of innovation by making it easy to launch new projects, rotate data scientists across projects, and implement ML best practices. Amazon SageMaker Projects offers templates to quickly provision standardized data scientist environments with well-tested and up-to-date tools and libraries, source control repositories, boilerplate code, and CI/CD pipelines.
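Projects can also be provisioned programmatically. As a minimal sketch, assuming a Service Catalog product behind the chosen template (the project name and IDs below are hypothetical placeholders), the boto3 `create_project` request could be built like this:

```python
# Sketch: provisioning a standardized project from a SageMaker Projects
# template via the CreateProject API. All names/IDs are placeholders.
# With AWS credentials configured, this payload would be sent with
# boto3.client("sagemaker").create_project(**payload).

def project_payload(name, product_id, artifact_id):
    """Request body for CreateProject: the template is exposed as an
    AWS Service Catalog product; the artifact ID selects its version."""
    return {
        "ProjectName": name,
        "ProjectDescription": "Standardized data science environment",
        "ServiceCatalogProvisioningDetails": {
            "ProductId": product_id,                # chosen project template
            "ProvisioningArtifactId": artifact_id,  # template version
        },
    }

payload = project_payload("churn-prediction", "prod-example123", "pa-example456")
```

Provisioning then sets up the repositories, boilerplate code, and CI/CD pipelines defined by the template.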

Collaborate across data science teams on experiments

ML model building is an iterative process that involves training hundreds of different models in search of the best algorithm, model architecture, and parameters to achieve the required level of prediction accuracy. You can track the inputs and outputs across these training iterations to improve repeatability of trials and collaboration between data scientists using Amazon SageMaker Experiments, a fully managed ML experiment management feature.

SageMaker Experiments tracks parameters, metrics, datasets, and other artifacts related to your model training jobs. It offers a single interface where you can visualize your in-progress training jobs, share experiments with colleagues, and deploy models directly from an experiment.
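As an illustration of what gets tracked per iteration, here is a minimal sketch of the low-level boto3 request payloads (experiment, run names, and hyperparameter values are hypothetical):

```python
# Sketch: recording one training iteration with SageMaker Experiments via
# the low-level API. With AWS credentials configured, these payloads would
# be sent with boto3.client("sagemaker").create_experiment(...) and
# .create_trial_component(...).

def experiment_payload(name, description):
    """Request body for the CreateExperiment API."""
    return {"ExperimentName": name, "Description": description}

def trial_component_payload(name, hyperparameters):
    """Request body for CreateTrialComponent, capturing the inputs of a
    single training iteration so trials can be compared and reproduced."""
    return {
        "TrialComponentName": name,
        "Parameters": {
            k: {"NumberValue": float(v)} for k, v in hyperparameters.items()
        },
    }

run = trial_component_payload("xgboost-run-001", {"max_depth": 6, "eta": 0.2})
```

In practice, the higher-level SageMaker Python SDK automates this bookkeeping when training jobs run inside an experiment.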

Automate ML training workflows

Automating training workflows helps you create a repeatable process to orchestrate model development steps for rapid experimentation and model re-training. You can automate the entire model build workflow, including data preparation, feature engineering, model training, model tuning, and model validation, using Amazon SageMaker Pipelines. You can configure SageMaker Pipelines to run automatically at regular intervals or when certain events are triggered, or you can run them manually as needed.
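The workflow itself is described by a pipeline definition document. A minimal sketch is shown below; step names, parameters, and the S3 path are hypothetical placeholders, and in practice the definition is usually generated with the sagemaker.workflow Python SDK rather than written by hand:

```python
import json

# Sketch of a SageMaker Pipelines definition (the JSON document that the
# service executes), chaining data preparation, training, and validation.
definition = {
    "Version": "2020-12-01",
    "Parameters": [
        {"Name": "InputDataS3Uri", "Type": "String",
         "DefaultValue": "s3://example-bucket/raw/"},
    ],
    "Steps": [
        {"Name": "PrepareData", "Type": "Processing", "Arguments": {}},
        {"Name": "TrainModel", "Type": "Training",
         "DependsOn": ["PrepareData"], "Arguments": {}},
        {"Name": "EvaluateModel", "Type": "Processing",
         "DependsOn": ["TrainModel"], "Arguments": {}},
    ],
}

# With credentials configured, this would be registered with:
# boto3.client("sagemaker").create_pipeline(
#     PipelineName="model-build", RoleArn="<execution-role-arn>",
#     PipelineDefinition=json.dumps(definition))
pipeline_definition_json = json.dumps(definition)
```

Scheduled or event-triggered executions are then typically wired up through Amazon EventBridge rules targeting the pipeline.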

Easily deploy and manage models in production

Quickly reproduce your models for troubleshooting

Often, you need to reproduce models in production to troubleshoot model behavior and determine the root cause. To help with this, Amazon SageMaker logs every step of your workflow, creating an audit trail of model artifacts, such as training data, configuration settings, model parameters, and learning gradients. Using lineage tracking, you can recreate models to debug potential issues.
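A lineage lookup can be expressed as a query that walks upstream from a model artifact to the data and configuration that produced it. A sketch of the QueryLineage request, with a hypothetical artifact ARN:

```python
# Sketch: asking SageMaker lineage tracking which artifacts (training
# data, image, configuration) fed a given model. The ARN is a placeholder.
# With AWS credentials configured, this payload would be sent with
# boto3.client("sagemaker").query_lineage(**query).

def lineage_query(model_artifact_arn, max_depth=10):
    """Request body for the QueryLineage API: follow ascendant (upstream)
    edges from the model artifact back toward its inputs."""
    return {
        "StartArns": [model_artifact_arn],
        "Direction": "Ascendants",  # trace inputs, not downstream deployments
        "IncludeEdges": True,
        "MaxDepth": max_depth,
    }

query = lineage_query(
    "arn:aws:sagemaker:us-east-1:111122223333:artifact/example-model")
```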

Centrally track and manage model versions

Building an ML application involves developing models, data pipelines, training pipelines, and validation tests. Using Amazon SageMaker Model Registry, you can track model versions, their metadata such as use case grouping, and model performance metric baselines in a central repository, making it easy to choose the right model for deployment based on your business requirements. In addition, SageMaker Model Registry automatically logs approval workflows for audit and compliance.
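Registering a version amounts to creating a model package in a group. A minimal sketch of the CreateModelPackage request, where the group name, container image, model location, and metric value are hypothetical placeholders:

```python
# Sketch: registering a model version with SageMaker Model Registry.
# With AWS credentials configured, this payload would be sent with
# boto3.client("sagemaker").create_model_package(**pkg).

def model_package_payload(group, image_uri, model_data_url, accuracy):
    """Request body for CreateModelPackage: a version inside a model
    package group, pending manual approval before deployment."""
    return {
        "ModelPackageGroupName": group,  # use-case grouping
        "ModelApprovalStatus": "PendingManualApproval",
        "InferenceSpecification": {
            "Containers": [
                {"Image": image_uri, "ModelDataUrl": model_data_url},
            ],
            "SupportedContentTypes": ["text/csv"],
            "SupportedResponseMIMETypes": ["text/csv"],
        },
        # Baseline metric recorded with the version for later comparison.
        "CustomerMetadataProperties": {"validation_accuracy": str(accuracy)},
    }

pkg = model_package_payload(
    "churn-models", "<image-uri>", "s3://example-bucket/model.tar.gz", 0.91)
```

Approving the package (for example by setting its status to `Approved`) is what downstream CI/CD pipelines typically key off to deploy.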

Define ML infrastructure through code

Orchestrating infrastructure through declarative configuration files, commonly referred to as “infrastructure as code,” is a popular approach to provisioning ML infrastructure and implementing solution architectures exactly as specified by CI/CD pipelines or deployment tools. Using Amazon SageMaker Projects, you can write infrastructure as code using pre-built template files.

Automate integration and deployment (CI/CD) workflows

ML development workflows should integrate with continuous integration and continuous deployment (CI/CD) workflows to rapidly deliver new models to production applications. Amazon SageMaker Projects brings CI/CD practices to ML, such as maintaining parity between development and production environments, source and version control, A/B testing, and end-to-end automation. As a result, you can put a model into production as soon as it is approved, increasing agility.

In addition, Amazon SageMaker offers built-in safeguards to help you maintain endpoint availability and minimize deployment risk. SageMaker sets up and orchestrates deployment best practices such as blue/green deployments to maximize availability, and combines them with endpoint update safeguards, such as automatic rollback, to help you identify issues early and take corrective action before they significantly impact production.
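A blue/green endpoint update with automatic rollback can be expressed as a deployment configuration passed to the endpoint update call. A sketch, where the CloudWatch alarm name and traffic-shifting values are hypothetical:

```python
# Sketch: blue/green (canary) endpoint update config for
# boto3.client("sagemaker").update_endpoint(..., DeploymentConfig=cfg).
# Shift 10% of traffic to the new (green) fleet, wait, then shift the
# rest; roll back automatically if the named CloudWatch alarm fires.

def blue_green_deployment_config(alarm_name):
    """DeploymentConfig body for UpdateEndpoint with canary traffic
    routing and alarm-driven auto rollback."""
    return {
        "BlueGreenUpdatePolicy": {
            "TrafficRoutingConfiguration": {
                "Type": "CANARY",
                "CanarySize": {"Type": "CAPACITY_PERCENT", "Value": 10},
                "WaitIntervalInSeconds": 300,
            },
            # Keep the old (blue) fleet around briefly to allow rollback.
            "TerminationWaitInSeconds": 600,
        },
        "AutoRollbackConfiguration": {
            "Alarms": [{"AlarmName": alarm_name}],
        },
    }

cfg = blue_green_deployment_config("endpoint-5xx-error-alarm")
```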

Continuously retrain models to maintain prediction quality

Once a model is in production, you need to monitor its performance and configure alerts so an on-call data scientist can troubleshoot issues and trigger retraining. Amazon SageMaker Model Monitor helps you maintain quality by detecting model drift and concept drift in real time and sending you alerts so you can take immediate action. SageMaker Model Monitor continuously tracks model performance characteristics such as accuracy, the share of correct predictions out of all predictions, so you can address anomalies. SageMaker Model Monitor integrates with SageMaker Clarify to improve visibility into potential bias.
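Monitoring runs on a schedule attached to a previously defined monitoring job. A sketch of the CreateMonitoringSchedule request, with hypothetical schedule and job-definition names, set to check model quality hourly:

```python
# Sketch: scheduling an hourly model-quality monitoring run.
# With AWS credentials configured, this payload would be sent with
# boto3.client("sagemaker").create_monitoring_schedule(**schedule).

def monitoring_schedule_payload(schedule_name, job_definition_name):
    """Request body for CreateMonitoringSchedule, referencing an existing
    model-quality job definition (which captures the baseline, inputs,
    and ground-truth source to compare predictions against)."""
    return {
        "MonitoringScheduleName": schedule_name,
        "MonitoringScheduleConfig": {
            "ScheduleConfig": {
                "ScheduleExpression": "cron(0 * ? * * *)",  # top of every hour
            },
            "MonitoringJobDefinitionName": job_definition_name,
            "MonitoringType": "ModelQuality",
        },
    }

schedule = monitoring_schedule_payload("churn-quality-hourly", "churn-quality-jobdef")
```

Violations detected by a run can then feed CloudWatch alarms, which close the loop back to retraining pipelines.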

Optimize model deployment for performance and cost

Amazon SageMaker makes it easy to deploy ML models for inference at high performance and low cost for any use case. It provides a broad selection of ML infrastructure and model deployment options to meet all your ML inference needs.

Customer success


NatWest Group, a major financial services institution, standardized its ML model development and deployment process across the organization, reducing the turnaround cycle to create new ML environments from 40 days to 2 days and accelerating time to value for ML use cases from 40 to 16 weeks.


"Rather than creating many manual processes, we can automate most of the machine learning development process simply within Amazon SageMaker Studio." 

Cherry Cabading, Global Senior Enterprise Architect – AstraZeneca


Employing AWS services, including Amazon SageMaker, Janssen implemented an automated MLOps process that improved the accuracy of model predictions by 21 percent and increased the speed of feature engineering by approximately 700 percent, helping Janssen to reduce costs while increasing efficiency.


“Amazon SageMaker improves the efficiency of our MLOps teams with the tools required to test and deploy machine learning models at scale.”

Samir Joshi, ML Engineer – Qualtrics


What's new

Stay up to date with the latest SageMaker MLOps announcements


MLOps foundation roadmap for enterprises


SageMaker Friday episode: Automate ML workflows


Detect NLP data drift using SageMaker Model Monitor


Build MLOps workflows with SageMaker Projects and GitLab pipelines

Get started with a demo

Watch this demo to learn how to automate MLOps with SageMaker Projects.

Watch video 
Try a hands-on tutorial

Follow along this step-by-step tutorial to automate an ML workflow.

Learn more 
Start building in the console

Get started building with SageMaker in the AWS Management Console.

Sign in 
