Amazon SageMaker Experiments

Efficiently manage machine learning experiments

Free tier

100,000 metric records ingested per month, 1 million metric records retrieved (via APIs) per month, and 100,000 metric records stored per month. The free tier is available for the first 6 months.

Analyze and compare ML training iterations to choose the best performing model

Keep track of parameters, metrics, and artifacts to troubleshoot and reproduce models
Give your team a central environment for working on ML experiments to improve productivity

SageMaker Experiments is a managed service for tracking and analyzing ML experiments at scale.

How it works

Log experiments performed in any IDE

ML experiments are performed in diverse environments: local notebooks and IDEs, training code running in the cloud, or managed cloud IDEs such as SageMaker Studio. With SageMaker Experiments, you can start tracking your experiments centrally from any of these environments using only a few lines of data-scientist-friendly Python code.
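Those "few lines of Python" might look like the following sketch, which uses the `Run` API from the SageMaker Python SDK (available in version 2.123 and later). The experiment name, run name, and parameter/metric names here are illustrative, not prescribed by the service.

```python
def log_experiment_run(experiment_name: str, run_name: str,
                       params: dict, metrics: dict) -> None:
    """Record one training iteration's parameters and metrics centrally.

    The import is local so this sketch can be read without the
    `sagemaker` SDK installed; running it for real requires the SDK
    (v2.123+) and configured AWS credentials.
    """
    from sagemaker.experiments import Run

    # A Run is the unit of tracking: everything logged inside the
    # context manager is associated with this run of the experiment.
    with Run(experiment_name=experiment_name, run_name=run_name) as run:
        for name, value in params.items():
            run.log_parameter(name, value)        # e.g. learning_rate
        for name, value in metrics.items():
            run.log_metric(name=name, value=value)  # e.g. accuracy
```

A call such as `log_experiment_run("churn-model", "run-1", {"learning_rate": 0.01}, {"accuracy": 0.92})` (with your own names and values) would then record that iteration centrally, regardless of whether the code ran in a local notebook or in the cloud.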

Centrally manage ML experiments metadata

The process of developing an ML model involves experimenting with various combinations of data, algorithms, and parameters, while evaluating the impact of incremental changes on model performance. With SageMaker Experiments, you can track your ML iterations and automatically save all the related metadata, such as metrics, parameters, and artifacts, in a central place.

Evaluate experiments

Finding the best model from multiple iterations requires analysis and comparison of model performance. SageMaker Experiments provides visualizations such as scatter plots, bar charts, and histograms. In addition, the SageMaker Experiments SDK lets you load the logged data into your notebook for offline analysis.
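For the offline-analysis path, one way to pull logged data into a notebook is the `ExperimentAnalytics` class from the SageMaker Python SDK. A minimal sketch follows; the import is kept inside the function so the code can be read without the SDK installed, and the experiment name you pass in is your own.

```python
def experiment_to_dataframe(experiment_name: str):
    """Load the runs of an experiment into a pandas DataFrame,
    one row per run, with columns for logged parameters and metrics.

    Requires the `sagemaker` SDK and AWS credentials at call time.
    """
    from sagemaker.analytics import ExperimentAnalytics

    analytics = ExperimentAnalytics(experiment_name=experiment_name)
    return analytics.dataframe()
```

Because the result is an ordinary pandas DataFrame, you can sort, filter, and plot runs with the usual pandas tooling to compare iterations side by side.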

Build models collaboratively

Team-centric collaboration within the organization is key to a successful data science project. SageMaker Experiments is integrated with SageMaker Studio, allowing team members to access the same information and confirm that experiment results are consistent, making collaboration easier. Use SageMaker Studio's search capability to quickly find relevant past experiments.

Reproduce and audit ML experiments

When the performance of a model changes, you need to understand the root cause of the change. Sometimes you want to document the model development process so that it can be reproduced and easily tested. Using SageMaker Experiments, you can access and reproduce your ML workflow from the experiments you’ve tracked.

How to Get Started

guide

Find out how SageMaker Experiments works

Learn more about experiments management, logging metadata, and analysis.

blog

Organize, track, and compare your ML training iterations