Analyze and compare ML training iterations to choose the best performing model
SageMaker Experiments is a managed service for tracking and analyzing ML experiments at scale.
Log experiments performed in any IDE
ML experiments are performed in diverse environments: local notebooks and IDEs, training code running in the cloud, or managed cloud IDEs such as SageMaker Studio. With SageMaker Experiments, you can start tracking your experiments centrally from any environment or IDE using only a few lines of data-scientist-friendly Python code.
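To illustrate the pattern those "few lines" follow, here is a minimal, self-contained stand-in for a run-tracking API. This is a conceptual sketch, not the SageMaker SDK itself: the store, function, and class names are hypothetical, and the real SDK persists run metadata to the cloud rather than to an in-memory dict.

```python
# Conceptual stand-in for an experiment-tracking SDK: a run context that
# collects parameters and metrics into a central store. Hypothetical names;
# the real SageMaker Experiments SDK follows a similar context-manager style
# but records metadata centrally in AWS.
from contextlib import contextmanager

STORE = {}  # central store: run_name -> logged metadata


@contextmanager
def run(experiment_name, run_name):
    record = {"experiment": experiment_name, "parameters": {}, "metrics": {}}
    STORE[run_name] = record

    class Run:
        def log_parameter(self, name, value):
            record["parameters"][name] = value

        def log_metric(self, name, value):
            record["metrics"][name] = value

    yield Run()


# "A few lines" of tracking code, callable from any environment:
with run("churn-model", "trial-1") as r:
    r.log_parameter("learning_rate", 0.01)
    r.log_metric("accuracy", 0.92)
```

Because the tracking calls are plain Python, the same snippet works unchanged whether the training code runs in a local notebook or in the cloud.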
Centrally manage ML experiment metadata
The process of developing an ML model involves experimenting with various combinations of data, algorithms, and parameters while evaluating the impact of incremental changes on model performance. With SageMaker Experiments, you can track your ML iterations and automatically save all the related metadata, such as metrics, parameters, and artifacts, in a central place.
Finding the best model from multiple iterations requires analyzing and comparing model performance. SageMaker Experiments provides visualizations such as scatter plots, bar charts, and histograms. In addition, the SageMaker Experiments SDK lets you load the logged data into your notebook for offline analysis.
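Once logged runs have been loaded for offline analysis, picking the best-performing iteration reduces to a comparison over the tracked metrics. A minimal sketch, using hypothetical run data in plain Python rather than the SDK's own export format:

```python
# Hypothetical logged runs, as they might look after being loaded from a
# tracking service for offline analysis in a notebook.
runs = [
    {"run_name": "trial-1", "learning_rate": 0.10, "accuracy": 0.88},
    {"run_name": "trial-2", "learning_rate": 0.01, "accuracy": 0.92},
    {"run_name": "trial-3", "learning_rate": 0.05, "accuracy": 0.90},
]

# Compare iterations and select the best-performing model by its metric.
best = max(runs, key=lambda r: r["accuracy"])
print(best["run_name"])  # prints "trial-2"
```

The same tabular data feeds the scatter plots, bar charts, and histograms mentioned above; in a notebook you would typically load it into a DataFrame for richer slicing and plotting.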
Build models collaboratively
Team-centric collaboration within the organization is key to a successful data science project. SageMaker Experiments is integrated with SageMaker Studio, allowing team members to access the same information and confirm that experiment results are consistent, which makes collaboration easier. Use SageMaker Studio's search capability to quickly find relevant past experiments.
Reproduce and audit ML experiments
When the performance of a model changes, you need to understand the root cause of the change. You may also want to document the model development process so that it can be reproduced and easily tested. Using SageMaker Experiments, you can access and reproduce your ML workflow from the experiments you've tracked.