Organize, track, and compare your machine learning training experiments with Amazon SageMaker Experiments

Posted on: Dec 3, 2019

Amazon SageMaker Experiments is a new capability that lets you organize, track, and compare your machine learning training experiments on Amazon SageMaker.

Machine learning is an iterative process. You need to experiment with many combinations of data, algorithms, and parameters, all the while observing the impact of each incremental change on model accuracy. Over time, this iterative experimentation can result in hundreds or even thousands of model training runs and model versions, making it hard to track the best-performing models and their input configurations. It's also difficult to compare your active experiments with past attempts to identify opportunities for further incremental improvement.

Amazon SageMaker Experiments makes it easy to manage your machine learning experiments. It automatically tracks the inputs, parameters, configurations, and results of your iterations as trials, which you can then group and organize into experiments. SageMaker Experiments is integrated with Amazon SageMaker Studio, providing a visual interface for browsing your active and past experiments, comparing them on key performance metrics, and identifying the best-performing ones. SageMaker Experiments also comes with a Python SDK that makes these search and analytics capabilities easily accessible in SageMaker notebooks. Furthermore, because SageMaker Experiments tracks all the steps and artifacts that went into creating and certifying a model, you can quickly trace a model's lineage when troubleshooting issues in production or auditing your models for compliance.
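To make the workflow concrete, the sketch below shows one way an experiment, a trial, and tracked parameters and metrics can be created with the SageMaker Experiments Python SDK (the `smexperiments` package), and how trial results can be loaded into a pandas DataFrame with the SageMaker Python SDK's `ExperimentAnalytics` class. The experiment name, trial name, and logged values are hypothetical placeholders.

```python
# A minimal sketch using the SageMaker Experiments Python SDK
# (pip install sagemaker-experiments). The experiment name, trial name,
# and logged parameters/metrics below are illustrative placeholders.
from smexperiments.experiment import Experiment
from smexperiments.trial import Trial
from smexperiments.tracker import Tracker

# An experiment groups related trials together.
experiment = Experiment.create(
    experiment_name="mnist-classification",
    description="Compare hyperparameter settings for an MNIST classifier",
)

# Each training iteration is recorded as a trial within the experiment.
trial = Trial.create(
    trial_name="mnist-cnn-lr-0-01",
    experiment_name=experiment.experiment_name,
)

# A tracker logs parameters and metrics as a trial component,
# which is then attached to the trial.
with Tracker.create(display_name="training") as tracker:
    tracker.log_parameters({"learning_rate": 0.01, "batch_size": 256})
    tracker.log_metric(metric_name="validation:accuracy", value=0.97)
trial.add_trial_component(tracker.trial_component)

# Load the experiment's trial components into a pandas DataFrame
# to compare runs on their parameters and metrics.
from sagemaker.analytics import ExperimentAnalytics

trial_df = ExperimentAnalytics(experiment_name="mnist-classification").dataframe()
print(trial_df.head())
```

A training job launched through the SageMaker Python SDK can likewise be associated with a trial by passing an `experiment_config` dictionary (with `ExperimentName`, `TrialName`, and `TrialComponentDisplayName` keys) to the estimator's `fit()` call.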

Amazon SageMaker Experiments is available at no additional charge in all AWS commercial Regions where Amazon SageMaker is available. To learn more, read the blog post and refer to the documentation to get started.