New Model Tracking Capabilities for Amazon SageMaker Are Now Generally Available

Posted on: Aug 2, 2019

New model tracking capabilities for Amazon SageMaker are now generally available in all AWS Regions where Amazon SageMaker is available. With these new capabilities, you can quickly find and compare your machine learning (ML) model training experiments. Using either the AWS Management Console or the AWS SDK, you can search through thousands of model training experiments and compare metrics to evaluate performance across iterations, helping you identify the best-performing models faster.
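For example, with the AWS SDK for Python (Boto3), a search over your training jobs might look like the following minimal sketch. It assumes training jobs already exist in your account; the result limit and sort order are just illustrative choices.

```python
import boto3

# A minimal sketch of searching training jobs with the SageMaker Search API via Boto3.
smclient = boto3.client("sagemaker")

# Return completed training jobs, most recent first.
results = smclient.search(
    Resource="TrainingJob",
    SearchExpression={
        "Filters": [
            {"Name": "TrainingJobStatus", "Operator": "Equals", "Value": "Completed"},
        ]
    },
    SortBy="CreationTime",
    SortOrder="Descending",
    MaxResults=20,
)

for item in results["Results"]:
    job = item["TrainingJob"]
    print(job["TrainingJobName"], job["CreationTime"])
```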

Developing ML models is an iterative process. You experiment with different combinations of data, algorithms, and parameters to fine-tune the model. This continuous experimentation often produces a large number of model versions, making it difficult to keep track of the experiments and slowing down the discovery of the most effective model. Tracking the variables of a specific model version also becomes tedious over time, hindering auditing and compliance verification. With the new model tracking capabilities in Amazon SageMaker, you can quickly identify the most relevant models by searching on different attributes, including the learning algorithm, the hyperparameter settings, and any tags added during training runs. You can also compare and rank training runs by their performance metrics, such as training loss and validation accuracy, to quickly identify the highest-performing models.
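As a sketch of that workflow, the example below filters completed training jobs on a hyperparameter value and a tag, then ranks the results by a final metric. The hyperparameter name `num_round`, the tag key `team`, and the metric name `validation:accuracy` are assumptions for illustration; they must match what your training jobs actually record and emit.

```python
import boto3

smclient = boto3.client("sagemaker")

# Filter completed jobs by a hyperparameter and a tag (names below are hypothetical),
# then rank them by a final metric reported during training.
response = smclient.search(
    Resource="TrainingJob",
    SearchExpression={
        "Filters": [
            {"Name": "TrainingJobStatus", "Operator": "Equals", "Value": "Completed"},
            {"Name": "HyperParameters.num_round", "Operator": "Equals", "Value": "100"},
            {"Name": "Tags.team", "Operator": "Equals", "Value": "forecasting"},
        ]
    },
    SortBy="Metrics.validation:accuracy",
    SortOrder="Descending",
    MaxResults=10,
)

for item in response["Results"]:
    job = item["TrainingJob"]
    # FinalMetricDataList holds the metrics emitted by the training job, if any.
    metrics = {m["MetricName"]: m["Value"] for m in job.get("FinalMetricDataList", [])}
    print(job["TrainingJobName"], metrics.get("validation:accuracy"))
```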

You can get started using our sample notebooks, and learn more about the feature in the blog and developer guide.