Posted On: May 25, 2021

Amazon SageMaker Pipelines, the first purpose-built continuous integration and continuous delivery (CI/CD) service for machine learning (ML), is now integrated with SageMaker Experiments, a capability that lets customers organize, track, compare, and evaluate their ML experiments. Customers can now compare metrics such as model training accuracy across multiple executions of their SageMaker Pipelines just as easily as they compare such metrics across multiple trials of an ML model training experiment. SageMaker Pipelines automatically creates an Experiment named after the pipeline and an Experiment trial for every execution of the pipeline. This auto-creation is turned on by default, and you can choose to opt out of it.
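As a minimal sketch of how this looks in the SageMaker Python SDK, the pipeline-level experiment configuration can be set (or disabled) when the pipeline is defined. The explicit configuration below reproduces the default behavior; `my_steps` is a placeholder for your actual step list, and exact parameter names may vary by SDK version.

```python
from sagemaker.workflow.execution_variables import ExecutionVariables
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.pipeline_experiment_config import PipelineExperimentConfig

# Default behavior made explicit: an Experiment named after the pipeline
# and a trial named after each execution are created automatically.
# `my_steps` is a placeholder for the pipeline's step definitions.
pipeline = Pipeline(
    name="MyPipeline",
    steps=my_steps,
    pipeline_experiment_config=PipelineExperimentConfig(
        ExecutionVariables.PIPELINE_NAME,
        ExecutionVariables.PIPELINE_EXECUTION_ID,
    ),
)

# To opt out of the auto-creation, pass None for the config instead:
pipeline_without_tracking = Pipeline(
    name="MyPipeline",
    steps=my_steps,
    pipeline_experiment_config=None,
)
```

Because the experiment and trial names are built from execution variables, each pipeline run lands in its own trial under a single experiment, which is what enables the cross-execution metric comparisons described above.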

Additionally, customers can now use the SageMaker Experiments Python SDK to log Receiver Operating Characteristic (ROC) metrics, precision-recall (PR) metrics, confusion matrices, and tabular data in their SageMaker training jobs. The corresponding plots of ROC curves, PR curves, and confusion matrices can now be viewed in the SageMaker Pipeline node inspector.
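A minimal sketch of these logging calls using the `sagemaker-experiments` package is shown below. It assumes the code runs inside a SageMaker training job (so the tracker can be loaded from the job's environment); `y_true`, `y_score`, and `y_pred` are placeholders for your ground-truth labels, predicted probabilities, and predicted classes, and keyword arguments may differ slightly by SDK version.

```python
from smexperiments.tracker import Tracker

# Inside a SageMaker training job, Tracker.load() resolves the trial
# component associated with the running job from the environment.
# y_true, y_score, and y_pred are placeholders for your own arrays.
with Tracker.load() as tracker:
    # Logs the ROC curve computed from labels and predicted probabilities.
    tracker.log_roc_curve(y_true, y_score, title="roc-curve")
    # Logs the precision-recall curve from the same inputs.
    tracker.log_precision_recall(y_true, y_score, title="precision-recall")
    # Logs a confusion matrix from labels and predicted classes.
    tracker.log_confusion_matrix(y_true, y_pred, title="confusion-matrix")
    # Logs arbitrary tabular data as column -> values.
    tracker.log_table(title="sample-table", values={"x": [1, 2, 3], "y": [4, 5, 6]})
```

Each call emits an artifact attached to the job's trial component, which is what the Pipeline node inspector renders as the corresponding plot.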

This feature is available in all AWS Regions where Amazon SageMaker is available. To learn more, visit the documentation page.