Posted On: Dec 1, 2021

Amazon SageMaker Pipelines, a fully managed service that enables you to create, automate, and manage end-to-end machine learning (ML) workflows, now supports integration with Amazon SageMaker Model Monitor and Amazon SageMaker Clarify. With these integrations, you can easily incorporate model quality and bias detection into your ML workflow. The increased automation can help reduce your operational burden in building and managing ML models.

SageMaker Model Monitor and SageMaker Clarify enable you to continuously monitor the quality and bias metrics of ML models in production so that you can set up alerts or trigger retraining when model or data quality drifts. To set up model monitoring, you must establish baseline metrics for data and model quality that SageMaker Model Monitor can then use to measure drift. With the new integration, you can automatically capture these baselines as part of the model building pipeline, eliminating the need to calculate them outside the model building workflow. You can also use the QualityCheckStep and ClarifyCheckStep in SageMaker Pipelines to stop the pipeline if any deviation from the previously established baseline metrics is detected. You can then store and view the computed quality and bias metrics, along with the baselines, in the Model Registry.
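For illustration, here is a minimal sketch of how the two check steps might be wired into a pipeline with the SageMaker Python SDK. The IAM role, S3 paths, column names, and model package group name are placeholders, and exact parameters can vary by SDK version:

```python
from sagemaker.clarify import BiasConfig, DataConfig
from sagemaker.model_monitor import DatasetFormat
from sagemaker.workflow.check_job_config import CheckJobConfig
from sagemaker.workflow.clarify_check_step import ClarifyCheckStep, DataBiasCheckConfig
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.quality_check_step import DataQualityCheckConfig, QualityCheckStep

role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder

# Shared processing-job settings reused by both check steps.
check_job_config = CheckJobConfig(
    role=role,
    instance_count=1,
    instance_type="ml.c5.xlarge",
)

# Compute data-quality statistics and constraints, or validate new data
# against the baseline already registered for this model package group.
data_quality_check_step = QualityCheckStep(
    name="DataQualityCheck",
    check_job_config=check_job_config,
    quality_check_config=DataQualityCheckConfig(
        baseline_dataset="s3://my-bucket/data/train.csv",   # placeholder
        dataset_format=DatasetFormat.csv(header=True),
        output_s3_uri="s3://my-bucket/data-quality-check",  # placeholder
    ),
    # With skip_check=False, the step fails (stopping the pipeline) if the
    # computed metrics drift from the previously registered baseline.
    skip_check=False,
    # register_new_baseline=True would instead record this run's metrics
    # as the new baseline in the Model Registry.
    register_new_baseline=False,
    model_package_group_name="my-model-group",  # placeholder
)

# Compute data-bias metrics with SageMaker Clarify, or validate them
# against the registered bias baseline.
data_bias_check_step = ClarifyCheckStep(
    name="DataBiasCheck",
    check_job_config=check_job_config,
    clarify_check_config=DataBiasCheckConfig(
        data_config=DataConfig(
            s3_data_input_path="s3://my-bucket/data/train.csv",  # placeholder
            s3_output_path="s3://my-bucket/data-bias-check",     # placeholder
            label="target",  # placeholder label column
            dataset_type="text/csv",
        ),
        data_bias_config=BiasConfig(
            label_values_or_threshold=[1],
            facet_name="age",  # placeholder facet column
        ),
    ),
    skip_check=False,
    register_new_baseline=False,
    model_package_group_name="my-model-group",  # placeholder
)

pipeline = Pipeline(
    name="ModelBuildPipeline",
    steps=[data_quality_check_step, data_bias_check_step],
)
```

In this sketch, `skip_check` is the switch between the two modes described above: a first run would typically set it to `True` (just compute and register baselines), while subsequent runs set it to `False` so drift from the registered baselines stops the pipeline.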

This integration is also available as a template in SageMaker Projects, so you can automatically schedule model monitoring and bias detection jobs using the baseline metrics recorded in the Model Registry. To get started, create a new SageMaker Project from SageMaker Studio or the command-line interface using the new model-monitoring template. To learn more, visit our documentation pages on check steps in SageMaker Pipelines, metrics and baselines in the Model Registry, SageMaker Model Monitor, and the model-monitoring CI/CD template.
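As a rough sketch, a project can also be provisioned programmatically with boto3. The Service Catalog product and provisioning-artifact IDs below are placeholders you would look up for the model-monitoring template in your account:

```python
import boto3

sm = boto3.client("sagemaker")

# Create a SageMaker Project from an organization template; the IDs come
# from the template's AWS Service Catalog entry (visible in Studio).
sm.create_project(
    ProjectName="monitoring-demo",
    ProjectDescription="Model building with monitoring and bias checks",
    ServiceCatalogProvisioningDetails={
        "ProductId": "prod-EXAMPLE123456",             # placeholder
        "ProvisioningArtifactId": "pa-EXAMPLE123456",  # placeholder
    },
)
```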