Posted On: Dec 8, 2020
Amazon SageMaker Model Monitor continuously monitors machine learning models for concept drift (i.e., changes in data distribution and characteristics over time) and alerts you if there are any deviations so you can take remedial action. Starting today, you can also use Amazon SageMaker Model Monitor to detect drift in model quality, bias, and feature importance. With these new fully managed capabilities, SageMaker Model Monitor helps you maintain high-quality machine learning models in production.
Amazon SageMaker Model Monitor already supports detecting data quality drift by tracking the difference between the data that was used to train a model and the data being presented to the model for scoring, and it alerts you to deviations so you can take timely actions such as auditing the data or retraining the model. Today, we are adding three new capabilities to SageMaker Model Monitor that enable you to detect drift in model quality, model bias, and feature importance.
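For the existing data quality checks, the typical first step with the SageMaker Python SDK is to profile your training data and produce a baseline that later monitoring runs are compared against. A minimal sketch follows; the S3 paths are placeholders.

    # Minimal sketch: baseline the training data for data quality monitoring.
    # The S3 URIs below are placeholders.
    import sagemaker
    from sagemaker.model_monitor import DefaultModelMonitor
    from sagemaker.model_monitor.dataset_format import DatasetFormat

    monitor = DefaultModelMonitor(
        role=sagemaker.get_execution_role(),
        instance_count=1,
        instance_type="ml.m5.xlarge",
    )

    # Profiles the training set and suggests the statistics and constraints
    # that future monitoring runs will be checked against.
    monitor.suggest_baseline(
        baseline_dataset="s3://my-bucket/train/train.csv",
        dataset_format=DatasetFormat.csv(header=True),
        output_s3_uri="s3://my-bucket/data-quality-baseline/",
    )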
With model quality monitoring, you can monitor model quality metrics (such as precision, accuracy, recall, and more) of your ML models in real time. SageMaker Model Monitor reports how well an ML model is predicting outcomes by comparing its predictions to ground truth data that you provide. As the model is monitored, you can view exportable reports and graphs detailing model quality in Amazon S3, Amazon SageMaker Studio, and a SageMaker notebook instance. You can also configure Amazon CloudWatch to receive notifications if drift in model quality is observed.
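If you use the SageMaker Python SDK, a model quality schedule can be set up along the following lines. This is a minimal sketch: the endpoint name, S3 URIs, attribute names, and problem type are placeholders that depend on your model.

    # Minimal sketch: schedule hourly model quality monitoring for an existing
    # endpoint. Endpoint name, S3 URIs, and attribute names are placeholders.
    import sagemaker
    from sagemaker.model_monitor import (
        ModelQualityMonitor,
        EndpointInput,
        CronExpressionGenerator,
    )

    session = sagemaker.Session()

    model_quality_monitor = ModelQualityMonitor(
        role=sagemaker.get_execution_role(),
        instance_count=1,
        instance_type="ml.m5.xlarge",
        sagemaker_session=session,
    )

    model_quality_monitor.create_monitoring_schedule(
        monitor_schedule_name="my-model-quality-schedule",    # placeholder
        endpoint_input=EndpointInput(
            endpoint_name="my-endpoint",                      # placeholder
            destination="/opt/ml/processing/input_data",
            inference_attribute="prediction",                 # attribute holding the model's prediction
        ),
        ground_truth_input="s3://my-bucket/ground-truth/",    # labels you upload over time
        problem_type="BinaryClassification",
        output_s3_uri="s3://my-bucket/model-quality-reports/",
        schedule_cron_expression=CronExpressionGenerator.hourly(),
    )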
Bias monitoring helps you detect bias in your ML models on a regular basis. SageMaker Model Monitor periodically checks whether bias metrics have drifted beyond preset thresholds. With the bias monitoring capabilities in Model Monitor, you can view the metrics and visualize the results in SageMaker Studio. You can also configure automated alerts so that you know immediately when your model exceeds a bias metric threshold you have set.
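A bias drift schedule can be configured similarly with the SDK's ModelBiasMonitor together with a SageMaker Clarify bias configuration. In this sketch, the facet (sensitive attribute), column headers, endpoint name, and S3 URIs are hypothetical values.

    # Minimal sketch: schedule a recurring bias drift check.
    # Facet name, headers, endpoint name, and S3 URIs are placeholders.
    from sagemaker import Session, get_execution_role, clarify
    from sagemaker.model_monitor import (
        ModelBiasMonitor,
        BiasAnalysisConfig,
        EndpointInput,
        CronExpressionGenerator,
    )

    model_bias_monitor = ModelBiasMonitor(
        role=get_execution_role(),
        sagemaker_session=Session(),
    )

    # Which outcome and which group to check for bias (hypothetical columns).
    bias_config = clarify.BiasConfig(
        label_values_or_threshold=[1],
        facet_name="age",
        facet_values_or_threshold=[40],
    )

    model_bias_monitor.create_monitoring_schedule(
        analysis_config=BiasAnalysisConfig(
            bias_config, headers=["age", "income", "label"], label="label"
        ),
        endpoint_input=EndpointInput(
            endpoint_name="my-endpoint",                      # placeholder
            destination="/opt/ml/processing/input_data",
            start_time_offset="-PT1H",
            end_time_offset="-PT0H",
            probability_threshold_attribute=0.5,
        ),
        ground_truth_input="s3://my-bucket/ground-truth/",
        output_s3_uri="s3://my-bucket/bias-reports/",
        schedule_cron_expression=CronExpressionGenerator.daily(),
    )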
After models are deployed in production, the importance and impact of individual features can change over time. Model explainability monitoring helps you understand and interpret whether the predictions made by your ML models are based on the same features, and in the same proportion, as when the model was trained. When you enable explainability tracking, SageMaker Model Monitor automatically detects drift in the relative importance of features, lets you visualize these changes in SageMaker Studio, and, like the other SageMaker Model Monitor capabilities, can be configured with Amazon CloudWatch to proactively alert you when drift is detected.
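A sketch of an explainability (feature attribution) schedule follows. The SHAP baseline, column headers, model and endpoint names, and S3 URIs are hypothetical values you would replace with your own.

    # Minimal sketch: schedule feature-attribution drift monitoring.
    # Model/endpoint names, headers, and the SHAP baseline are placeholders.
    from sagemaker import Session, get_execution_role, clarify
    from sagemaker.model_monitor import (
        ModelExplainabilityMonitor,
        ExplainabilityAnalysisConfig,
        EndpointInput,
        CronExpressionGenerator,
    )

    model_explainability_monitor = ModelExplainabilityMonitor(
        role=get_execution_role(),
        sagemaker_session=Session(),
    )

    # SHAP settings: the baseline is a small sample of training records (hypothetical values).
    shap_config = clarify.SHAPConfig(
        baseline=[[35, 50000]],
        num_samples=100,
        agg_method="mean_abs",
    )

    model_config = clarify.ModelConfig(
        model_name="my-model",                                # placeholder
        instance_type="ml.m5.xlarge",
        instance_count=1,
        content_type="text/csv",
        accept_type="text/csv",
    )

    model_explainability_monitor.create_monitoring_schedule(
        analysis_config=ExplainabilityAnalysisConfig(
            explainability_config=shap_config,
            model_config=model_config,
            headers=["age", "income"],
        ),
        endpoint_input=EndpointInput(
            endpoint_name="my-endpoint",                      # placeholder
            destination="/opt/ml/processing/input_data",
        ),
        output_s3_uri="s3://my-bucket/explainability-reports/",
        schedule_cron_expression=CronExpressionGenerator.daily(),
    )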
Amazon SageMaker Model Monitor can be enabled for new or existing real-time inference endpoints. Once enabled, SageMaker Model Monitor saves prediction requests and responses in Amazon S3, compares the model predictions with the actuals or ground truth you provide, runs built-in or custom rules to detect drift against a baseline, and alerts you when there are deviations. As a result, you can monitor hundreds of models for drift in data quality, model quality, model bias, and feature importance in a standardized way across your organization without having to build any additional tooling. Monitoring jobs can be scheduled to run at a regular cadence (for example, hourly or daily) and push reports as well as metrics to Amazon CloudWatch and Amazon S3. The monitoring results are also available in Amazon SageMaker Studio for visual inspection, and you can further analyze them using an Amazon SageMaker notebook instance.
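For example, with the SageMaker Python SDK you might turn on data capture for an existing endpoint and attach an hourly data quality schedule roughly as follows. In this sketch, monitor is the DefaultModelMonitor from the baseline example above, and the endpoint name and S3 URIs are placeholders.

    # Minimal sketch: capture requests/responses from an existing endpoint and
    # run the built-in drift rules every hour. `monitor` is the DefaultModelMonitor
    # that already suggested a baseline in the earlier sketch.
    from sagemaker.predictor import Predictor
    from sagemaker.model_monitor import DataCaptureConfig, CronExpressionGenerator

    # 1. Save prediction requests and responses from the endpoint to Amazon S3.
    predictor = Predictor(endpoint_name="my-endpoint")        # placeholder name
    predictor.update_data_capture_config(
        DataCaptureConfig(
            enable_capture=True,
            sampling_percentage=100,
            destination_s3_uri="s3://my-bucket/data-capture/",
        )
    )

    # 2. Check for drift against the suggested baseline every hour; reports land
    #    in S3 and metrics are emitted to Amazon CloudWatch.
    monitor.create_monitoring_schedule(
        monitor_schedule_name="my-data-quality-schedule",
        endpoint_input=predictor.endpoint_name,
        output_s3_uri="s3://my-bucket/data-quality-reports/",
        statistics=monitor.baseline_statistics(),
        constraints=monitor.suggested_constraints(),
        schedule_cron_expression=CronExpressionGenerator.hourly(),
    )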
Amazon SageMaker Model Monitor is available in all commercial regions where Amazon SageMaker is available. You also get up to 30 hours of monitoring aggregated across all endpoints each month, at no charge, when you use built-in monitoring rules with the default ml.m5.xlarge instance. Read the documentation for more information and for sample notebooks.