Posted On: Oct 28, 2022
You can now monitor the quality of machine learning predictions from Batch Transform jobs in SageMaker using Amazon SageMaker Model Monitor. Model Monitor provides a fully managed experience to monitor models in production, detect deviations, and take timely actions such as auditing or retraining.
Amazon SageMaker Batch Transform enables you to run predictions on datasets stored in Amazon S3. It is ideal for scenarios where you are working with large datasets and don't need a persistent inference endpoint. After models are deployed in production, real-world data may deviate over time from the data that was used to train the model, which may eventually lead to lower model accuracy. For example, changes in macroeconomic conditions such as interest rates could impact the quality of a model used to predict housing prices. Model Monitor can detect drift in data quality, model quality, bias, and feature attribution, and alert you to take remedial actions when such changes occur.
With Amazon SageMaker Model Monitor, you can collect Batch Transform data in production, analyze it, and compare it against your training or validation data to detect deviations. You can use SageMaker Model Monitor's built-in rules to detect drift right away for structured datasets, add data transformations before you run the built-in rules, or write your own custom rules. You can use Model Monitor with a broad range of instance types, schedule it to run at a regular cadence (e.g., hourly or daily) or on demand, push summary metrics to Amazon CloudWatch, and set alerts and triggers for corrective actions.
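As a minimal sketch of how this might look with the SageMaker Python SDK (v2.x), the example below baselines a training dataset and then schedules hourly data-quality monitoring of data captured from a Batch Transform job. The S3 URIs, schedule name, instance type, and CSV settings are placeholder assumptions; adapt them to your own environment and see the sample notebooks in the developer guide for the full workflow.

```python
# Sketch: data-quality monitoring for Batch Transform with the SageMaker Python SDK.
# All S3 paths and names below are placeholders, not values from this announcement.
from sagemaker import get_execution_role
from sagemaker.model_monitor import (
    BatchTransformInput,
    CronExpressionGenerator,
    DefaultModelMonitor,
)
from sagemaker.model_monitor.dataset_format import DatasetFormat, MonitoringDatasetFormat

role = get_execution_role()

# Monitor backed by the built-in data-quality analysis container.
monitor = DefaultModelMonitor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    volume_size_in_gb=20,
    max_runtime_in_seconds=3600,
)

# Baseline the training data to produce statistics and suggested constraints.
monitor.suggest_baseline(
    baseline_dataset="s3://my-bucket/training/train.csv",  # placeholder
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-bucket/monitoring/baseline",    # placeholder
)

# Hourly schedule that analyzes data captured from the Batch Transform job
# and compares it against the baseline statistics and constraints.
monitor.create_monitoring_schedule(
    monitor_schedule_name="batch-transform-data-quality",  # placeholder
    batch_transform_input=BatchTransformInput(
        data_captured_destination_s3_uri="s3://my-bucket/transform/data-capture",  # placeholder
        destination="/opt/ml/processing/input",
        dataset_format=MonitoringDatasetFormat.csv(header=False),
    ),
    output_s3_uri="s3://my-bucket/monitoring/reports",      # placeholder
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```

Violations found by each scheduled run are written to the report S3 location, and the summary metrics emitted to Amazon CloudWatch can drive alarms or downstream remediation such as retraining.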
Amazon SageMaker Model Monitor for Batch Transform jobs is available in all commercial AWS Regions where Amazon SageMaker is available. Visit the Amazon SageMaker developer guide for more information and sample notebooks.