Monitoring data quality in third-party models with Amazon SageMaker Model Monitor
Building, training, and deploying machine learning models from scratch can be a time-consuming and costly endeavor for some customers. Moreover, once deployed to production, machine learning models need to be continuously monitored for deviations in model and data quality.
To help you expedite model deployment and implement a model monitoring solution, you can integrate pre-trained models from AWS Marketplace that support CSV and flat JSON input with Amazon SageMaker Model Monitor.
AWS Marketplace offers hundreds of pre-trained models with a range of capabilities, such as object detection, buyer propensity, natural language processing, data extraction, and feature engineering.
Amazon SageMaker Model Monitor offers built-in analysis based on statistical rules to detect drifts in data and model quality. Data quality deviations monitored by SageMaker Model Monitor include anomalous data types, incomplete data such as records with a high percentage of null values, and records with too many or too few columns.
In this blog post, I will demonstrate how to subscribe to a pre-trained third-party model from AWS Marketplace. I’ll also show how to configure a Data Quality monitoring schedule using Amazon SageMaker Model Monitor.
Solution overview
For this demo, I deployed a buyer propensity model from AWS Marketplace. This model predicts the probability that a consumer is planning to purchase a home based on their gender, age range, household income range, and zip code. Here’s an overview of the steps I’ll walk through:
- Subscribe to the Propensity-Planning to Buy a House model in AWS Marketplace.
- Set up the model endpoint and endpoint configuration in the SageMaker console.
- Create a Data Quality monitoring schedule.
- Invoke the model’s inference endpoint with sample data via a Jupyter notebook.
- View the data quality monitoring job details and visualize feature distribution statistics.
Prerequisites
- An active AWS account
- Access to run this demo in an Amazon SageMaker Studio notebook
- Basic understanding of Machine Learning models
- Familiarity with Jupyter notebooks and Python
- Familiarity with the Amazon SageMaker console and SageMaker Studio
Solution walkthrough: Monitoring data quality in third-party machine learning models with Amazon SageMaker Model Monitor
Step 1: Subscribe to the Propensity-Planning to Buy a House machine learning model in AWS Marketplace
- Open the AWS Marketplace listing Propensity-Planning to Buy a House (V 1.0) and choose Continue to Subscribe.
- On the Configure and launch page, select the SageMaker console launch method, and then choose View in Amazon SageMaker.
Step 2: Create a real-time inference endpoint
- For Step 1 on the Create endpoint page, enter a unique model name. I entered third-party-model. Be sure the selected IAM role has sufficient permissions, or let SageMaker create the role. Choose Next.
- For Step 2 on the Create endpoint page, update the following form fields:
- Enter a unique endpoint name: third-party-model-endpoint
- Next, create a New endpoint configuration.
- Update the Endpoint configuration name to: third-party-model-endpoint-config
- To capture prediction request and response information, select Enable data capture.
- Enter the S3 location to store data collected: s3://{bucket}/third-party-model/datacapture
- For the Sampling percentage (%), enter 100
- For the Capture content type, in the CSV/Text text area, enter text/csv
- Choose Create endpoint configuration. At the bottom of the page, choose Submit.
It may take a few minutes for the model’s endpoint status to change from Creating to InService.
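If you prefer to script the deployment rather than use the console, here's a minimal sketch using the SageMaker Python SDK that mirrors the data capture settings above. The model package ARN, IAM role, and instance type are placeholders to replace with your own values.

```python
import sagemaker
from sagemaker import ModelPackage
from sagemaker.model_monitor import DataCaptureConfig

session = sagemaker.Session()
bucket = session.default_bucket()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder IAM role ARN

# Placeholder ARN for the model package you subscribed to in AWS Marketplace
model_package_arn = "arn:aws:sagemaker:us-east-1:111122223333:model-package/propensity-planning-to-buy-a-house"

model = ModelPackage(
    role=role,
    model_package_arn=model_package_arn,
    sagemaker_session=session,
)

# Capture prediction requests and responses so Model Monitor can analyze them
data_capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,
    destination_s3_uri=f"s3://{bucket}/third-party-model/datacapture",
    csv_content_types=["text/csv"],
)

model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",  # choose an instance type the model package supports
    endpoint_name="third-party-model-endpoint",
    data_capture_config=data_capture_config,
)
```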
Amazon SageMaker Model Monitor currently provides the following types of monitoring:
- Monitor Data Quality: Detect drifts in data quality such as deviations from baseline data types.
- Monitor Model Quality: Monitor drift in model quality metrics, such as accuracy.
- Monitor Bias Drift for Models in Production: Monitor bias in your model’s predictions.
- Monitor Feature Attribution Drift for Models in Production: Monitor drift in feature attribution.
For this blog post, I will focus on Data Quality monitoring.
Step 3: Enable model monitoring for data quality
After I subscribed to the machine learning model from AWS Marketplace and deployed the model’s real-time inference endpoint, I enabled Data Quality model monitoring for the model’s endpoint. To establish a monitoring baseline, I modified the sample dataset provided by the seller for this demo.
If you want to follow along, download the baseline training dataset and data drift dataset files and upload them to your S3 bucket.
When testing a machine learning model from AWS Marketplace, thoroughly evaluate its performance and model quality against your ground-truth dataset to ensure it meets your business needs. For this demo, I am using a curated dataset that I modified from the vendor's sample data. In production, you would use your own ground-truth data, which you can also use to establish a model monitoring baseline.
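If you're following along in a notebook, you can upload the downloaded files to your bucket with boto3. This is a minimal sketch that assumes the files are in your working directory; the data drift file name is a placeholder.

```python
import boto3
import sagemaker

bucket = sagemaker.Session().default_bucket()  # or your own bucket name
s3 = boto3.client("s3")

# Baseline training data used to compute the monitoring baseline
s3.upload_file("train.csv", bucket, "third-party-model/train/train.csv")

# Anomalous dataset used later to trigger constraint violations (placeholder file name)
s3.upload_file("data_drift.csv", bucket, "third-party-model/data-drift/data_drift.csv")
```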
- When you complete Step 2 and the model’s endpoint status reads InService, open SageMaker Studio. In the left sidebar, select the Components and registries icon, and then select Endpoints from the drop-down menu.
- To open the Model Monitoring tab, open the context menu (right-click) for the endpoint name and select Describe Endpoint. In my case, I chose third-party-model-endpoint.
- Update the respective S3 locations in the Data quality tab:
- S3 output location: This is where the data quality monitoring data is stored. I chose s3://{bucket}/third-party-model/reports.
- Baseline dataset S3 location: This is the source of the baseline training data. I chose s3://{bucket}/third-party-model/train/train.csv. Here’s the baseline training dataset used for this demo.
- Baseline S3 output location: This is the output of a baseline job. I chose s3://{bucket}/third-party-model/baselining.
- In the Advanced Settings, I updated the Schedule expression to Hourly and entered third-party-model-data-quality-schedule as the Schedule name.
- Update Stopping condition (Seconds) from 86400 to 3600 so that each monitoring job stops within the hourly schedule window.
- To complete the setup, choose Enable Model Monitoring.
Amazon SageMaker has a buffer period of 20 minutes to schedule your execution. You might see your execution start anywhere within the first 20 minutes after the hour boundary, for example, between 1:00 and 1:20. This is expected and done for load balancing on the backend.
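As an alternative to the Studio UI, the same baseline and hourly schedule can be created programmatically. Here's a minimal sketch using the SageMaker Python SDK's DefaultModelMonitor; the IAM role, bucket, and instance type are assumptions to adapt to your environment.

```python
from sagemaker import Session, get_execution_role
from sagemaker.model_monitor import CronExpressionGenerator, DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

session = Session()
bucket = session.default_bucket()
role = get_execution_role()

monitor = DefaultModelMonitor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",   # assumed instance type
    volume_size_in_gb=20,
    max_runtime_in_seconds=3600,    # matches the stopping condition above
)

# Compute baseline statistics and suggested constraints from the training data
monitor.suggest_baseline(
    baseline_dataset=f"s3://{bucket}/third-party-model/train/train.csv",
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri=f"s3://{bucket}/third-party-model/baselining",
)

# Attach an hourly data quality monitoring schedule to the endpoint
monitor.create_monitoring_schedule(
    monitor_schedule_name="third-party-model-data-quality-schedule",
    endpoint_input="third-party-model-endpoint",
    output_s3_uri=f"s3://{bucket}/third-party-model/reports",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```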
Step 4: Generate and detect data drift
For this step, I created Python code to invoke the model’s real-time prediction endpoint in a Jupyter notebook. To invoke the inference endpoint, run the cells in sections 1 and 5, or run all cells to recreate this demo. Update the notebook as needed.
By invoking the inference endpoint with anomalous data, SageMaker Model Monitor detects baseline constraint violations and displays the details in the monitoring job details report. For example, the inference endpoint is expecting a positive integer data type for each sample feature, whereas the anomalous dataset contains negative floating-point values for some sample features. In a live production environment, early “detection and correction” of such deviations may help mitigate potentially larger operational issues.
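Here's a minimal sketch of how a notebook cell can invoke the endpoint with a single CSV record using boto3. The feature values shown are illustrative placeholders, not the seller's actual feature encoding.

```python
import boto3

runtime = boto3.client("sagemaker-runtime")

# One CSV record: gender, age range, household income range, zip code.
# Placeholder values for illustration only.
payload = "1,3,5,98101"

response = runtime.invoke_endpoint(
    EndpointName="third-party-model-endpoint",
    ContentType="text/csv",
    Body=payload,
)
print(response["Body"].read().decode("utf-8"))
```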
To view the monitoring job details report:
- In the SageMaker Studio Model Monitoring console, navigate to the Model job history tab.
- Double-click the monitoring job with a Monitoring status of Issue found to open the Monitoring Job Details tab. The Issue found status indicates that the monitor successfully detected one or more data quality constraint violations from the data drift dataset.
From step 4.1, the following screenshot shows the monitoring job history.
From step 4.2, the following screenshot shows the constraint violations from the data quality model monitoring job.
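You can also read the violations programmatically. The monitoring job writes a constraint_violations.json file to the S3 output location; here's a minimal sketch that loads and prints it, with a placeholder object key because the exact report prefix comes from your monitoring job's output configuration.

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "your-bucket-name"  # replace with your bucket

# Placeholder key: the full prefix comes from the monitoring job's S3 output location
key = "third-party-model/reports/.../constraint_violations.json"

body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
for violation in json.loads(body)["violations"]:
    print(violation["feature_name"],
          violation["constraint_check_type"],
          violation["description"])
```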
Step 5: Visualize the model monitor results
When working with large datasets with many features, it can be helpful to graphically visualize trends in data. SageMaker provides a pre-built notebook for viewing feature statistics or constraint violations.
- To graphically visualize the distribution and distribution statistics for all features, do the following:
- In the SageMaker console Monitoring Job Details tab, copy the full value for the Processing Job ARN to your clipboard.
- Select the View Amazon SageMaker notebook link.
- In the upper right, select Import Notebook.
- Select the Python 3 (Data Science) kernel, and then choose the Select button. It may take a few minutes for the Kernel to start.
- In the Jupyter notebook, update the code cell that contains the variable processing_job_arn with the Processing Job ARN value you copied in step 5.1.1.
- To review the differences between the execution and baseline details from the model monitoring processing job, select Run from the top toolbar, then select the Run All Cells option.
From step 5.1.6, the following screenshot shows common descriptive statistics, including mean, sum, and standard deviation, from the inference dataset analyzed in the model monitoring processing job.
From step 5.1.6, the following screenshot shows inference feature statistics plotted with baseline feature statistics to graphically identify deviations in collected data.
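If you'd rather inspect the statistics outside the pre-built notebook, here's a minimal sketch that loads Model Monitor statistics.json files from S3 into pandas DataFrames and compares baseline and latest feature statistics side by side. The bucket name and report prefix are placeholders.

```python
import json
import boto3
import pandas as pd

s3 = boto3.client("s3")
bucket = "your-bucket-name"  # replace with your bucket


def load_statistics(key):
    """Load a Model Monitor statistics.json file from S3 into a DataFrame."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    rows = []
    for feature in json.loads(body)["features"]:
        stats = feature.get("numerical_statistics", {})
        rows.append({
            "feature": feature["name"],
            "mean": stats.get("mean"),
            "std_dev": stats.get("std_dev"),
            "min": stats.get("min"),
            "max": stats.get("max"),
        })
    return pd.DataFrame(rows).set_index("feature")


baseline = load_statistics("third-party-model/baselining/statistics.json")
# Placeholder key: the report prefix comes from the monitoring job's output location
latest = load_statistics("third-party-model/reports/.../statistics.json")

# Compare baseline and latest statistics side by side to spot deviations
print(baseline.join(latest, lsuffix="_baseline", rsuffix="_latest"))
```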
Next steps
To gain additional quantitative and qualitative insights into machine learning models, check out other SageMaker Model Monitor capabilities such as Model quality monitoring, Model explainability monitoring, and Bias drift monitoring.
Here are some additional resources I recommend checking out:
- Monitoring in-production ML models at large scale using Amazon SageMaker Model Monitor
- Detecting and analyzing incorrect model predictions with Amazon SageMaker Model Monitor and Debugger
- Amazon SageMaker Clarify
Conclusion
In this post, I demonstrated how to subscribe to a pre-trained third-party machine learning model from AWS Marketplace and configure a Data Quality monitoring schedule using Amazon SageMaker Model Monitor.
Using SageMaker Model Monitor to monitor third-party models from AWS Marketplace provides you with a streamlined way to enable continuous monitoring for data drift detection.
About the Author
Bill Screen is a Senior Solutions Architect for the US State, Local Government, and Education team at Amazon Web Services. He’s passionate about helping customers achieve their business objectives with AI/ML solutions.