AWS Marketplace

Monitoring data quality in third-party models with Amazon SageMaker Model Monitor

Building, training, and deploying machine learning models from scratch can be a time-consuming and costly endeavor for some customers. Moreover, once deployed to production, machine learning models need to be continuously monitored for deviations in model and data quality.

To help you expedite model deployment and implement a model monitoring solution, you can integrate pre-trained models from AWS Marketplace that support CSV and flat JSON input with Amazon SageMaker Model Monitor.

AWS Marketplace offers hundreds of pre-trained models with a range of capabilities, such as object detection, buyer propensity, natural language processing, data extraction, and feature engineering.

Amazon SageMaker Model Monitor offers built-in analysis based on statistical rules to detect drifts in data and model quality. Data quality deviations monitored by SageMaker Model Monitor include anomalous data types and incomplete data such as data with a high percentage of null values. Data with too many or too few columns can also affect data quality.
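
To make these checks concrete, here is a minimal sketch in plain Python of the kinds of rules Model Monitor automates: null-value ratios and column-count mismatches against a baseline. The function names and the 20% threshold are my own illustrative choices, not SageMaker's.

```python
def null_fraction(rows, column):
    """Fraction of rows whose value for `column` is None or empty."""
    values = [row.get(column) for row in rows]
    missing = sum(1 for v in values if v in (None, ""))
    return missing / len(values) if values else 0.0

def check_quality(rows, baseline_columns, max_null_fraction=0.2):
    """Return human-readable violations; an empty list means the batch is clean."""
    violations = []
    for row in rows:
        # Too many or too few columns relative to the baseline schema.
        if set(row) != set(baseline_columns):
            violations.append(f"unexpected columns: {sorted(row)}")
            break
    for col in baseline_columns:
        # Incomplete data: a high percentage of null values in a feature.
        frac = null_fraction(rows, col)
        if frac > max_null_fraction:
            violations.append(f"{col}: {frac:.0%} null values")
    return violations
```

Model Monitor applies statistically derived constraints rather than fixed thresholds like these, but the shape of the checks is the same.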

In this blog post, I will demonstrate how to subscribe to a pre-trained third-party model from AWS Marketplace. I’ll also show how to configure a Data Quality monitoring schedule using Amazon SageMaker Model Monitor.

Solution Overview

For this demo, I deployed a buyer propensity model from AWS Marketplace. This model predicts the probability that a consumer is planning to purchase a home based on their gender, age range, household income range, and zip code. Here’s an overview of the steps I’ll walk through:

  1. Subscribe to the Propensity-Planning to Buy a House model in AWS Marketplace.
  2. Set up the model endpoint and endpoint configuration in the SageMaker console.
  3. Create a Data Quality monitoring schedule.
  4. Invoke the model’s inference endpoint with sample data via a Jupyter notebook.
  5. View the data quality monitoring job details and visualize feature distribution statistics.


Prerequisites

  • An active AWS account
  • Access to run this demo in an Amazon SageMaker Studio notebook
  • Basic understanding of Machine Learning models
  • Familiarity with Jupyter notebooks and Python
  • Familiarity with the Amazon SageMaker console and SageMaker Studio


Step 1: Subscribe to the Propensity-Planning to Buy a House model in AWS Marketplace

  1. Open the AWS Marketplace listing Propensity-Planning to Buy a House (V 1.0) and choose Continue to Subscribe.
  2. On the Configure and launch page, select the SageMaker console launch method, and then choose View in Amazon SageMaker.

Step 2: Create a real-time inference endpoint

  1. For Step 1 on the Create endpoint page, enter a unique model name. I entered third-party-model. Be sure the selected IAM role has sufficient permissions, or let SageMaker create the role. Choose Next.
  2. For Step 2 on the Create endpoint page, update the following form fields:
    1. Enter a unique endpoint name: third-party-model-endpoint
    2. Next, create a New endpoint configuration.
    3. Update the Endpoint configuration name to: third-party-model-endpoint-config
    4. To capture prediction request and response information, select Enable data capture.
    5. Enter the S3 location to store data collected: s3://{bucket}/third-party-model/datacapture
    6. For the Sampling percentage (%), enter 100
    7. For the Capture content type, in the CSV/Text text area, enter text/csv
    8. Choose Create endpoint configuration. At the bottom of the page, choose Submit.

It may take a few minutes for the model’s endpoint status to change from Creating to InService.
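
The console steps above can equivalently be scripted. The following is a hedged sketch using boto3; the bucket is a placeholder, the instance type is my assumption (check the listing for supported types), and it assumes the model named third-party-model was already created when you launched the Marketplace package.

```python
BUCKET = "your-bucket-name"  # placeholder: substitute your own bucket

# Mirror of the console form fields from Step 2.
data_capture_config = {
    "EnableCapture": True,
    "InitialSamplingPercentage": 100,
    "DestinationS3Uri": f"s3://{BUCKET}/third-party-model/datacapture",
    "CaptureOptions": [{"CaptureMode": "Input"}, {"CaptureMode": "Output"}],
    "CaptureContentTypeHeader": {"CsvContentTypes": ["text/csv"]},
}

def create_endpoint():
    import boto3  # imported here so the sketch reads without AWS credentials

    sm = boto3.client("sagemaker")
    sm.create_endpoint_config(
        EndpointConfigName="third-party-model-endpoint-config",
        ProductionVariants=[{
            "VariantName": "AllTraffic",
            "ModelName": "third-party-model",
            "InstanceType": "ml.m5.large",  # assumption: pick a type the listing supports
            "InitialInstanceCount": 1,
        }],
        DataCaptureConfig=data_capture_config,
    )
    sm.create_endpoint(
        EndpointName="third-party-model-endpoint",
        EndpointConfigName="third-party-model-endpoint-config",
    )

# create_endpoint()  # uncomment to run against your own account
```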

Amazon SageMaker Model Monitor currently provides the following types of monitoring:

  • Data quality
  • Model quality
  • Bias drift in model predictions
  • Feature attribution drift (model explainability)

For this blog post, I will focus on Data Quality monitoring.

Step 3: Enable Model Monitoring for Data Quality

After I subscribed to the model from AWS Marketplace and deployed the model’s real-time inference endpoint, I enabled Data Quality model monitoring for the model’s endpoint. To establish a monitoring baseline, I modified the sample dataset provided by the seller for this demo.

If you want to follow along, download the baseline training dataset and data drift dataset files and upload them to your S3 bucket.

When testing a model from AWS Marketplace, thoroughly evaluate its performance and quality against your own ground-truth dataset to ensure it meets your business needs. For this demo, I am using a curated dataset that I modified from the vendor’s sample data. In production, you would use your own ground-truth data, which can also serve as the basis for the model monitoring baseline.

  1. When you complete Step 2 and the model’s endpoint status reads InService, open SageMaker Studio. In the left sidebar, select the Components and registries icon, and then select Endpoints from the drop-down menu.
  2. To open the Model Monitoring tab, open the context menu (right-click) for the endpoint name and select Describe Endpoint. In my case, I chose third-party-model-endpoint.
  3. Update the respective S3 locations in the Data quality tab:
    1. S3 output location: This is where the data quality monitoring data is stored. I chose s3://{bucket}/third-party-model/reports.
    2. Baseline dataset S3 location: This is the source of the baseline training data. I chose s3://{bucket}/third-party-model/train/train.csv. Here’s the baseline training dataset used for this demo.
    3. Baseline S3 output location: This is the output of a baseline job. I chose s3://{bucket}/third-party-model/baselining.
    4. In the Advanced Settings, I updated the Schedule expression to Hourly and renamed the Schedule name by entering third-party-model-data-quality-schedule.
    5. Update the Stopping condition (Seconds) from 86400 to 3600 so that each monitoring job’s maximum runtime matches the hourly schedule.
    6. To complete the setup, choose Enable Model Monitoring.

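The Studio configuration above can also be expressed with the SageMaker Python SDK. A sketch, assuming the sagemaker library is installed; the bucket and role ARN are placeholders, and the instance type is my own choice:

```python
BUCKET = "your-bucket-name"  # placeholder
ROLE_ARN = "arn:aws:iam::111122223333:role/SageMakerRole"  # placeholder

PATHS = {
    "baseline_dataset": f"s3://{BUCKET}/third-party-model/train/train.csv",
    "baseline_output": f"s3://{BUCKET}/third-party-model/baselining",
    "reports": f"s3://{BUCKET}/third-party-model/reports",
}

def create_data_quality_schedule():
    # Imported here so the sketch reads without the sagemaker library installed.
    from sagemaker.model_monitor import CronExpressionGenerator, DefaultModelMonitor
    from sagemaker.model_monitor.dataset_format import DatasetFormat

    monitor = DefaultModelMonitor(
        role=ROLE_ARN,
        instance_count=1,
        instance_type="ml.m5.xlarge",  # assumption: any supported processing instance
        max_runtime_in_seconds=3600,   # the stopping condition from step 3.5
    )
    # Profile the training data to produce baseline statistics and constraints.
    monitor.suggest_baseline(
        baseline_dataset=PATHS["baseline_dataset"],
        dataset_format=DatasetFormat.csv(header=True),
        output_s3_uri=PATHS["baseline_output"],
    )
    # Compare each hour of captured traffic against that baseline.
    monitor.create_monitoring_schedule(
        monitor_schedule_name="third-party-model-data-quality-schedule",
        endpoint_input="third-party-model-endpoint",
        output_s3_uri=PATHS["reports"],
        statistics=monitor.baseline_statistics(),
        constraints=monitor.suggested_constraints(),
        schedule_cron_expression=CronExpressionGenerator.hourly(),
    )

# create_data_quality_schedule()  # uncomment to run against your own account
```
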
Amazon SageMaker has a buffer period of 20 minutes to schedule your execution. You might see your execution start anywhere within the first 20 minutes after the hour boundary, for example, between 1:00 and 1:20. This is expected and done for load balancing on the backend.

Step 4: Generate data drift detection

For this step, I created Python code to invoke the model’s real-time prediction endpoint in a Jupyter notebook. To invoke the inference endpoint, run the cells in sections 1 and 5, or run all cells to recreate this demo. Update the notebook as needed.
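
The invocation the notebook performs can be sketched with boto3’s SageMaker runtime client. The helper below is my own; consult the seller’s usage instructions for the model’s actual input encoding, since the feature values here are placeholders.

```python
def to_csv_payload(rows):
    """Serialize feature rows into the text/csv body the endpoint expects."""
    return "\n".join(",".join(str(v) for v in row) for row in rows)

def invoke(rows, endpoint_name="third-party-model-endpoint"):
    import boto3  # imported here so the helper above stays dependency-free

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="text/csv",  # matches the capture content type from Step 2
        Body=to_csv_payload(rows),
    )
    # With data capture enabled, this request/response pair is also written
    # to the S3 datacapture prefix for the monitoring job to analyze.
    return response["Body"].read().decode("utf-8")
```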

By invoking the inference endpoint with anomalous data, SageMaker Model Monitor detects baseline constraint violations and displays the details in the monitoring job details report. For example, the inference endpoint is expecting a positive integer data type for each sample feature, whereas the anomalous dataset contains negative floating-point values for some sample features. In a live production environment, early “detection and correction” of such deviations may help mitigate potentially larger operational issues.

To view the monitoring job details report:

  1. In the SageMaker Studio Model Monitoring console, navigate to the Model job history tab.
  2. In the Monitoring status column, Issue found indicates that the monitor successfully detected one or more data quality constraint violations from the data drift dataset. Double-click a monitoring job with the status Issue found to open the Monitoring Job Details tab.

From step 4.1, the following screenshot shows the monitoring job history.

From step 4.2, the following screenshot shows the constraint violations from the data quality model monitoring job.
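
Executions and their violation reports can also be inspected programmatically rather than through Studio. A hedged sketch: the report structure follows Model Monitor’s constraint_violations.json format, and fetching that file from the reports S3 prefix is left out because its path includes the execution timestamp.

```python
def summarize_violations(report):
    """Flatten a constraint_violations.json document into readable lines."""
    return [
        f"{v['feature_name']}: {v['constraint_check_type']}"
        for v in report.get("violations", [])
    ]

def latest_execution_status(schedule_name="third-party-model-data-quality-schedule"):
    import boto3  # imported here so summarize_violations stays dependency-free

    sm = boto3.client("sagemaker")
    summaries = sm.list_monitoring_executions(
        MonitoringScheduleName=schedule_name,
        SortBy="ScheduledTime",
        SortOrder="Descending",
        MaxResults=1,
    )["MonitoringExecutionSummaries"]
    # "CompletedWithViolations" corresponds to "Issue found" in Studio.
    return summaries[0]["MonitoringExecutionStatus"] if summaries else None
```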

Step 5: Visualize the model monitor results

When working with large datasets with many features, it can be helpful to graphically visualize trends in data. SageMaker provides a pre-built notebook for viewing feature statistics or constraint violations.

  1. To graphically visualize the distribution and distribution statistics for all features, do the following:
    1. In the SageMaker console Monitoring Job Details tab, copy the full value for the Processing Job ARN to your clipboard.
    2. Select the View Amazon SageMaker notebook link.
    3. In the upper right, select Import Notebook.
    4. Select the Python 3 (Data Science) kernel, and then choose the Select button. It may take a few minutes for the Kernel to start.
    5. In the Jupyter notebook, update the code cell that contains the variable processing_job_arn with the value from the Processing Job ARN from step 5.1.1.
    6. To review the differences between the execution and baseline details from the model monitoring processing job, select Run from the top toolbar, then select the Run All Cells option.

From step 5.1.6, the following screenshot shows common descriptive statistics, including mean, sum, and standard deviation, from the inference dataset analyzed in the model monitoring processing job.

From step 5.1.6, the following screenshot shows inference feature statistics plotted with baseline feature statistics to graphically identify deviations in collected data.
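
The numbers behind those charts come from the statistics.json documents that baseline and monitoring jobs write to S3. As a sketch of pulling per-feature values out of one (the structure shown follows Model Monitor’s statistics.json schema; the helper name is my own):

```python
def numerical_summary(statistics_doc):
    """Extract mean and standard deviation for each numerical feature
    from a Model Monitor statistics.json document."""
    summary = {}
    for feature in statistics_doc.get("features", []):
        stats = feature.get("numerical_statistics")
        if stats:  # string features carry string_statistics instead
            summary[feature["name"]] = {
                "mean": stats["mean"],
                "std_dev": stats["std_dev"],
            }
    return summary
```

Comparing the output of this helper for a baseline document against a monitoring-execution document gives the same deviation view the pre-built notebook plots.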

Next steps

To gain additional quantitative and qualitative insights into machine learning models, check out other SageMaker Model Monitor capabilities such as Model quality monitoring, Model explainability monitoring, and Bias drift monitoring.

Here are some additional resources I recommend checking out:

  1. Monitoring in-production ML models at large scale using Amazon SageMaker Model Monitor
  2. Detecting and analyzing incorrect model predictions with Amazon SageMaker Model Monitor and Debugger
  3. Amazon SageMaker Clarify


In this post, I demonstrated how to subscribe to a pre-trained third-party model from the AWS Marketplace and configure a Data Quality monitoring schedule using Amazon SageMaker Model Monitor.

Using SageMaker Model Monitor to monitor third-party models from AWS Marketplace provides you with a streamlined way to enable continuous monitoring for data drift detection.

About the Author

Bill Screen is a Senior Solutions Architect for the US State, Local Government, and Education team at Amazon Web Services. He’s passionate about helping customers achieve their business objectives with AI/ML solutions.