Posted On: Nov 30, 2022

Amazon SageMaker supports shadow testing to help you validate the performance of new machine learning (ML) models by comparing them to production models. With shadow testing, you can spot potential configuration errors and performance issues before they impact end users. SageMaker eliminates the weeks of work needed to build shadow testing infrastructure, so you can release models to production faster.

Testing model updates involves sending a copy of the inference requests received by the production model to the new model and tracking how it performs. However, it can take several weeks to build your own testing infrastructure, mirror inference requests, and compare how models perform. Amazon SageMaker enables you to evaluate a new ML model by testing its performance against the currently deployed production model. Simply select the production model you want to test against, and SageMaker automatically deploys the new model for inference. SageMaker then routes a copy of the inference requests received by the production model to the new model and creates a live dashboard that shows performance differences across key metrics, including latency and error rate, in real time. Once you have reviewed the metrics and validated the model's performance, you can quickly deploy the model to production.
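For illustration, here is a minimal sketch of how a shadow variant might be set up programmatically with the boto3 SageMaker client, using the ShadowProductionVariants parameter of CreateEndpointConfig. The endpoint, endpoint config, and model names are hypothetical placeholders, and the instance settings are assumptions for the example.

```python
import boto3

sm = boto3.client("sagemaker")

# Create an endpoint config that pairs a production variant with a
# shadow variant. The shadow variant receives a copy of the inference
# traffic sent to the production variant; its responses are logged for
# comparison rather than returned to callers.
sm.create_endpoint_config(
    EndpointConfigName="my-shadow-test-config",  # hypothetical name
    ProductionVariants=[
        {
            "VariantName": "production-variant",
            "ModelName": "my-production-model",  # hypothetical model
            "InstanceType": "ml.m5.xlarge",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 1.0,
        }
    ],
    ShadowProductionVariants=[
        {
            "VariantName": "shadow-variant",
            "ModelName": "my-candidate-model",  # hypothetical model
            "InstanceType": "ml.m5.xlarge",
            "InitialInstanceCount": 1,
            # Fraction of production traffic mirrored to the shadow variant.
            "InitialVariantWeight": 1.0,
        }
    ],
)

# Deploy (or update) the endpoint with the shadow configuration.
sm.create_endpoint(
    EndpointName="my-endpoint",  # hypothetical name
    EndpointConfigName="my-shadow-test-config",
)
```

In the SageMaker console, the same workflow is available as a guided shadow test that also sets up the comparison dashboard described above.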

To learn more, see the Amazon SageMaker shadow testing web page. For pricing information, see Amazon SageMaker Pricing. SageMaker support for shadow testing is generally available in all AWS Regions where SageMaker inference is available, except the China (Beijing, operated by Sinnet), China (Ningxia, operated by NWCD), and AWS GovCloud (US) Regions.