Posted On: Feb 5, 2021
AWS Solutions has updated the AWS MLOps Framework, an AWS Solutions Implementation that now provides a pipeline for model monitoring. In addition to the bring-your-own-model pipeline, the solution can provision multiple model monitor pipelines that periodically check the quality of ML models deployed on Amazon SageMaker endpoints. This added functionality is key to maintaining model performance, as customers are notified when drift in model quality, bias, or feature importance occurs in production.
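To illustrate the idea behind drift detection, the sketch below compares live feature statistics against a recorded baseline and flags features whose mean has shifted. This is a simplified, hypothetical example for intuition only; the solution itself uses Amazon SageMaker Model Monitor, and the function and threshold here are not part of its API.

```python
# Illustrative sketch of statistical drift detection (hypothetical code;
# the actual solution relies on Amazon SageMaker Model Monitor).
import statistics

def detect_drift(baseline, live, threshold=3.0):
    """Flag a feature as drifted when its live mean deviates from the
    baseline mean by more than `threshold` baseline standard deviations."""
    drifted = []
    for feature, values in live.items():
        base = baseline[feature]
        z_score = abs(statistics.mean(values) - base["mean"]) / base["std"]
        if z_score > threshold:
            drifted.append(feature)
    return drifted

# Baseline captured at training time; live values captured at the endpoint.
baseline = {"age": {"mean": 40.0, "std": 5.0}}
print(detect_drift(baseline, {"age": [41, 39, 42]}))  # no drift: []
print(detect_drift(baseline, {"age": [70, 72, 69]}))  # drift: ['age']
```

In practice, SageMaker Model Monitor performs this kind of comparison on a schedule against a baseline computed from the training dataset, and the solution raises notifications when violations are reported.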
The AWS MLOps Framework streamlines the pipeline deployment process and enforces architecture best practices for machine learning (ML) model productionization. This solution addresses common operational pain points that customers face when adopting multiple ML workflow automation tools.
The solution allows customers to upload their trained models, configure the orchestration of the pipeline, trigger the start of the deployment process, move models through different stages of deployment, and monitor the successes and failures of these operations. Customers can configure the pipeline for either batch or real-time inference to suit their business context.
This solution provides the following key features:
- Initiates a pre-configured pipeline through an API call or a Git repository
- Automatically deploys a trained model and provides an inference endpoint
- Continuously monitors deployed machine learning models and detects any deviation in their quality
- Supports running your own integration tests to ensure that the deployed model meets expectations
- Allows provisioning of multiple environments to support your ML model’s life cycle
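As a rough illustration of the first feature above, initiating a pipeline through an API call amounts to sending a JSON request describing the pipeline to provision. The endpoint path, field names, and pipeline type below are illustrative assumptions, not the solution's documented schema; consult the implementation guide for the actual API contract.

```python
# Hypothetical sketch of the request body a customer might POST to the
# solution's provisioning API to initiate a pre-configured pipeline.
# Field names and values are assumptions for illustration only.
import json

def build_provision_request(pipeline_type, model_name, model_artifact_location):
    """Assemble the JSON body describing the pipeline to provision."""
    return json.dumps({
        "pipeline_type": pipeline_type,              # e.g. a bring-your-own-model pipeline
        "model_name": model_name,
        "model_artifact_location": model_artifact_location,
    })

body = build_provision_request(
    "byom",                          # assumed pipeline type identifier
    "my-model",
    "s3://my-bucket/model.tar.gz",
)
print(body)
```

Alternatively, per the same feature list, customers can drive the pipeline from a Git repository, where committing a configuration file triggers provisioning instead of an explicit API call.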
Additional AWS Solutions are available on the AWS Solutions Implementation webpage, where customers can browse solutions by product category or industry to find AWS-vetted, automated, turnkey reference implementations that address specific business needs.