What does this AWS Solutions Implementation do?

The AWS MLOps Framework solution helps you streamline and enforce architecture best practices for machine learning (ML) model productionization. This solution is an extendable framework that provides a standard interface for managing ML pipelines for AWS ML services and third-party services. The solution’s template allows customers to upload their trained models (also referred to as bring your own model), configure the orchestration of the pipeline, and monitor the pipeline's operations. This solution increases your team’s agility and efficiency by allowing them to repeat successful processes at scale.

Leverage a pre-configured machine learning pipeline

Use the solution's reference architecture to initiate a pre-configured pipeline through an API call or a Git repository.

Automatically deploy a trained model and inference endpoint

Use the solution's framework to automate the model monitor pipeline or the Amazon SageMaker BYOM pipeline. Deliver an inference endpoint with model drift detection packaged as a serverless microservice.
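For context, deploying a trained model to a real-time inference endpoint comes down to a handful of Amazon SageMaker API calls, which the solution's pipelines run for you. The following boto3 sketch shows the equivalent manual steps; every name, image URI, and role ARN here is a placeholder for illustration, not a value the solution itself creates.

```python
import boto3

sm = boto3.client("sagemaker")

# Register the model: container image in Amazon ECR plus model artifact in S3.
sm.create_model(
    ModelName="demo-model",
    PrimaryContainer={
        "Image": "<account>.dkr.ecr.<region>.amazonaws.com/<repo>:latest",
        "ModelDataUrl": "s3://<assets-bucket>/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
)

# Describe the hardware the endpoint runs on.
sm.create_endpoint_config(
    EndpointConfigName="demo-endpoint-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "demo-model",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

# Create the real-time inference endpoint.
sm.create_endpoint(
    EndpointName="demo-endpoint",
    EndpointConfigName="demo-endpoint-config",
)
```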

AWS Solutions Implementation overview

Deploying this solution with the default parameters builds the following environment in the AWS Cloud.  

AWS MLOps Framework | Reference Architecture Diagram

AWS MLOps Framework reference architecture

The AWS CloudFormation template deploys a pipeline provisioning framework that provisions a machine learning pipeline (bring your own model, or BYOM, for Amazon SageMaker). The template includes the AWS Lambda functions and AWS Identity and Access Management (IAM) roles necessary to set up your account and create an Amazon Simple Storage Service (Amazon S3) bucket that contains the CloudFormation templates that set up the pipelines. It accepts an existing S3 bucket name or creates a new bucket to use as the assets bucket. The template also creates an Amazon API Gateway instance, an additional Lambda function, and AWS CodePipeline instances.
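The stack can also be launched programmatically with the AWS SDK. The following boto3 sketch illustrates this; the template URL, stack name, and parameter key are placeholders rather than the solution's published values, which are listed in the implementation guide.

```python
import boto3

cfn = boto3.client("cloudformation")

# Launch the solution's CloudFormation template.
# TemplateURL and the parameter key below are placeholders -- check the
# implementation guide for the actual template location and parameters.
cfn.create_stack(
    StackName="aws-mlops-framework",
    TemplateURL="https://<solutions-bucket>.s3.amazonaws.com/aws-mlops-framework.template",
    Parameters=[
        # Leave empty to have the solution create a new assets bucket,
        # or supply the name of an existing S3 bucket.
        {"ParameterKey": "ExistingS3BucketName", "ParameterValue": ""},
    ],
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],
)
```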

The solution provides two pipeline options: Bring your own model (BYOM) and model monitor.

The BYOM pipeline includes four stages: source, build, deploy, and share (a sample provisioning request sketch follows this list).

  • Source—AWS CodePipeline connects to an S3 bucket containing the provisioned model artifacts and a Dockerfile.
  • Build—The pipeline uses the provisioned model artifacts (for example, a Python .pkl file) and the Dockerfile to create a Docker image that is uploaded and stored in Amazon Elastic Container Registry (Amazon ECR). This stage is only initiated when you specify that the model is a custom algorithm.
  • Deploy—An AWS Lambda function uses the Docker image to create a model and an endpoint on Amazon SageMaker.
  • Share—An AWS Lambda function connects the Amazon SageMaker endpoint to Amazon API Gateway.
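The sketch below shows what a provisioning request for the BYOM pipeline might look like when sent to the solution's API Gateway endpoint. The URL and payload field names are assumptions made for illustration; the actual API contract is defined in the solution's implementation guide, and the API may require IAM-authorized (SigV4-signed) requests.

```python
import requests

# Hypothetical endpoint URL and payload fields -- illustration only.
api_url = "https://<api-id>.execute-api.<region>.amazonaws.com/prod/provisionpipeline"

payload = {
    "pipeline_type": "byom",
    "model_name": "demo-model",
    "model_artifact_location": "model.tar.gz",   # key in the assets S3 bucket
    "inference_instance": "ml.m5.large",
    # "custom_image": "Dockerfile",              # only for custom algorithms
}

# If the API is IAM-protected, sign the request with SigV4
# (for example, using requests-aws4auth or botocore's auth helpers).
response = requests.post(api_url, json=payload, timeout=30)
print(response.status_code, response.json())
```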
 
The model monitor pipeline includes two stages, source and deploy (a sketch of the equivalent SageMaker calls follows the list):
  • Source—AWS CodePipeline connects to an assets S3 bucket containing the training data (a CSV file with a header) used to train the deployed ML model that will be monitored.
  • Deploy
    • Create baseline job—Creates a data baseline processing job using the training data. The output of the processing job is stored in the assets S3 bucket.
    • Create monitoring schedule—Creates a monitoring schedule job to monitor a deployed ML model on an Amazon SageMaker endpoint.
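As an illustration of what the deploy stage automates, the following sketch uses the SageMaker Python SDK to run a baseline processing job on the training data and attach a monitoring schedule to an existing endpoint. All names, S3 URIs, and the IAM role are placeholders; the solution performs equivalent steps through its Lambda functions and CloudFormation templates.

```python
from sagemaker.model_monitor import CronExpressionGenerator, DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

# Placeholder role, bucket, and endpoint names -- illustration only.
monitor = DefaultModelMonitor(
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

# 1) Baseline job: profile the training data (CSV with header) from the assets bucket.
monitor.suggest_baseline(
    baseline_dataset="s3://<assets-bucket>/training-data.csv",
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://<assets-bucket>/baseline-output/",
)

# 2) Monitoring schedule: periodically compare endpoint traffic with the baseline.
monitor.create_monitoring_schedule(
    monitor_schedule_name="demo-model-monitor",
    endpoint_input="<deployed-endpoint-name>",
    output_s3_uri="s3://<assets-bucket>/monitor-output/",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```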

AWS MLOps Framework

Version 1.1.0
Release date: 01/2021
Author: AWS

Estimated deployment time: 3 min


Video: Solving with Solutions: AWS ML Ops Framework
Deploy a Solution yourself

Browse our library of AWS Solutions Implementations to get answers to common architectural problems.

Find an APN Partner

Find AWS certified consulting and technology partners to help you get started.

Explore Solutions Consulting Offers

Browse our portfolio of Consulting Offers to get AWS-vetted help with solution deployment.