What does this AWS Solutions Implementation do?

The AWS MLOps Framework solution helps you streamline and enforce architecture best practices for machine learning (ML) model productionization. This solution is an extendable framework that provides a standard interface for managing ML pipelines for AWS ML services and third-party services. The solution’s template allows customers to upload their trained models (also referred to as bring your own model), configure the orchestration of the pipeline, and monitor the pipeline's operations. This solution increases your team’s agility and efficiency by allowing them to repeat successful processes at scale.

This solution provides the following key features:

  • Initiates a pre-configured pipeline through an API call or a Git repository
  • Automatically deploys a trained model and provides an inference endpoint
  • Supports running your own integration tests to ensure that the deployed model meets expectations
  • Allows you to provision multiple environments to support your ML model’s life cycle
  • Notifies users of the pipeline outcome via email
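For example, initiating a pre-configured pipeline through the API might look like the following sketch. The endpoint path (`/provisionpipeline`) and the payload field names are illustrative assumptions, not the solution's documented API contract; consult the solution's implementation guide for the actual request format.

```python
import json
import urllib.request

def build_provision_request(api_url: str, config: dict) -> urllib.request.Request:
    """Build a POST request asking the solution's API Gateway endpoint
    to provision a pipeline. URL and payload shape are illustrative."""
    return urllib.request.Request(
        api_url,
        data=json.dumps(config).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical pipeline configuration (field names are assumptions).
pipeline_config = {
    "pipeline_type": "byom",
    "model_name": "my-model",
    "model_artifact_location": "model.tar.gz",
    "inference_instance": "ml.m5.large",
}

req = build_provision_request(
    "https://example.execute-api.us-east-1.amazonaws.com/prod/provisionpipeline",
    pipeline_config,
)
# urllib.request.urlopen(req) would send the call once the solution is deployed.
```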

AWS Solutions Implementation overview

Deploying this solution with the default parameters builds the following environment in the AWS Cloud.  


AWS MLOps Framework reference architecture

The AWS CloudFormation template deploys a pipeline provisioning framework that provisions a machine learning pipeline (Bring Your Own Model for SageMaker). The template includes the AWS Lambda functions and AWS Identity and Access Management (IAM) roles necessary to set up your account, and it creates an Amazon Simple Storage Service (Amazon S3) bucket containing the CloudFormation templates that set up the pipelines. The template also creates an Amazon API Gateway instance, an additional Lambda function, an AWS CodePipeline instance, and an AWS CodeBuild instance.
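Launching the stack with the AWS CLI might look like the following CLI fragment. The stack name, template URL, and parameter key are illustrative assumptions; the actual template location and parameters are listed in the solution's deployment guide.

```shell
# Deploy the solution's CloudFormation template (names are illustrative).
aws cloudformation create-stack \
  --stack-name mlops-framework \
  --template-url https://example-bucket.s3.amazonaws.com/aws-mlops-framework.template \
  --parameters ParameterKey=NotificationEmail,ParameterValue=user@example.com \
  --capabilities CAPABILITY_NAMED_IAM
```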

Each provisioned pipeline includes four stages: source, build, deploy, and share.

  • Source—AWS CodePipeline connects to an S3 bucket containing the provisioned model artifacts and a Dockerfile.
  • Build—The pipeline uses the provisioned model artifacts (for example, a Python .pkl file) and the Dockerfile to create a Docker image, which is stored in Amazon Elastic Container Registry (Amazon ECR).
  • Deploy—An AWS Lambda function uses the Docker image to create a model and an endpoint on Amazon SageMaker.
  • Share—An AWS Lambda function connects the Amazon SageMaker endpoint to Amazon API Gateway.
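A minimal Dockerfile for the build stage might look like this sketch. The base image, dependencies, and `serve.py` script are illustrative assumptions; what SageMaker actually requires is that the container respond to `/ping` and `/invocations` HTTP requests on port 8080.

```dockerfile
# Illustrative SageMaker-compatible inference container (not the solution's own).
FROM python:3.8-slim

RUN pip install --no-cache-dir flask gunicorn scikit-learn

# serve.py is a hypothetical script implementing the /ping and
# /invocations endpoints that SageMaker calls on port 8080.
COPY serve.py model.pkl /opt/program/
WORKDIR /opt/program

EXPOSE 8080
ENTRYPOINT ["gunicorn", "-b", "0.0.0.0:8080", "serve:app"]
```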

This solution’s architecture provides the following components and workflows:

  1. The orchestrator (solution owner or DevOps engineer) launches the solution in their AWS account, then either calls the API created in API Gateway to provision the pipeline or commits the mlops-config.json file to the Git repository.

  2. After provisioning, if your model is a custom algorithm not available as an Amazon SageMaker built-in algorithm, the orchestrator uploads a zip file containing a Dockerfile and the files necessary to build an Amazon SageMaker-compatible Docker image.
  3. The orchestrator uploads the model artifact into the Assets S3 bucket. The upload automatically initiates the pipeline:
      ◦ If the model is a custom algorithm, AWS CodeBuild uses the model artifact and the Dockerfile to build a Docker container image, which is then stored in Amazon ECR.
      ◦ An AWS Lambda function creates a model, an endpoint configuration, and an endpoint, using either the stored Docker image from Amazon ECR or a built-in Amazon SageMaker image. AWS CodePipeline then initiates the share stage.
      ◦ In the share stage, an AWS Lambda function connects the SageMaker inference endpoint to API Gateway, enabling users to invoke their deployed model through the API created for inference.
      ◦ An Amazon SNS notification is sent to the email address provided when the solution was launched.

  4. Users can test and invoke their deployed model through the API Gateway endpoint connected to their model in SageMaker.
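The deploy step above can be sketched as Lambda logic that drives the SageMaker control-plane API. This is an illustrative sketch, not the solution's actual function: the resource names, instance type, and parameter values are assumptions, while the three boto3 SageMaker calls (`create_model`, `create_endpoint_config`, `create_endpoint`) are real API operations.

```python
def deploy_model(sm_client, model_name: str, image_uri: str,
                 model_data_url: str, role_arn: str) -> str:
    """Sketch of deploy-stage Lambda logic: create a SageMaker model,
    an endpoint configuration, and an endpoint from a container image.
    `sm_client` is a boto3 SageMaker client, e.g. boto3.client("sagemaker")."""
    sm_client.create_model(
        ModelName=model_name,
        PrimaryContainer={"Image": image_uri, "ModelDataUrl": model_data_url},
        ExecutionRoleArn=role_arn,
    )
    sm_client.create_endpoint_config(
        EndpointConfigName=f"{model_name}-config",
        ProductionVariants=[{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InstanceType": "ml.m5.large",   # illustrative choice
            "InitialInstanceCount": 1,
        }],
    )
    sm_client.create_endpoint(
        EndpointName=f"{model_name}-endpoint",
        EndpointConfigName=f"{model_name}-config",
    )
    return f"{model_name}-endpoint"
```

With a real boto3 client this issues three SageMaker control-plane calls in order; the share stage then exposes the returned endpoint name behind an API Gateway route.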

AWS MLOps Framework

Version 1.0.0
Last updated: 10/2020
Author: AWS

Estimated deployment time: 3 min


Leverage a pre-configured machine learning pipeline

Use the solution's reference architecture to initiate a pre-configured pipeline through an API call or a Git repository.

Automatically deploy a trained model and inference endpoint

Use the solution's framework to automate the Amazon SageMaker bring-your-own-model (BYOM) workflow, delivering an inference endpoint with model drift detection packaged as a serverless microservice.