MLOps Workload Orchestrator

Deploy a robust pipeline that uses managed automation tools and ML services to simplify ML model development and production

Overview

The MLOps Workload Orchestrator solution helps you streamline and enforce architecture best practices for machine learning (ML) model productionization. This solution is an extendable framework that provides a standard interface for managing ML pipelines for AWS ML services and third-party services. The solution’s template allows customers to:

  • Train models
  • Upload their trained models (also referred to as bring your own model [BYOM])
  • Configure the pipeline orchestration
  • Monitor the pipeline's operations

This solution increases your team’s agility and efficiency by allowing them to repeat successful processes at scale.


Benefits

Leverage a pre-configured machine learning pipeline
Use the solution's reference architecture to initiate a pre-configured pipeline through an API call or a Git repository.
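As a minimal sketch, a pipeline-provisioning API call could be prepared as shown below. The field names (`pipeline_type`, `model_name`, `model_artifact_location`) and the example pipeline-type value are assumptions for illustration only; consult the solution's implementation guide for the actual API contract.

```python
import json

def build_provision_request(pipeline_type: str, model_name: str,
                            model_artifact_location: str) -> str:
    """Serialize a hypothetical pipeline-provisioning request body as JSON."""
    payload = {
        "pipeline_type": pipeline_type,                      # assumed field name
        "model_name": model_name,                            # assumed field name
        "model_artifact_location": model_artifact_location,  # assumed field name
    }
    return json.dumps(payload)

body = build_provision_request(
    "byom_realtime_builtin",           # illustrative pipeline-type value
    "demo-model",
    "s3://my-bucket/model.tar.gz",
)
# The body would then be POSTed to the solution's API Gateway endpoint,
# for example with urllib.request or the requests library.
print(body)
```

The same pipeline can alternatively be initiated from a Git repository, as the reference architecture describes.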
Automatically deploy a trained model and inference endpoint
Use the solution's framework to automate the model monitor pipeline or the Amazon SageMaker BYOM pipeline. Deliver an inference endpoint with model drift detection packaged as a serverless microservice.
View your resources in a dashboard
Use the Amazon SageMaker Model Dashboard to view your solution-created Amazon SageMaker resources (such as models, endpoints, model cards, and batch transform jobs).

Technical details

To support multiple use cases and business needs, the solution provides two AWS CloudFormation templates:

  1. Use the single-account template to deploy all of the solution’s pipelines in the same AWS account. This option is suitable for experimentation, development, and/or small-scale production workloads.
  2. Use the multi-account template to provision multiple environments (for example, development, staging, and production) across different AWS accounts. This option improves governance, increases security and control over the ML pipeline’s deployment, enables safe experimentation and faster innovation, and keeps production data and workloads secure and available to support business continuity.
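Either template is deployed as a standard AWS CloudFormation stack. The sketch below builds the parameters for a `create-stack` call; the stack name and template URL are placeholders, so take the real values from the solution's launch page. With boto3 installed, the same dict could be passed to `boto3.client("cloudformation").create_stack(**params)`.

```python
def create_stack_params(stack_name: str, template_url: str) -> dict:
    """Assemble CloudFormation create-stack parameters for the solution."""
    return {
        "StackName": stack_name,
        "TemplateURL": template_url,  # placeholder; use the solution's template URL
        # The solution creates IAM resources, so these capabilities are required.
        "Capabilities": ["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],
    }

params = create_stack_params(
    "mlops-workload-orchestrator",
    "https://example.com/mlops-workload-orchestrator-single-account.template",
)

# Equivalent AWS CLI invocation:
cli = (
    "aws cloudformation create-stack "
    f"--stack-name {params['StackName']} "
    f"--template-url {params['TemplateURL']} "
    "--capabilities " + " ".join(params["Capabilities"])
)
print(cli)
```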
Case Study
Cognizant MLOps Model Lifecycle Orchestrator Speeds Deployment of Machine Learning Models from Weeks to Hours Using AWS Solutions

In collaboration with the AWS Partner Solutions Architect and AWS Solutions Library teams, Cognizant built its MLOps Model Lifecycle Orchestrator solution on top of the MLOps Workload Orchestrator solution.

About this deployment
Version
2.1.2
Released
04/2023
Author
AWS
Est. deployment time
3 mins