Overview
Machine Learning Operations (MLOps) Workload Orchestrator streamlines ML model deployment and enforces best practices for scalability, reliability, and efficiency. This AWS Solution is an extendable framework with a standard interface for managing ML pipelines across AWS ML and third-party services.
This solution includes an AWS CloudFormation template that enables model training, upload of pre-trained models (also known as bring your own model, or BYOM), pipeline orchestration configuration, and pipeline operation monitoring. By implementing this solution, your team can increase its agility and efficiency, repeating successful processes at scale.
Benefits
- Initiate a pre-configured pipeline through an API call or an Amazon S3 bucket.
- Automate model monitoring with Amazon SageMaker BYOM and deliver a serverless inference endpoint with drift detection.
- Use the Amazon SageMaker Model Dashboard to view, search, and explore all of your Amazon SageMaker resources, including models, endpoints, model cards, and batch transform jobs.
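The first benefit above, initiating a pre-configured pipeline through an API call, amounts to sending a small JSON request to the API endpoint the solution creates. As a sketch only (the `pipeline_type` value and field names below are assumptions to verify against the implementation guide for your deployed stack), a BYOM real-time inference pipeline request might be assembled like this:

```python
import json

# Hypothetical provisioning request body for the solution's API endpoint.
# Field names (pipeline_type, model_name, model_artifact_location,
# inference_instance) are assumptions, not guaranteed by this page.
def build_provision_request(model_name: str, artifact_s3_uri: str) -> str:
    payload = {
        "pipeline_type": "byom_realtime_builtin",  # a pre-configured BYOM pipeline
        "model_name": model_name,
        "model_artifact_location": artifact_s3_uri,  # S3 URI of the model artifact
        "inference_instance": "ml.m5.large",
    }
    return json.dumps(payload)

body = build_provision_request("my-xgboost-model", "s3://my-bucket/model.tar.gz")
print(body)
```

The resulting JSON body would then be POSTed, with SigV4-signed credentials, to the provisioning endpoint that the CloudFormation stack creates.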
Technical details
You can automatically deploy this architecture using the implementation guide and the accompanying AWS CloudFormation template. To support multiple use cases and business needs, the solution provides two AWS CloudFormation templates:
- Use the single-account template to deploy all of the solution’s pipelines in the same AWS account. This option is suitable for experimentation, development, and small-scale production workloads.
- Use the multi-account template to provision multiple environments (for example, development, staging, and production) across different AWS accounts. This option improves governance, increases security and control of the ML pipeline’s deployment, supports safe experimentation and faster innovation, and keeps production data and workloads secure and available to help ensure business continuity.
Option 1 - Single-account deployment
Step 1
The Orchestrator (for example, a DevOps engineer or another user) launches this solution in their AWS account and selects their preferred options, such as using the Amazon SageMaker model registry or an existing Amazon Simple Storage Service (Amazon S3) bucket.
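Launching the solution in Step 1 means deploying its CloudFormation template. The sketch below only assembles the arguments for a boto3 `create_stack` call; the stack name, template URL, and parameter key are illustrative placeholders, not the solution's documented values, and the actual AWS call is left commented out because it requires valid credentials:

```python
# Illustrative arguments for deploying the solution's CloudFormation template.
# TemplateURL and the NotificationEmail parameter key are placeholders;
# take the real values from the implementation guide.
stack_args = {
    "StackName": "mlops-workload-orchestrator",
    "TemplateURL": "https://example-bucket.s3.amazonaws.com/mlops-template.yaml",
    "Parameters": [
        {"ParameterKey": "NotificationEmail", "ParameterValue": "ops@example.com"},
    ],
    "Capabilities": ["CAPABILITY_IAM"],  # the template creates IAM roles
}

# With credentials configured, the deployment itself would be:
# import boto3
# cfn = boto3.client("cloudformation")
# cfn.create_stack(**stack_args)

print(stack_args["StackName"])
```

Keeping the arguments in a plain dict like this makes it easy to review the exact parameters (and required IAM capabilities) before anything is actually provisioned.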
Option 2 - Multi-account deployment
Related content
In collaboration with the AWS Partner Solutions Architect and AWS Solutions Library teams, Cognizant built its MLOps Model Lifecycle Orchestrator solution on top of the MLOps Workload Orchestrator solution.
- Version: 2.2.3
- Released: 12/2024
- Author: AWS
- Estimated deployment time: 3 minutes
- Estimated cost: See details