Machine Learning Operations (MLOps) Workload Orchestrator streamlines ML model deployment and enforces best practices for scalability, reliability, and efficiency. This AWS Solution is an extendable framework with a standard interface for managing ML pipelines across AWS ML and third-party services.
This solution includes an AWS CloudFormation template. This template enables model training, uploading of pre-trained models (also known as bring your own model or BYOM), pipeline orchestration configuration, and pipeline operation monitoring. By implementing this solution, your team can increase their agility and efficiency, repeating successful processes at scale.
Benefits
Launch with a pre-configured ML pipeline
Initiate a pre-configured pipeline through an API call or an Amazon S3 bucket.
Automatically deploy a trained model and inference endpoint
Automate model monitoring with Amazon SageMaker BYOM and deliver a serverless inference endpoint with drift detection.
Centralized visibility of your ML resources
Use the Amazon SageMaker Model Dashboard to view, search, and explore all of your Amazon SageMaker resources, including models, endpoints, model cards, and batch transform jobs.
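The Model Dashboard is a SageMaker console feature; for teams that prefer scripts, a rough programmatic equivalent of that inventory view is sketched below with boto3. The result limits and printed fields are illustrative.

```python
# Rough programmatic counterpart to browsing SageMaker resources in the Model Dashboard.
# Assumes AWS credentials and a default region are already configured.
import boto3

sm = boto3.client("sagemaker")

# Most recently created models
for m in sm.list_models(SortBy="CreationTime", SortOrder="Descending", MaxResults=10)["Models"]:
    print("model:", m["ModelName"])

# Deployed endpoints and their status
for e in sm.list_endpoints(MaxResults=10)["Endpoints"]:
    print("endpoint:", e["EndpointName"], e["EndpointStatus"])

# Recent batch transform jobs
for j in sm.list_transform_jobs(MaxResults=10)["TransformJobSummaries"]:
    print("batch transform job:", j["TransformJobName"], j["TransformJobStatus"])
```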
Technical details
You can automatically deploy this architecture using the implementation guide and the accompanying AWS CloudFormation template. To support multiple use cases and business needs, the solution provides two AWS CloudFormation templates:
Use the single-account template to deploy all of the solution’s pipelines in the same AWS account. This option is suitable for experimentation, development, and small-scale production workloads (a launch sketch follows this list).
Use the multi-account template to provision multiple environments (for example, development, staging, and production) across different AWS accounts. This option improves governance, increases security and control over the ML pipeline’s deployment, supports safe experimentation and faster innovation, and keeps production data and workloads secure and available to help ensure business continuity.
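As a rough illustration of the launch step, the snippet below creates the single-account stack with boto3. The template URL and the NotificationEmail parameter key are placeholders, not the solution’s actual values; the implementation guide lists the real template location and parameters.

```python
# Hypothetical launch of the single-account template with boto3.
# The template URL and parameter key below are placeholders; use the values
# from the implementation guide for a real deployment.
import boto3

cfn = boto3.client("cloudformation")

cfn.create_stack(
    StackName="mlops-workload-orchestrator",
    TemplateURL="https://example-bucket.s3.amazonaws.com/mlops-single-account.template",  # placeholder
    Parameters=[
        {"ParameterKey": "NotificationEmail", "ParameterValue": "ml-team@example.com"},  # assumed key
    ],
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],
)

# Wait until the stack finishes creating before using the solution's API or buckets.
cfn.get_waiter("stack_create_complete").wait(StackName="mlops-workload-orchestrator")
```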
Option 1 - Single-account deployment
Step 1: The Orchestrator, which could be a DevOps engineer or another type of user, launches this solution in their AWS account and selects their preferred options. For example, they can use the Amazon SageMaker model registry or an existing Amazon Simple Storage Service (Amazon S3) bucket.
Step 2: The Orchestrator uploads the required assets, such as the model artifact, training data, or custom algorithm zip file, into the Amazon S3 assets bucket. If using the SageMaker model registry, the Orchestrator (or an automated pipeline) must register the model with the model registry.
Step 3a: A single-account AWS CodePipeline instance is provisioned by either sending an API call to Amazon API Gateway or by uploading the mlops-config.json file to the configuration Amazon S3 bucket (a minimal configuration-upload sketch follows these steps).
Step 3b: Depending on the pipeline type, the AWS Lambda Orchestrator function packages the target CloudFormation template and its parameters and configurations from the body of the API call or the mlops-config.json file, and uses them as the source stage for the CodePipeline instance.
Step 4: The DeployPipeline stage takes the packaged CloudFormation template and its parameters and configurations and deploys the target pipeline into the same account.
Step 5: After the target pipeline is provisioned, users can access its functionalities. An Amazon Simple Notification Service (Amazon SNS) notification is sent to the email address provided in the solution’s launch parameters.
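As a rough sketch of step 3a, the snippet below provisions a pipeline by uploading an mlops-config.json file to the configuration bucket with boto3. The bucket name and the configuration keys are illustrative assumptions; the implementation guide documents the solution’s actual configuration schema, and the same request body could instead be sent to the solution’s API Gateway endpoint.

```python
# Illustrative provisioning request: upload an mlops-config.json to the configuration bucket.
# Bucket name and configuration keys are assumptions for this sketch; consult the
# implementation guide for the solution's actual configuration schema.
import json
import boto3

s3 = boto3.client("s3")

config = {
    "pipeline_type": "byom_realtime_builtin",   # assumed field name and value
    "model_name": "my-model",                   # assumed
    "model_artifact_location": "model.tar.gz",  # assumed: object key inside the assets bucket
    "inference_instance": "ml.m5.large",        # assumed
}

s3.put_object(
    Bucket="mlops-configuration-bucket",        # placeholder for the bucket the stack creates
    Key="mlops-config.json",
    Body=json.dumps(config).encode("utf-8"),
)
```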
Option 2 - Multi-account deployment
Step 1: The Orchestrator, which could be a DevOps engineer or another user with admin access to the orchestrator account, provides the AWS Organizations information, such as the development, staging, and production organizational unit IDs and account numbers. They also specify the desired options, such as using the SageMaker model registry or providing an existing Amazon S3 bucket, and then launch the solution in their AWS account.
Step 2: The Orchestrator uploads the required assets for the target pipeline, such as the model artifact, training data, or custom algorithm zip file, into the Amazon S3 assets bucket in the Orchestrator’s AWS account. If the SageMaker model registry is used, the Orchestrator (or an automated pipeline) must register the model with the model registry.
Step 3a: A multi-account AWS CodePipeline instance is provisioned by either sending an API call to Amazon API Gateway or by uploading the mlops-config.json file to the configuration Amazon S3 bucket.
Step 3b: Depending on the pipeline type, the AWS Lambda Orchestrator function packages the target CloudFormation template and its parameters and configurations from the body of the API call or the mlops-config.json file, and uses them as the source stage for the CodePipeline instance.
Step 4: The DeployDev stage takes the packaged CloudFormation template and its parameters and configurations and deploys the target pipeline into the development account.
Step 5: After the target pipeline is provisioned in the development account, the developer can iterate on the pipeline.
Step 6: After development is finished, the Orchestrator (or another authorized account) manually approves the DeployStaging action to move to the next stage, DeployStaging (an approval sketch follows these steps).
Step 7: The DeployStaging stage deploys the target pipeline into the staging account, using the staging configuration.
Step 8: Testers perform different tests on the deployed pipeline.
Step 9: After the pipeline passes quality tests, the Orchestrator can approve the DeployProd action.
Step 10: The DeployProd stage deploys the target pipeline (with production configurations) into the production account.
Step 11: The target pipeline is live in production. An Amazon SNS notification is sent to the email address provided in the solution’s launch parameters.
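As a rough sketch of the manual gates in steps 6 and 9, the snippet below finds a waiting approval action on the multi-account pipeline and approves it with boto3. The pipeline name is a placeholder; the actual stage and action names come from the CodePipeline instance the solution provisions.

```python
# Hedged sketch of approving the manual gate before DeployStaging or DeployProd.
# The pipeline name is a placeholder; check the provisioned CodePipeline instance
# for the names the solution actually generates.
import boto3

cp = boto3.client("codepipeline")
pipeline_name = "mlops-multi-account-pipeline"  # placeholder

state = cp.get_pipeline_state(name=pipeline_name)
for stage in state["stageStates"]:
    for action in stage["actionStates"]:
        execution = action.get("latestExecution", {})
        # Manual approval actions expose a token while they are waiting for a decision.
        if "token" in execution:
            cp.put_approval_result(
                pipelineName=pipeline_name,
                stageName=stage["stageName"],
                actionName=action["actionName"],
                result={"summary": "Validation passed", "status": "Approved"},
                token=execution["token"],
            )
```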
Related content
Case Study
Cognizant MLOps Model Lifecycle Orchestrator Speeds Deployment of Machine Learning Models from Weeks to Hours Using AWS Solutions
In collaboration with the AWS Partner Solutions Architect and AWS Solutions Library teams, Cognizant built its MLOps Model Lifecycle Orchestrator solution on top of the MLOps Workload Orchestrator solution.