What does this AWS Solutions Implementation do?

The AWS MLOps Framework solution helps you streamline and enforce architecture best practices for machine learning (ML) model productionization. This solution is an extendable framework that provides a standard interface for managing ML pipelines for AWS ML services and third-party services. The solution’s template allows customers to upload their trained models (also referred to as bring your own model), configure the orchestration of the pipeline, and monitor the pipeline's operations. This solution increases your team’s agility and efficiency by allowing them to repeat successful processes at scale.

Benefits

Leverage a pre-configured machine learning pipeline

Use the solution's reference architecture to initiate a pre-configured pipeline through an API call or a Git repository.


Automatically deploy a trained model and inference endpoint

Use the solution's framework to automate the model monitor pipeline or the Amazon SageMaker BYOM pipeline. Deliver an inference endpoint with model drift detection packaged as a serverless microservice.

AWS Solutions Implementation overview

The diagrams below present the serverless architectures you can automatically deploy using the solution’s implementation guide and accompanying AWS CloudFormation template.

  • Option 1 - Single account deployment
  • Option 2 - Multi-account deployment

    AWS MLOps Framework reference architecture (Single account deployment)

    Use the single account template to deploy all of the solution’s pipelines in the same AWS account. This option is suitable for experimentation, development, and/or small-scale production workloads.

    This solution’s single-account template provides the following components and workflows:

    1. The Orchestrator (solution owner or DevOps engineer) launches the solution in the AWS account and selects the desired options (for example, using Amazon SageMaker Model Registry, or providing an existing Amazon S3 bucket).
    2. The Orchestrator uploads the required assets for the target pipeline (for example, model artifact, training data, and/or custom algorithm zip file) into the Assets Amazon S3 bucket. If Amazon SageMaker Model Registry is used, the Orchestrator (or an automated pipeline) must register the model with the Model Registry.
    3. A single-account AWS CodePipeline instance is provisioned either by sending an API call to Amazon API Gateway or by committing the mlops-config.json file to the Git repository (see the sketch after this list). Depending on the pipeline type, the orchestrator AWS Lambda function packages the target AWS CloudFormation template and its parameters/configurations from the body of the API call or the mlops-config.json file, and uses the package as the source stage for the AWS CodePipeline instance.
    4. The DeployPipeline stage takes the packaged CloudFormation template and its parameters/configurations and deploys the target pipeline into the same account.
    5. After the target pipeline is provisioned, users can access its functionalities. An Amazon Simple Notification Service (Amazon SNS) notification is sent to the email provided in the solution’s launch parameters.
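
    As a concrete illustration of step 3, the sketch below provisions a pipeline by sending a POST request to the solution’s Amazon API Gateway endpoint. The endpoint path, the payload field names, and their values are illustrative assumptions rather than the solution’s exact request schema; the implementation guide documents the actual API, which may also require requests to be signed with AWS IAM (SigV4) credentials.

        # Minimal sketch of step 3: provision a pipeline via an API call.
        # The endpoint path and every payload field below are assumptions for
        # illustration only; see the implementation guide for the real schema.
        import json
        import urllib.request

        # API Gateway URL reported in the solution's CloudFormation stack outputs.
        API_BASE = "https://<api-id>.execute-api.<region>.amazonaws.com/prod"

        payload = {
            "pipeline_type": "byom_realtime_builtin",   # hypothetical pipeline identifier
            "model_name": "my-model",
            "model_artifact_location": "model.tar.gz",  # key in the Assets S3 bucket
            "inference_instance": "ml.m5.large",
        }

        # The same JSON body could instead be committed as mlops-config.json to
        # the solution's Git repository to trigger the pipeline.
        request = urllib.request.Request(
            url=API_BASE + "/provisionpipeline",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            print(response.read().decode("utf-8"))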

    AWS MLOps Framework reference architecture (Multi-account deployment)

    Use the multi-account template to provision multiple environments (for example, development, staging, and production) across different AWS accounts. This approach improves governance and increases the security and control of the ML pipeline’s deployment, enables safe experimentation and faster innovation, and keeps production data and workloads secure and available to ensure business continuity.

    This solution’s multi-account template provides the following components and workflows:

    1. The Orchestrator (solution owner or DevOps engineer with admin access to the orchestrator account) provides the AWS Organizations information (for example, development, staging, and production organizational unit IDs and account numbers). They also specify the desired options (for example, using Amazon SageMaker Model Registry, or providing an existing Amazon S3 bucket), and then launch the solution in their AWS account.
    2. The Orchestrator uploads the required assets for the target pipeline (for example, model artifact, training data, and/or custom algorithm zip file) into the Assets Amazon S3 bucket in the AWS Orchestrator account. If Amazon SageMaker Model Registry is used, the Orchestrator (or an automated pipeline) must register the model with the Model Registry.
    3. A multi-account AWS CodePipeline instance is provisioned either by sending an API call to Amazon API Gateway or by committing the mlops-config.json file to the Git repository. Depending on the pipeline type, the orchestrator AWS Lambda function packages the target AWS CloudFormation template and its parameters/configurations for each stage from the body of the API call or the mlops-config.json file, and uses the package as the source stage for the AWS CodePipeline instance.
    4. The DeployDev stage takes the packaged CloudFormation template and its parameters/configurations and deploys the target pipeline into the development account.
    5. After the target pipeline is provisioned into the development account, the developer can then iterate on the pipeline.
    6. After development is finished, the Orchestrator (or another authorized account) manually approves the DeployStaging action to move to the DeployStaging stage (see the sketch after this list).
    7. The DeployStaging stage deploys the target pipeline into the staging account, using the staging configuration.
    8. Testers perform different tests on the deployed pipeline.
    9. After the pipeline passes quality tests, the Orchestrator can approve the DeployProd action.
    10. The DeployProd stage deploys the target pipeline (with production configurations) into the production account.
    11. Finally, the target pipeline is live in production. An Amazon Simple Notification Service (Amazon SNS) notification is sent to the email provided in the solution’s launch parameters.
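
    Steps 6 and 9 are manual approval gates in AWS CodePipeline. They are typically approved from the CodePipeline console, but the sketch below shows the equivalent call with the AWS SDK for Python (boto3). The pipeline, stage, and action names used here are assumptions and will differ in an actual deployment.

        # Sketch: approve the pending manual approval action that gates the
        # DeployStaging stage (step 6). Names below are assumptions; check the
        # deployed pipeline for the actual pipeline, stage, and action names.
        import boto3

        codepipeline = boto3.client("codepipeline")
        pipeline_name = "mlops-multi-account-pipeline"  # hypothetical name
        stage_name = "DeployStaging"
        action_name = "DeployStaging"

        # Look up the token of the pending approval action.
        state = codepipeline.get_pipeline_state(name=pipeline_name)
        token = None
        for stage in state["stageStates"]:
            if stage["stageName"] == stage_name:
                for action in stage["actionStates"]:
                    if action["actionName"] == action_name:
                        token = action.get("latestExecution", {}).get("token")

        if token:
            # Record the approval; CodePipeline then continues to the next stage.
            codepipeline.put_approval_result(
                pipelineName=pipeline_name,
                stageName=stage_name,
                actionName=action_name,
                result={"summary": "Staging deployment approved", "status": "Approved"},
                token=token,
            )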

Video
Solving with AWS Solutions: AWS ML Ops Framework