

Guidance for Optimizing MLOps for Sustainability on AWS

Overview

This Guidance provides best practices to help you optimize machine learning (ML) operations (MLOps) for environmental sustainability. While customers across industries are committed to reducing their carbon footprints, ML workloads are becoming increasingly complex, consuming more energy and resources. This Guidance helps you review and refine your workloads to maximize utilization and minimize both waste and the total resources deployed and powered to support your workload across all stages of the ML lifecycle, including data collection, data storage, feature engineering, training, inference, and deployment.

How it works

Data Preparation

These technical details feature an architecture diagram to illustrate how to effectively use this solution. The architecture diagram shows the key components and their interactions, providing an overview of the architecture's structure and functionality step-by-step.
Architecture diagram illustrating the AWS MLOps (machine learning operations) data preparation process for sustainability. It features components such as Amazon SageMaker Pipelines, AWS Trainium, AWS Inferentia, Amazon CloudWatch, AWS CodePipeline, and SageMaker services for data preparation, model training, deployment, and model management workflows.

Model Training and Tuning

Architecture diagram illustrating an AWS MLOps pipeline for sustainable model training and tuning using services such as Amazon SageMaker, AWS Trainium, Amazon CloudWatch, AWS CodePipeline, and Amazon S3. The diagram covers data preparation, feature creation, experiment tracking, model training, evaluation, deployment, and management with automated scaling and monitoring capabilities.

Model Deployment and Management

Architecture diagram illustrating the AWS MLOps workflow for sustainability, covering data preparation, model training and tuning, and model deployment and management using Amazon SageMaker, AWS Inferentia, AWS Trainium, AWS CodePipeline, and related services.

Well-Architected Pillars

The architecture diagrams above are examples of a solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.

CloudWatch metrics and alarms monitor the health of model endpoints deployed on SageMaker hosting options, allowing you to record performance-related metrics, analyze metrics when events or incidents occur, establish KPIs to measure workload performance, and monitor and alarm proactively. Additionally, collecting and analyzing metrics for training jobs and inference environments using CloudWatch allows you to analyze workload health trends and conduct periodic workload metric reviews with your organization.
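
As an illustration, the following boto3 sketch alarms on the ModelLatency metric that SageMaker endpoints publish to CloudWatch. The endpoint name, variant name, and latency threshold are placeholder assumptions, not values prescribed by this Guidance.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the endpoint's average model latency stays high.
# "my-endpoint" and "AllTraffic" are hypothetical names.
cloudwatch.put_metric_alarm(
    AlarmName="my-endpoint-high-latency",
    Namespace="AWS/SageMaker",
    MetricName="ModelLatency",
    Dimensions=[
        {"Name": "EndpointName", "Value": "my-endpoint"},
        {"Name": "VariantName", "Value": "AllTraffic"},
    ],
    Statistic="Average",
    Period=300,               # evaluate over 5-minute windows
    EvaluationPeriods=3,      # alarm after 3 consecutive breaches
    Threshold=500000,         # ModelLatency is in microseconds (500 ms)
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)
```

The same pattern applies to training-job metrics, so you can alarm on unhealthy trends across both training and inference environments.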

Read the Operational Excellence whitepaper 

AWS Identity and Access Management (IAM) controls access to resources and managed services to help ensure least privilege access, secure the ML environment, and protect against adversarial and malicious activities. Data is encrypted at rest on Amazon Simple Storage Service (Amazon S3) and SageMaker Feature Store, both of which use AWS Key Management Service (AWS KMS) to protect sensitive data.
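
For example, the sketch below creates a SageMaker Feature Store feature group whose online and offline stores are encrypted with a customer-managed KMS key. The feature group schema, key ARN, bucket, and role are hypothetical placeholders.

```python
import boto3

sagemaker = boto3.client("sagemaker")

# Hypothetical names; the KMS key, IAM role, and S3 bucket must already exist.
sagemaker.create_feature_group(
    FeatureGroupName="customers",
    RecordIdentifierFeatureName="customer_id",
    EventTimeFeatureName="event_time",
    FeatureDefinitions=[
        {"FeatureName": "customer_id", "FeatureType": "String"},
        {"FeatureName": "event_time", "FeatureType": "String"},
        {"FeatureName": "lifetime_value", "FeatureType": "Fractional"},
    ],
    # Encrypt the online store with a customer-managed KMS key.
    OnlineStoreConfig={
        "EnableOnlineStore": True,
        "SecurityConfig": {
            "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"
        },
    },
    # Encrypt the offline store (backed by Amazon S3) with the same key.
    OfflineStoreConfig={
        "S3StorageConfig": {
            "S3Uri": "s3://my-feature-store-bucket/offline",
            "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
        }
    },
    RoleArn="arn:aws:iam::111122223333:role/FeatureStoreAccess",
)
```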

Read the Security whitepaper 

SageMaker allows automatic scaling of model endpoints to reliably process predictions and meet changing workload demands. It also distributes instances across Availability Zones in case an outage occurs or an instance fails. SageMaker Pipelines allows for versioned pipeline inputs and artifacts, and SageMaker Projects allows for versioned data processing code. This versioning helps you create a repeatable approach and retain data in case you need to roll back to a previous state.
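
As a sketch of how endpoint autoscaling can be configured, the following boto3 example registers a hypothetical endpoint variant with Application Auto Scaling and attaches a target tracking policy on invocations per instance. The endpoint name, capacity limits, and target value are assumptions.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Hypothetical endpoint and variant names.
resource_id = "endpoint/my-endpoint/variant/AllTraffic"

# Register the production variant as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Scale on invocations per instance using target tracking.
autoscaling.put_scaling_policy(
    PolicyName="invocations-target-tracking",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 1000.0,  # invocations per instance per minute
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 60,
    },
)
```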

Read the Reliability whitepaper 

We selected the services in this Guidance to improve performance without compromising the accuracy of training results. For example, managed ML services, such as SageMaker, deliver better performance through pre-optimized ML components. SageMaker Inference Recommender helps you choose an endpoint configuration that increases performance while reducing inference time. Purpose-built accelerators, such as AWS Trainium for training and AWS Inferentia for inference, can further accelerate these workloads.
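
For instance, a default Inference Recommender job can be started for a model registered in the SageMaker Model Registry, as in the sketch below; the model package ARN, role, and job name are placeholders.

```python
import boto3

sagemaker = boto3.client("sagemaker")

# Start a default Inference Recommender job for a registered model package.
sagemaker.create_inference_recommendations_job(
    JobName="my-model-recommendation",
    JobType="Default",  # quick instance-type recommendations
    RoleArn="arn:aws:iam::111122223333:role/SageMakerExecution",
    InputConfig={
        "ModelPackageVersionArn": (
            "arn:aws:sagemaker:us-east-1:111122223333:"
            "model-package/my-model/1"
        )
    },
)

# Retrieve the ranked instance-type recommendations once the job completes.
results = sagemaker.describe_inference_recommendations_job(
    JobName="my-model-recommendation"
)
```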

Read the Performance Efficiency whitepaper 

SageMaker services have built-in features that help you optimize costs related to model training. For example, SageMaker Feature Store helps avoid the cost of storing and processing duplicated datasets. SageMaker Debugger allows you to stop a training job as soon as a bug is detected, saving costs associated with unnecessary training job executions. SageMaker Training Compiler reduces training time and costs on GPU instances. Serverless pipelines, SageMaker Asynchronous Endpoints, and SageMaker Batch Transform avoid the cost of maintaining compute infrastructure at all hours of the day.
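
As one example, the following SageMaker Python SDK sketch attaches the built-in loss_not_decreasing Debugger rule with a StopTraining action to a training job, so the job halts as soon as the rule fires. The image URI, role, instance type, and S3 paths are placeholders.

```python
from sagemaker.debugger import Rule, rule_configs
from sagemaker.estimator import Estimator

# Stop the training job automatically if the loss stops decreasing,
# avoiding wasted compute on a run that is no longer improving.
stop_on_stalled_loss = Rule.sagemaker(
    rule_configs.loss_not_decreasing(),
    actions=rule_configs.ActionList(rule_configs.StopTraining()),
)

estimator = Estimator(
    image_uri="<training-image-uri>",          # placeholder
    role="arn:aws:iam::111122223333:role/SageMakerExecution",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/output",
    rules=[stop_on_stalled_loss],
)

estimator.fit({"train": "s3://my-bucket/train"})
```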

Read the Cost Optimization whitepaper 

SageMaker Serverless Inference Endpoints and SageMaker Asynchronous Endpoints automatically scale resources in response to demand. Serverless Inference Endpoints scale down to zero when there are no requests, minimizing unnecessary provisioned resources and reducing carbon emissions. Serverless technologies, such as Serverless Inference Endpoints and SageMaker Pipelines, also help eliminate idle resources because you don't have to spin up servers.
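
For example, the boto3 sketch below creates a serverless endpoint that SageMaker scales with traffic, down to zero when idle. The model name, endpoint names, memory size, and concurrency cap are illustrative assumptions.

```python
import boto3

sagemaker = boto3.client("sagemaker")

# Create an endpoint config with no provisioned instances; SageMaker
# scales capacity with traffic and down to zero when there are no requests.
sagemaker.create_endpoint_config(
    EndpointConfigName="my-serverless-config",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-model",  # an existing SageMaker model
            "ServerlessConfig": {
                "MemorySizeInMB": 2048,  # 1024-6144, in 1 GB increments
                "MaxConcurrency": 5,     # cap on concurrent invocations
            },
        }
    ],
)

sagemaker.create_endpoint(
    EndpointName="my-serverless-endpoint",
    EndpointConfigName="my-serverless-config",
)
```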

Read the Sustainability whitepaper 

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
