This Guidance provides best practices to help you optimize machine learning (ML) operations (MLOps) for environmental sustainability. While customers across industries are committed to reducing their carbon footprints, ML workloads are becoming increasingly complex and consume more energy and resources. This Guidance helps you review and refine your workloads to maximize utilization and to minimize both waste and the total resources deployed and powered to support them, across all aspects of the ML lifecycle: data collection, data storage, feature engineering, training, inference, and deployment.
Architecture Diagram
Data Preparation
This architecture diagram focuses on data preparation. For more details about other aspects of the ML lifecycle, open the other tabs.
Step 1
Choose a Region based on both business requirements and sustainability goals. When regulations and legal aspects allow, use one of the AWS Regions where the electricity consumed is attributable to 100% renewable energy, or Regions where the grid has a published carbon intensity lower than other locations (or Regions). When selecting a Region, aim to minimize data movement across networks: store your data close to your producers and train your models close to your data.
Step 2
Adopt a serverless architecture for your pipeline so that resources are provisioned only when work needs to be done. Use Amazon SageMaker Pipelines to avoid maintaining compute infrastructure at all times. You can extend a template provided by Amazon SageMaker Projects, such as the MLOps template for model building, training, deployment, and monitoring with Amazon SageMaker Model Monitor.
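As an illustration of this serverless pattern, the following minimal Python sketch defines a one-step SageMaker pipeline with the SageMaker Python SDK. The role ARN, S3 paths, container, and instance type are placeholder values, not part of this Guidance.

```python
# Minimal sketch of a SageMaker pipeline with a single training step. Compute is
# provisioned only while the step runs; nothing stays on between executions.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder

estimator = Estimator(
    image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1"),
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/model-artifacts/",  # placeholder
    sagemaker_session=session,
)

train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput("s3://my-bucket/prepared/train/", content_type="text/csv")},
)

pipeline = Pipeline(name="sustainable-mlops-pipeline", steps=[train_step], sagemaker_session=session)
pipeline.upsert(role_arn=role)  # create or update the pipeline definition
pipeline.start()                # no long-running infrastructure to maintain between runs
```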
Step 3
Reduce duplication and repeated runs of feature engineering code across teams and projects by using Amazon SageMaker Feature Store.
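As an illustration, this sketch creates a feature group and ingests engineered features once so other teams can reuse them instead of recomputing them. The DataFrame contents, bucket, and role are placeholders.

```python
# Sketch: publish engineered features to SageMaker Feature Store for reuse.
import time
import pandas as pd
import sagemaker
from sagemaker.feature_store.feature_group import FeatureGroup

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder

features = pd.DataFrame({"customer_id": ["c-1", "c-2"], "avg_order_value": [42.5, 18.0]})
features["customer_id"] = features["customer_id"].astype("string")
features["event_time"] = time.time()  # Feature Store requires an event-time feature

feature_group = FeatureGroup(name="customer-features", sagemaker_session=session)
feature_group.load_feature_definitions(data_frame=features)  # infer feature types
feature_group.create(
    s3_uri="s3://my-bucket/feature-store/",  # offline store location (placeholder)
    record_identifier_name="customer_id",
    event_time_feature_name="event_time",
    role_arn=role,
    enable_online_store=True,
)
while feature_group.describe()["FeatureGroupStatus"] == "Creating":
    time.sleep(15)  # wait until the feature group is ready before ingesting

feature_group.ingest(data_frame=features, max_workers=1, wait=True)
```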
Step 4
Reduce the volume of data to be stored and adopt sustainable storage options to limit the carbon impact of your workload. Use energy-efficient, archival-class storage for infrequently accessed data, such as your raw data. If you can easily re-create an infrequently accessed dataset, such as training, validation, and test data, use the Amazon Simple Storage Service (Amazon S3) One Zone-Infrequent Access storage class to minimize the total data stored.
Manage the lifecycle of all your data and automatically enforce deletion timelines to minimize the total storage requirements of your workload using Amazon S3 Lifecycle policies. Amazon S3 Intelligent-Tiering automatically moves your data to the most energy-efficient access tier when access patterns change. Define data retention periods that support your sustainability goals while meeting your business requirements, not exceeding them.
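A minimal sketch of such a lifecycle policy, using boto3: it archives raw data and expires re-creatable intermediate datasets. The bucket name, prefixes, and retention periods are placeholders to adapt to your own requirements.

```python
# Sketch: transition raw data to archival storage and expire derived datasets
# so storage (and its footprint) does not grow unbounded.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-ml-data-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {   # rarely accessed raw data: move to an archival storage class
                "ID": "archive-raw-data",
                "Filter": {"Prefix": "raw/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "DEEP_ARCHIVE"}],
            },
            {   # train/validation/test splits can be re-created, so delete them
                "ID": "expire-derived-datasets",
                "Filter": {"Prefix": "prepared/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            },
        ]
    },
)
```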
Model Training and Tuning
This architecture diagram focuses on model training and tuning. For more details about other aspects of the ML lifecycle, open the other tabs.
Step 5
For distributed training of large deep learning models, use the Amazon SageMaker model parallelism library in your training code to maximize usage of graphics processing units (GPUs).
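For illustration, this sketch enables the model parallelism library on a PyTorch training job through the estimator's distribution configuration. The script, instance type, partition count, and other parameters are illustrative placeholders; the training script itself must be adapted to use the library.

```python
# Sketch: enable the SageMaker model parallelism library for a PyTorch training job.
from sagemaker.pytorch import PyTorch

role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder

estimator = PyTorch(
    entry_point="train_smp.py",        # training script adapted for the library (placeholder)
    role=role,
    instance_count=1,
    instance_type="ml.p4d.24xlarge",   # multi-GPU instance
    framework_version="1.13",
    py_version="py39",
    distribution={
        "smdistributed": {
            "modelparallel": {
                "enabled": True,
                "parameters": {"partitions": 4, "microbatches": 8, "optimize": "speed"},
            }
        },
        "mpi": {"enabled": True, "processes_per_host": 4},
    },
)
estimator.fit({"train": "s3://my-bucket/prepared/train/"})
```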
Step 6
Use Amazon SageMaker Training Compiler to compile your deep learning models from their high-level language representation to hardware-optimized instructions to reduce training time. This can speed up deep learning model training by up to 50%.
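A minimal sketch of enabling Training Compiler on a Hugging Face training job follows. The script, framework versions, and hyperparameters are illustrative and must match a combination that Training Compiler supports.

```python
# Sketch: turn on SageMaker Training Compiler for a Hugging Face training job.
from sagemaker.huggingface import HuggingFace, TrainingCompilerConfig

role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder

estimator = HuggingFace(
    entry_point="train.py",            # your training script (placeholder)
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    transformers_version="4.21",
    pytorch_version="1.11",
    py_version="py38",
    compiler_config=TrainingCompilerConfig(),  # compile the model to hardware-optimized instructions
    hyperparameters={"epochs": 3, "train_batch_size": 24},
)
estimator.fit({"train": "s3://my-bucket/prepared/train/"})
```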
Step 7
Use Bayesian optimization search rather than random or grid search. Bayesian search typically requires 10 times fewer jobs than random search to find the best hyperparameters.
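For illustration, this sketch runs Bayesian search with SageMaker Automatic Model Tuning. It assumes `estimator` is a SageMaker estimator you have already defined (for example, the one in the Step 2 sketch); the objective metric and ranges are placeholders for your own training job.

```python
# Sketch: Bayesian hyperparameter search instead of random or grid search.
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

tuner = HyperparameterTuner(
    estimator=estimator,                    # a previously defined SageMaker estimator
    objective_metric_name="validation:auc",
    objective_type="Maximize",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    strategy="Bayesian",   # typically needs far fewer jobs than random or grid search
    max_jobs=20,
    max_parallel_jobs=2,
)
tuner.fit({"train": "s3://my-bucket/prepared/train/",
           "validation": "s3://my-bucket/prepared/validation/"})
```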
Step 8
Use Amazon SageMaker Debugger to detect under-utilization of system resources and identify training problems. SageMaker Debugger built-in rules can monitor your training jobs and automatically stop them when an issue is detected.
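As an illustration, this sketch attaches built-in Debugger rules and stops the job automatically when a rule triggers. It assumes a SageMaker Python SDK version that supports Debugger built-in actions; the container, role, and data locations are placeholders.

```python
# Sketch: built-in Debugger rules with the StopTraining action.
import sagemaker
from sagemaker.debugger import Rule, rule_configs
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder
stop_on_issue = rule_configs.ActionList(rule_configs.StopTraining())

estimator = Estimator(
    image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1"),
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
    rules=[
        Rule.sagemaker(rule_configs.loss_not_decreasing(), actions=stop_on_issue),
        Rule.sagemaker(rule_configs.overfit(), actions=stop_on_issue),
    ],
)
estimator.fit({"train": "s3://my-bucket/prepared/train/"})
```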
Step 9
Define acceptable performance criteria: evaluate the accuracy of your models using Amazon SageMaker Processing jobs and make trade-offs between your model’s accuracy and its carbon footprint. Establish performance criteria that support your sustainability goals while meeting your business requirements, not exceeding them.
Step 10
Use AWS Trainium to train deep learning models using up to 52% less energy than comparable Amazon Elastic Compute Cloud (Amazon EC2) instances. Consider Managed Spot Training, which takes advantage of unused Amazon EC2 capacity, to improve your overall resource efficiency and reduce idle cloud capacity.
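A minimal sketch of Managed Spot Training with checkpointing follows, so that interrupted jobs can resume instead of restarting from scratch. For AWS Trainium you would instead choose an ml.trn1 instance type with a Neuron-compatible training image; all values below are placeholders.

```python
# Sketch: Managed Spot Training on spare Amazon EC2 capacity, with checkpoints for resumption.
from sagemaker.pytorch import PyTorch

role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder

estimator = PyTorch(
    entry_point="train.py",               # placeholder training script
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="1.13",
    py_version="py39",
    use_spot_instances=True,              # use spare Amazon EC2 capacity
    max_run=3600,                         # maximum training time in seconds
    max_wait=7200,                        # maximum total time, including waiting for Spot capacity
    checkpoint_s3_uri="s3://my-bucket/checkpoints/",
)
estimator.fit({"train": "s3://my-bucket/prepared/train/"})
```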
Step 11
Right-size your training jobs with Amazon CloudWatch metrics.
Step 12
Reduce the volume of CloudWatch logs you keep. By setting a limited retention time for your notebook and training logs, you avoid unnecessary log storage.
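For illustration, this sketch (using boto3) applies a bounded retention period to the SageMaker training job log group so that old log streams are removed automatically. The retention period is a placeholder.

```python
# Sketch: limit retention of SageMaker training job logs in CloudWatch Logs.
import boto3

logs = boto3.client("logs")
logs.put_retention_policy(
    logGroupName="/aws/sagemaker/TrainingJobs",  # log group for SageMaker training jobs
    retentionInDays=30,
)
```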
Step 13
Document your model’s environmental impact using Amazon SageMaker Model Cards.
Model Deployment and Management
This architecture diagram focuses on model deployment and management. For more details about other aspects of the ML lifecycle, open the other tabs.
Step 14
Automate the deployment of your models. Use Amazon SageMaker Model Registry and AWS CodePipeline to run your deployment code.
Step 15
If your users can tolerate latency, deploy your model on Amazon SageMaker Asynchronous Inference endpoints with auto scaling to reduce idle resources between tasks and minimize the impact of load spikes.
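A minimal sketch of an asynchronous endpoint deployment follows; requests are queued and results written to Amazon S3. The container, model artifact, and output path are placeholders.

```python
# Sketch: deploy a model to a SageMaker Asynchronous Inference endpoint.
import sagemaker
from sagemaker.model import Model
from sagemaker.async_inference import AsyncInferenceConfig

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder

model = Model(
    image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1"),
    model_data="s3://my-bucket/model-artifacts/model.tar.gz",  # placeholder
    role=role,
    sagemaker_session=session,
)

model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
    endpoint_name="async-endpoint",
    async_inference_config=AsyncInferenceConfig(
        output_path="s3://my-bucket/async-results/",           # responses are written here
        max_concurrent_invocations_per_instance=4,
    ),
)
```

You can then register the endpoint with Application Auto Scaling so instances scale in when the request backlog is empty.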
Step 16
When you don’t need real-time inference, use Amazon SageMaker Batch Transform. Unlike persistent endpoints, clusters are decommissioned when batch transform jobs finish.
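For illustration, this sketch runs predictions as a batch transform job; the transient cluster is torn down when the job completes. It reuses the `model` object from the Step 15 sketch, and the S3 paths are placeholders.

```python
# Sketch: batch predictions without a persistent endpoint.
transformer = model.transformer(                      # `model` as defined in the Step 15 sketch
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/batch-predictions/",  # placeholder
)
transformer.transform(
    data="s3://my-bucket/batch-input/",               # placeholder input location
    content_type="text/csv",
    split_type="Line",
)
transformer.wait()  # the transform cluster is released when the job finishes
```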
Step 17
Deploy multiple models behind a single auto scaling Amazon SageMaker multi-model endpoint, which is more sustainable than deploying each model behind its own endpoint.
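As an illustration, this sketch hosts several models on one endpoint with a multi-model endpoint. It reuses the `model` object from the Step 15 sketch as the container template, and the names, prefixes, and artifact paths are placeholders; the container must support multi-model hosting.

```python
# Sketch: serve many models from a single shared endpoint.
from sagemaker.multidatamodel import MultiDataModel
from sagemaker.predictor import Predictor
from sagemaker.serializers import CSVSerializer

mme = MultiDataModel(
    name="shared-models",
    model_data_prefix="s3://my-bucket/mme-artifacts/",  # all model.tar.gz files live under this prefix
    model=model,                                        # container and role are taken from this model
)
mme.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge", endpoint_name="shared-endpoint")

# Add another model later without provisioning a new endpoint
mme.add_model(model_data_source="s3://my-bucket/new-models/model-b.tar.gz")

# Route a request to a specific model hosted on the shared endpoint
predictor = Predictor(endpoint_name="shared-endpoint", serializer=CSVSerializer())
print(predictor.predict("1.0,2.0,3.0", target_model="model-a.tar.gz"))
```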
Step 18
If your workload has intermittent or unpredictable traffic, use Amazon SageMaker Serverless Inference endpoints, which automatically launch compute resources and scale them in and out depending on traffic.
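A minimal sketch of a serverless deployment follows, reusing the `model` object from the Step 15 sketch. The memory size and concurrency limit are placeholders to tune for your model.

```python
# Sketch: deploy to a serverless endpoint that scales to zero between requests.
from sagemaker.serverless import ServerlessInferenceConfig

model.deploy(                                          # `model` as defined in the Step 15 sketch
    endpoint_name="serverless-endpoint",
    serverless_inference_config=ServerlessInferenceConfig(
        memory_size_in_mb=2048,
        max_concurrency=5,
    ),
)
```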
Step 19
Use AWS Inferentia to deploy your deep learning models; it provides up to 50% better performance per watt than comparable Amazon EC2 instances.
Step 20
For large model inference (LMI), use the tensor parallelism available in the Deep Learning Containers for LMI to reduce latency.
Step 21
Use Amazon Elastic Inference to attach the right amount of GPU-powered inference acceleration to any Amazon EC2 or SageMaker instance type.
Step 22
Improve efficiency of your models by compiling them into optimized forms with Amazon SageMaker Neo.
Step 23
Right-size your endpoints by using metrics from CloudWatch or Amazon SageMaker Inference Recommender, which recommends the proper instance type to host your model.
Step 24
Monitor your ML model in production using SageMaker Model Monitor, automate model drift detection, and retrain only when predictive performance has fallen below defined key performance indicators (KPIs).
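For illustration, this sketch baselines the training data and schedules hourly drift checks against a live endpoint. It assumes data capture was enabled when the endpoint was deployed; the endpoint name, paths, and role are placeholders.

```python
# Sketch: baseline training data and schedule data-quality monitoring on an endpoint.
from sagemaker.model_monitor import DefaultModelMonitor, CronExpressionGenerator
from sagemaker.model_monitor.dataset_format import DatasetFormat

role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder

monitor = DefaultModelMonitor(role=role, instance_count=1, instance_type="ml.m5.xlarge")

# Compute baseline statistics and constraints from the training dataset
monitor.suggest_baseline(
    baseline_dataset="s3://my-bucket/prepared/train/train.csv",   # placeholder
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-bucket/monitoring/baseline/",
)

# Check captured endpoint traffic against the baseline every hour
monitor.create_monitoring_schedule(
    monitor_schedule_name="drift-monitor",
    endpoint_input="my-production-endpoint",                      # placeholder endpoint name
    output_s3_uri="s3://my-bucket/monitoring/reports/",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```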
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
CloudWatch metrics and alarms monitor the health of model endpoints deployed on SageMaker hosting options, allowing you to record performance-related metrics, analyze metrics when events or incidents occur, establish KPIs to measure workload performance, and monitor and alarm proactively. Additionally, collecting and analyzing metrics for training jobs and inference environments using CloudWatch allows you to analyze workload health trends and conduct periodic workload metric reviews with your organization.
Security
AWS Identity and Access Management (IAM) controls access to resources and managed services to help ensure least privilege access, secure the ML environment, and protect against adversarial and malicious activities. Data is encrypted at rest on Amazon Simple Storage Service (Amazon S3) and SageMaker Feature Store, both of which use AWS Key Management Service (AWS KMS) to protect sensitive data.
Reliability
SageMaker allows automatic scaling of the model endpoint for reliable processing of predictions and to meet changing workload demands. It also distributes instances across Availability Zones in case an outage occurs or an instance fails. SageMaker Pipelines allows for versioned pipeline inputs and artifacts, and SageMaker Projects allows for versioned data processing code. This versioning helps you create a repeatable approach and retain data in case you need to roll back to a previous state.
Performance Efficiency
We selected the services in this Guidance to improve performance without compromising the accuracy of training results. For example, managed ML services, such as SageMaker, deliver better performance through pre-optimized ML components. SageMaker Inference Recommender helps you increase performance while reducing inference time. Purpose-built accelerators, such as AWS Trainium for training and AWS Inferentia for inference, further speed up these workloads.
Cost Optimization
SageMaker services have built-in features that help you optimize costs related to model training. For example, SageMaker Feature Store helps avoid the cost of storing and processing duplicated datasets. SageMaker Debugger allows you to stop a training job as soon as a bug is detected, saving costs associated with unnecessary training job executions. SageMaker Training Compiler reduces training time and costs on GPU instances. Serverless pipelines, SageMaker Asynchronous Endpoints, and SageMaker Batch Transform avoid the cost of maintaining compute infrastructure at all hours of the day.
Sustainability
SageMaker Serverless Inference endpoints and SageMaker Asynchronous Inference endpoints use auto scaling to match resources to demand, and Serverless Inference endpoints scale down to zero when there are no requests. This minimizes unnecessarily provisioned resources and reduces carbon emissions. Additionally, serverless technologies, such as SageMaker Serverless Inference endpoints and SageMaker Pipelines, help eliminate idle resources because no servers need to run continuously.
Implementation Resources
A detailed guide is provided so you can experiment with and use this Guidance within your AWS account. Each stage of working with the Guidance, including deployment, usage, and cleanup, is examined to prepare it for deployment.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.