Guidance for Multi-Account Machine Learning Model Governance on AWS
Overview
How it works
This section includes an architecture diagram that illustrates how to use this solution effectively. The diagram shows the key components and their interactions, walking through the architecture's structure and functionality step by step.
Deploy with confidence
Ready to deploy? Review the sample code on GitHub for detailed deployment instructions, whether you deploy the Guidance as-is or customize it to fit your needs.
Well-Architected Pillars
The architecture diagram above is an example of a Solution designed with Well-Architected best practices in mind. To be fully Well-Architected, follow as many Well-Architected best practices as possible.
Operational Excellence
The services used throughout this Guidance collectively provide a comprehensive, automated, and scalable infrastructure for the entire ML lifecycle. Specifically, SageMaker offers a fully managed ML environment, streamlining workflows from data preparation to deployment and monitoring. Lambda and EventBridge enable serverless compute and event-driven automation, reducing manual intervention and potential errors. DynamoDB provides a flexible database for model metadata, while QuickSight offers dashboarding capabilities for monitoring and governance. Amazon S3 and Amazon ECR provide scalable storage offerings for model artifacts and container images, helping ensure consistent performance as demands grow. Lastly, API Gateway manages API creation and management for model invocation, facilitating smooth integration with other systems.
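As a minimal sketch of the event-driven automation described above, the pattern below shows how an Amazon EventBridge rule could match SageMaker Model Registry approval events and hand them to a Lambda target; the rule itself and any target configuration are omitted, and the structure is illustrative rather than taken from the sample code.

```python
import json

# Illustrative EventBridge event pattern that matches SageMaker Model Registry
# package state changes, filtered to approvals, so a Lambda target can kick
# off downstream deployment steps without manual intervention.
model_approval_pattern = {
    "source": ["aws.sagemaker"],
    "detail-type": ["SageMaker Model Package State Change"],
    "detail": {"ModelApprovalStatus": ["Approved"]},
}

# Serialized form as it would be passed to events.put_rule(EventPattern=...)
event_pattern_json = json.dumps(model_approval_pattern)
```

In a deployment, this pattern would be attached to a rule on the default event bus, with the Lambda function registered as its target.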
Read the Operational Excellence whitepaper
Security
By providing mechanisms for access control, data protection, and secure artifact storage, the services used in this Guidance work in tandem to protect your information and systems. For example, SageMaker offers built-in security features for ML workflows, including isolation of notebook instances and model endpoints. AWS Identity and Access Management (IAM) enables fine-grained access control and permission management across AWS services. Amazon S3 provides secure object storage with encryption capabilities for model artifacts. And lastly, Amazon ECR offers secure storage for container images with access controls and encryption in transit.
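To make the fine-grained access control concrete, here is an illustrative least-privilege IAM policy for a SageMaker execution role: read access to model artifacts under one S3 prefix and pull access to one ECR repository. The bucket, repository, and account identifiers are placeholders, not values from this Guidance.

```python
import json

# Illustrative least-privilege policy (all resource names are placeholders):
# the role can read model artifacts from a single S3 prefix and pull images
# from a single ECR repository, and nothing else.
artifact_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadModelArtifacts",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-model-artifacts/models/*",
        },
        {
            "Sid": "PullContainerImages",
            "Effect": "Allow",
            "Action": ["ecr:BatchGetImage", "ecr:GetDownloadUrlForLayer"],
            "Resource": "arn:aws:ecr:us-east-1:111122223333:repository/example-models",
        },
    ],
}

policy_json = json.dumps(artifact_policy, indent=2)
```

Scoping each statement to a single resource ARN keeps the blast radius small if the role's credentials are ever misused.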
Read the Security whitepaper
Reliability
The services selected for this Guidance are designed to handle varying workloads and potential failures gracefully. For instance, the managed infrastructure of SageMaker reduces the operational burden of maintaining ML environments. The serverless nature of Lambda helps ensure that compute resources are always available without manual scaling. The distributed architecture of DynamoDB provides high availability for model metadata storage. Moreover, Amazon S3 is designed for 99.999999999% (11 nines) durability, protecting your critical model artifacts. Finally, Amazon ECR helps ensure that container images are always accessible for deployment and API Gateway handles traffic spikes with consistent model access.
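Graceful handling of transient failures usually comes down to retries with jittered exponential backoff, the same pattern the AWS SDKs apply internally. The sketch below is a pure-Python version of that pattern (not code from this Guidance) that could wrap any call to a downstream service.

```python
import random
import time

def with_backoff(max_attempts=5, base_delay=0.5, sleep=time.sleep):
    """Retry a callable on exceptions with full-jitter exponential backoff.

    Illustrative only: the AWS SDKs implement this internally, but the same
    pattern applies to any transient downstream failure.
    """
    def decorator(func):
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts - 1:
                        raise  # out of attempts: surface the last error
                    # Full jitter: sleep a random fraction of the capped delay.
                    sleep(random.uniform(0, base_delay * 2 ** attempt))
        return wrapper
    return decorator
```

Injecting `sleep` as a parameter keeps the decorator testable without real delays.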
Read the Reliability whitepaper
Performance Efficiency
SageMaker offers a high-performance infrastructure for model training and inference with the ability to automatically select the most efficient instance types. Lambda enables rapid execution of functions with near-instantaneous scaling. DynamoDB provides single-digit millisecond latency for data retrieval at any scale. Amazon S3 offers high-throughput access to model artifacts. Amazon ECR ensures fast and consistent deployment of container images. And API Gateway provides low-latency API management for model invocation. These services collectively offer optimized, scalable, and low-latency approaches for various aspects of the ML pipeline.
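DynamoDB's single-digit-millisecond retrieval assumes point reads against a well-chosen key. The sketch below shows one hypothetical key design for model metadata (the table and key names are assumptions, not this Guidance's schema): partition key per model, sort key per version, so fetching one version is a single `GetItem`.

```python
# Hypothetical single-table key design for model metadata: partition key per
# model, zero-padded version as the sort key so versions also sort correctly.
def model_metadata_key(model_name: str, version: int) -> dict:
    return {
        "pk": {"S": f"MODEL#{model_name}"},
        "sk": {"S": f"VERSION#{version:05d}"},
    }

# Request shape as it would be passed to dynamodb.get_item(**get_item_request)
get_item_request = {
    "TableName": "example-model-registry",
    "Key": model_metadata_key("churn-classifier", 3),
    "ConsistentRead": False,
}
```

A point read like this stays fast at any table size because it never scans; latency depends on the item, not the dataset.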
Read the Performance Efficiency whitepaper
Cost Optimization
By using serverless and managed services, your organization can avoid the capital expenses associated with owning and maintaining physical infrastructure. For example, SageMaker provides automatic scaling so resources are used efficiently during model training and inference. Lambda offers a serverless model, meaning you don't pay for idle compute time, which is particularly beneficial for intermittent workloads in the ML pipeline. Additionally, DynamoDB includes an on-demand capacity mode in which the database automatically scales up and down, optimizing costs for unpredictable workloads.
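The on-demand capacity mode mentioned above is a single setting at table creation. The parameters below sketch what that looks like (the table name and key attributes are placeholders): `BillingMode` set to `PAY_PER_REQUEST` means there are no provisioned read/write units to size or pay for while the table is idle.

```python
# Illustrative CreateTable parameters (table name and keys are placeholders)
# as they would be passed to dynamodb.create_table(**create_table_params).
# PAY_PER_REQUEST bills per read/write instead of per provisioned capacity.
create_table_params = {
    "TableName": "example-model-metadata",
    "AttributeDefinitions": [
        {"AttributeName": "pk", "AttributeType": "S"},
        {"AttributeName": "sk", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "pk", "KeyType": "HASH"},
        {"AttributeName": "sk", "KeyType": "RANGE"},
    ],
    "BillingMode": "PAY_PER_REQUEST",
}
```

For steady, predictable traffic, provisioned capacity can be cheaper; on-demand pays off when request volume is spiky or unknown.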
Read the Cost Optimization whitepaper
Sustainability
Efficient, scalable, and serverless computing options allow you to optimize resource usage, minimizing the environmental impacts of running cloud workloads. SageMaker offers managed ML infrastructure that can automatically scale resources based on demand, reducing idle capacity. Lambda provides serverless compute that only consumes resources when functions are executed. Finally, DynamoDB is a serverless database that scales automatically, ensuring efficient use of resources.
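Scaling SageMaker inference capacity with demand is typically done through Application Auto Scaling with a target-tracking policy. The configuration below is an illustrative sketch (the endpoint name, capacity bounds, and target value are assumptions): it tracks invocations per instance and scales the variant in when traffic drops, reducing idle capacity.

```python
# Illustrative target-tracking configuration for a SageMaker endpoint variant.
# Resource id, capacity bounds, and target value are placeholders.
scaling_policy_config = {
    "TargetValue": 70.0,  # desired invocations per instance per minute
    "PredefinedMetricSpecification": {
        "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
    },
    "ScaleInCooldown": 300,  # wait before removing instances
    "ScaleOutCooldown": 60,  # react quickly to rising traffic
}

# Scalable target registered with Application Auto Scaling for the variant.
scalable_target = {
    "ServiceNamespace": "sagemaker",
    "ResourceId": "endpoint/example-endpoint/variant/AllTraffic",
    "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
    "MinCapacity": 1,
    "MaxCapacity": 4,
}
```

A longer scale-in cooldown than scale-out is a common choice: it avoids thrashing while still releasing unused instances once demand subsides.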
Read the Sustainability whitepaper