Guidance for Orchestrating Simulation & Machine Learning Workloads on AWS
Overview
How it works
This architecture diagram demonstrates how to launch simulation and machine learning workloads as containerized compute tasks, orchestrate them through a scalable, distributed graph, and analyze the results.
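To make the graph-orchestration idea concrete, here is a minimal sketch of executing a dependency graph of tasks in topological order, using only the Python standard library. This illustrates the concept only; it does not reproduce TwinGraph's actual API or the AWS Batch/Amazon ECS dispatch it performs, and the task names are invented for illustration.

```python
# Illustrative sketch only: a minimal dependency-graph executor showing the
# idea behind graph orchestration of containerized compute tasks.
from graphlib import TopologicalSorter

def run_task(name, inputs):
    """Stand-in for a containerized compute task (simulation or ML step)."""
    return f"{name}({','.join(inputs)})"

# Each key depends on the tasks in its value set.
graph = {
    "preprocess": set(),
    "simulate": {"preprocess"},
    "train": {"preprocess"},
    "analyze": {"simulate", "train"},
}

results = {}
for task in TopologicalSorter(graph).static_order():
    results[task] = run_task(task, sorted(results[d] for d in graph[task]))

print(results["analyze"])
# -> analyze(simulate(preprocess()),train(preprocess()))
```

In a real deployment, each `run_task` call would instead submit a container job and wait on its completion, with the graph structure and task outputs recorded for later analysis.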
Well-Architected Pillars
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
This Guidance uses Amazon CloudWatch to let you observe and monitor the tasks running on AWS Batch and other compute services where TwinGraph launches jobs. This helps you troubleshoot issues and inform users of resource use and operating status. Through CloudWatch metrics and logs, TwinGraph invokes an event-driven workflow chain, automating processes and improving operational efficiency.
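As one possible shape for the event-driven chain described above, the sketch below builds an event pattern that matches failed AWS Batch jobs, so a downstream step (for example, a retry or a notification) could be triggered. The pattern shape follows the standard `aws.batch` "Batch Job State Change" event; registering it (for example, with `put_rule` via boto3) and the downstream action are left out and would be specific to your deployment.

```python
import json

# Sketch of an event pattern matching failed AWS Batch jobs. This is an
# illustration of the event-driven idea, not configuration taken from this
# Guidance; the pattern would be attached to a rule in your account.
event_pattern = {
    "source": ["aws.batch"],
    "detail-type": ["Batch Job State Change"],
    "detail": {"status": ["FAILED"]},
}
print(json.dumps(event_pattern, indent=2))
```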
Read the Operational Excellence whitepaper
Security
This Guidance lets you scope role-based access policies in AWS Identity and Access Management (IAM) to limit unauthorized access to resources. By using the principle of least privilege access for resource provisioning, retrieving CloudWatch logs, and pushing or pulling Amazon ECR containers, you can secure your TwinGraph workflow implementation.
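A least-privilege policy for a worker role might look like the following sketch, which allows only log writes and image pulls from a single Amazon ECR repository. The account ID, log group, and repository name are placeholders, not values from this Guidance; you would scope the ARNs to your own account and region.

```python
import json

# Sketch of a least-privilege IAM policy for a TwinGraph worker role,
# assuming the workflow only needs to write CloudWatch logs and pull one
# ECR repository. All resource ARNs below are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "arn:aws:logs:*:123456789012:log-group:/aws/batch/*",
        },
        {
            "Effect": "Allow",
            "Action": ["ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage"],
            "Resource": "arn:aws:ecr:*:123456789012:repository/twingraph-tasks",
        },
    ],
}
print(json.dumps(policy, indent=2))
```

The policy document would then be attached to the role the compute environment assumes.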
Read the Security whitepaper
Reliability
This Guidance stores the results of each task of a TwinGraph-launched workflow chain in an Amazon Neptune database. Neptune can restore the data after failure or help you resume workflows so that you don’t lose data in disaster events. Additionally, the ability to restart workflows from the leaf nodes of the Neptune graph facilitates graceful failure management.
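The resume-from-recorded-results idea can be sketched as follows. A plain dictionary stands in for the Neptune database here, and the graph and task names are invented for illustration: after a failure, only the tasks whose results were never persisted are re-run.

```python
# Illustrative sketch: resuming a workflow from results already recorded in
# a graph store. A plain dict stands in for the Neptune database; only tasks
# with no recorded result are re-run after a failure.
from graphlib import TopologicalSorter

graph = {
    "preprocess": set(),
    "simulate": {"preprocess"},
    "analyze": {"simulate"},
}
# Results persisted before the crash: "analyze" never completed.
recorded = {"preprocess": "done", "simulate": "done"}

def resume(graph, recorded):
    """Re-run only the tasks whose results were never persisted."""
    rerun = []
    for task in TopologicalSorter(graph).static_order():
        if task not in recorded:
            recorded[task] = "done"  # placeholder for re-executing the task
            rerun.append(task)
    return rerun

rerun = resume(graph, recorded)
print(rerun)  # -> ['analyze']
```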
Read the Reliability whitepaper
Performance Efficiency
This Guidance uses AWS Batch to rightsize compute instances for specific workloads, letting you set up the optimal configuration for your compute environment and improve performance efficiency. When TwinGraph launches workloads, AWS Batch delegates jobs to the available compute resources, choosing among optimized instance types based on each workload’s complexity and compute requirements. Its resilient queueing system dispatches containerized tasks and rightsizes the compute node instances automatically.
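The manual-specification path can be sketched as a small helper that maps a workload tier to the `resourceRequirements` list an AWS Batch job definition or `submit_job` container override accepts. The tiers and their vCPU/memory values below are invented for illustration, not AWS recommendations.

```python
# Hypothetical rightsizing helper. The tier table is made up for this
# sketch; tune vCPU and memory (MiB) values to your own workloads.
TIERS = {
    "small": (2, 4096),
    "medium": (8, 16384),
    "large": (32, 65536),
}

def resource_requirements(tier):
    """Build the resourceRequirements list AWS Batch expects for a job."""
    vcpus, memory = TIERS[tier]
    return [
        {"type": "VCPU", "value": str(vcpus)},
        {"type": "MEMORY", "value": str(memory)},
    ]

print(resource_requirements("medium"))
```

The returned list has the shape that `batch.submit_job` accepts under `containerOverrides.resourceRequirements` in boto3.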
Read the Performance Efficiency whitepaper
Cost Optimization
This Guidance uses TwinGraph with AWS Batch and Amazon ECS so that containerized workloads launch with the appropriate instance types, either through manual specification per job type or through optimal compute environments. AWS Batch queueing systems delegate tasks when resources are available to avoid overprovisioning resources, and they rightsize compute to match the workloads, resulting in cost savings.
Additionally, this Guidance moves large result files to Amazon S3 buckets to further optimize costs. You can set up lifecycle policies to move the data to optimal storage tiers, delete it periodically, or delete it when analysis is complete. Amazon ECR also has caching mechanisms and optimized storage for large Docker containers, saving time and reducing compute, download, and storage costs.
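A lifecycle policy of the kind described above might be sketched as follows: result objects under a prefix transition to a colder storage class after 30 days and expire after 180. The bucket prefix, rule ID, and day counts are placeholders you would tune to your own retention needs.

```python
import json

# Sketch of an S3 lifecycle configuration for large result files. The
# prefix, rule ID, storage class, and day counts are illustrative only.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-results",
            "Filter": {"Prefix": "results/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 180},
        }
    ]
}
# In practice this dict would be applied with boto3's
# s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=lifecycle)
print(json.dumps(lifecycle, indent=2))
```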
Sustainability
This Guidance lets you maximize the use of provisioned resources by rightsizing the compute instances with AWS Batch and Amazon ECS. It also lets you store large data results from ML or simulation tasks on Amazon S3, where you can use lifecycle policies to transition the data to lower tiers or to automatically delete it after a set period or when analytics are complete. This helps you reduce unnecessary data processing and storage and decrease your total carbon footprint.
Read the Sustainability whitepaper