This Guidance demonstrates how to launch simulation and machine learning compute tasks through distributed container orchestration, which automates the deployment of containers. One way to do this is through TwinGraph, a workflow orchestration module that stores the metadata for each action in a graph that you can query for traceability and auditability. By using TwinGraph on AWS, you can build a hybrid, scalable solution that meets the technical requirements of your predictive modeling or simulation workloads without requiring you to write custom scripts for modeling frameworks.
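
As a minimal sketch of how a TwinGraph-orchestrated workflow might look, the example below chains two Python components into a pipeline; TwinGraph records each component invocation as a node in the backing graph so the run can later be traced and audited. The decorator names (component, pipeline), the NamedTuple return convention, and the parent_hash chaining follow the public TwinGraph examples, but treat the exact signatures and dictionary keys as assumptions and check the TwinGraph repository before relying on them.

# Minimal sketch of a TwinGraph-style workflow. Decorator arguments, the
# NamedTuple return convention, and the parent_hash chaining are assumptions
# based on the public TwinGraph examples; verify against the repository.
from collections import namedtuple
from typing import NamedTuple

from twingraph import component, pipeline


@component()
def preprocess(samples: int) -> NamedTuple:
    # TwinGraph records each call, its inputs, and its outputs as a graph node.
    outputs = namedtuple('outputs', ['dataset_size'])
    return outputs(dataset_size=samples * 2)


@component()
def simulate(dataset_size: int) -> NamedTuple:
    outputs = namedtuple('outputs', ['result'])
    return outputs(result=dataset_size * 0.5)


@pipeline()
def demo_pipeline():
    # parent_hash links nodes so the run can be traced and resumed later;
    # the dictionary keys ('hash', 'outputs') are assumptions.
    step_1 = preprocess(samples=100)
    simulate(dataset_size=step_1['outputs']['dataset_size'],
             parent_hash=step_1['hash'])


demo_pipeline()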

Architecture Diagram

[Architecture diagram description]

Download the architecture diagram PDF 

Well-Architected Pillars

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.

The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.

  • This Guidance uses Amazon CloudWatch to let you observe and monitor the tasks running on AWS Batch and the other compute services where TwinGraph launches jobs, helping you troubleshoot issues and keep users informed of resource use and operating status. Through CloudWatch metrics and logs, TwinGraph can drive an event-driven workflow chain that automates processes and improves operational efficiency. A sketch of retrieving a job's logs follows this list.

    Read the Operational Excellence whitepaper 
  • This Guidance lets you scope role-based access policies in AWS Identity and Access Management (IAM) to prevent unauthorized access to resources. By applying the principle of least privilege to resource provisioning, retrieving CloudWatch logs, and pushing or pulling Amazon ECR container images, you can secure your TwinGraph workflow implementation; an example least-privilege policy follows this list.

    Read the Security whitepaper 
  • This Guidance stores the results of each task of a TwinGraph-launched workflow chain in an Amazon Neptune database. Neptune can restore the data after a failure or help you resume workflows so that you don’t lose data in disaster events. Additionally, because the graph records the last completed (leaf) nodes of a run, workflows can restart from those nodes, facilitating graceful failure management; a graph query sketch follows this list.

    Read the Reliability whitepaper 
  • This Guidance uses AWS Batch to rightsize compute instances for specific workloads, letting you set up the optimal configuration for your compute environment and improve performance efficiency. When TwinGraph launches workloads, AWS Batch places each job on available compute resources and appropriately sized instance types, depending on the workload’s complexity and compute requirements. AWS Batch schedules containerized tasks through a resilient queueing system and selects compute node instances automatically (see the job submission sketch after this list).

    Read the Performance Efficiency whitepaper 
  • This Guidance uses TwinGraph with AWS Batch and Amazon ECS so that containerized workloads launch on the appropriate instance types, either through manual specification per job type or through compute environments that choose optimal instances. AWS Batch queues dispatch tasks only when resources are available, avoiding overprovisioning and rightsizing compute to match the workloads, which results in cost savings (a compute environment sketch follows this list).


    Additionally, this Guidance moves large result files to Amazon S3 buckets to further optimize costs. You can set up S3 Lifecycle policies to move the data to the most cost-effective storage tiers, delete it periodically, or delete it when analysis is complete (see the lifecycle configuration sketch after this list). Amazon ECR also provides caching mechanisms and optimized storage for large Docker container images, saving time and reducing compute, download, and storage costs.

    Read the Cost Optimization whitepaper 
  • This Guidance lets you maximize the use of provisioned resources by rightsizing the compute instances with AWS Batch and Amazon ECS. It also lets you store large data results from ML or simulation tasks on Amazon S3, where you can use S3 Lifecycle policies to transition the data to lower-cost storage tiers or to automatically delete it after a set period or when analytics are complete. This helps you reduce unnecessary data processing and storage and decrease your total carbon footprint.

    Read the Sustainability whitepaper 
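
The sketches that follow illustrate the mechanisms referenced in the pillars above; all account IDs, Regions, and resource names in them are illustrative assumptions. First, for operational excellence, this boto3 sketch retrieves the CloudWatch Logs output of an AWS Batch job so you can troubleshoot a TwinGraph-launched task. AWS Batch writes container output to the /aws/batch/job log group by default.

# Sketch: fetch the CloudWatch Logs output for an AWS Batch job.
# The job ID below is an illustrative placeholder.
import boto3

batch = boto3.client("batch")
logs = boto3.client("logs")

job_id = "example-batch-job-id"  # assumption: replace with a real job ID

# Look up the log stream that AWS Batch attached to the job's container.
job = batch.describe_jobs(jobs=[job_id])["jobs"][0]
log_stream = job["container"]["logStreamName"]

# AWS Batch writes container output to the /aws/batch/job log group by default.
events = logs.get_log_events(
    logGroupName="/aws/batch/job",
    logStreamName=log_stream,
    startFromHead=True,
)
for event in events["events"]:
    print(event["message"])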
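
For security, the sketch below creates a least-privilege IAM policy scoped to the actions named in the Security pillar: submitting AWS Batch jobs, reading CloudWatch logs, and pulling images from a single Amazon ECR repository. The account ID, Region, and resource names are assumptions; scope them to your own resources.

# Sketch: a least-privilege policy for a TwinGraph worker role.
# Account ID, Region, and resource names are illustrative assumptions.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SubmitBatchJobs",
            "Effect": "Allow",
            "Action": ["batch:SubmitJob"],
            "Resource": [
                "arn:aws:batch:us-east-1:111122223333:job-queue/twingraph-queue",
                "arn:aws:batch:us-east-1:111122223333:job-definition/twingraph-task:*",
            ],
        },
        {
            "Sid": "ReadJobLogs",
            "Effect": "Allow",
            "Action": ["logs:GetLogEvents"],
            "Resource": "arn:aws:logs:us-east-1:111122223333:log-group:/aws/batch/job:*",
        },
        {
            "Sid": "PullContainerImages",
            "Effect": "Allow",
            "Action": ["ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage"],
            "Resource": "arn:aws:ecr:us-east-1:111122223333:repository/twingraph-containers",
        },
        {
            # GetAuthorizationToken does not support resource-level scoping.
            "Sid": "EcrAuth",
            "Effect": "Allow",
            "Action": ["ecr:GetAuthorizationToken"],
            "Resource": "*",
        },
    ],
}

iam.create_policy(
    PolicyName="twingraph-least-privilege",  # assumption: choose your own name
    PolicyDocument=json.dumps(policy_document),
)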
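
For reliability, this Gremlin Python sketch queries the Neptune graph that TwinGraph populates to find the most recently executed nodes (the leaf nodes with no outgoing edges), which is the information a resume or restart step would use to pick up a partially completed workflow. The endpoint is a placeholder, and treating leaf nodes as resume points is an interpretation to validate against your own graph schema.

# Sketch: inspect a TwinGraph run graph in Amazon Neptune with Gremlin.
# The Neptune endpoint is an illustrative placeholder.
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Neptune accepts Gremlin over a WebSocket endpoint on port 8182.
connection = DriverRemoteConnection(
    "wss://your-neptune-endpoint:8182/gremlin", "g"  # assumption: your cluster
)
g = traversal().withRemote(connection)

# Leaf nodes (no outgoing edges) mark where the workflow chain stopped;
# restart or resume logic would pick up from these vertices.
leaf_nodes = g.V().not_(__.outE()).valueMap(True).toList()
for node in leaf_nodes:
    print(node)

connection.close()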
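
For performance efficiency, the sketch below submits a containerized task to AWS Batch with explicit vCPU and memory requirements, which is how a per-job rightsizing decision is expressed; the queue and job definition names are assumptions and would be created during deployment.

# Sketch: submit a TwinGraph-style task to AWS Batch with explicit sizing.
# Queue and job definition names are illustrative assumptions.
import boto3

batch = boto3.client("batch")

response = batch.submit_job(
    jobName="simulation-step-1",
    jobQueue="twingraph-queue",            # assumption: your job queue
    jobDefinition="twingraph-task",        # assumption: your job definition
    containerOverrides={
        # Rightsize per job: 4 vCPUs and 8 GiB of memory for this task.
        "resourceRequirements": [
            {"type": "VCPU", "value": "4"},
            {"type": "MEMORY", "value": "8192"},
        ],
        "command": ["python", "run_simulation.py"],
    },
)
print("Submitted job:", response["jobId"])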
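
For cost optimization, this sketch creates a managed AWS Batch compute environment that uses Spot capacity, lets AWS Batch choose optimal instance types, and scales to zero vCPUs when the queue is empty, so you only pay while jobs run. The role, subnet, and security group values are placeholders you would substitute.

# Sketch: a managed, Spot-backed AWS Batch compute environment that scales
# to zero when idle. Role ARNs, subnets, and security groups are placeholders.
import boto3

batch = boto3.client("batch")

batch.create_compute_environment(
    computeEnvironmentName="twingraph-spot-ce",  # assumption: your own name
    type="MANAGED",
    computeResources={
        "type": "SPOT",                      # use Spot capacity for savings
        "allocationStrategy": "SPOT_CAPACITY_OPTIMIZED",
        "minvCpus": 0,                       # scale to zero when idle
        "maxvCpus": 256,
        "instanceTypes": ["optimal"],        # let AWS Batch choose instances
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
        "instanceRole": "ecsInstanceRole",   # assumption: existing instance profile
    },
    serviceRole="arn:aws:iam::111122223333:role/AWSBatchServiceRole",
)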
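
Finally, for cost optimization and sustainability of stored results, this sketch applies an S3 Lifecycle configuration that transitions large result files to an archival storage class after 30 days and deletes them after a year; the bucket name, prefix, and retention periods are assumptions to adjust to your analysis workflow.

# Sketch: S3 Lifecycle rules for simulation/ML result files.
# Bucket name, prefix, and retention periods are illustrative assumptions.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="twingraph-results-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-results",
                "Status": "Enabled",
                "Filter": {"Prefix": "results/"},
                # Move large result files to a cheaper tier after 30 days.
                "Transitions": [
                    {"Days": 30, "StorageClass": "GLACIER_IR"}
                ],
                # Delete them once analysis is assumed to be complete.
                "Expiration": {"Days": 365},
            }
        ]
    },
)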

Implementation Resources

A detailed guide is provided for you to experiment with and use within your AWS account. It walks through each stage of the Guidance, including deployment, usage, and cleanup, to prepare it for deployment in your environment.

The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.

References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.
