Guidance for Orchestrating Simulation & Machine Learning Workloads on AWS
Overview
This Guidance demonstrates how to launch simulation and machine learning compute tasks through distributed container orchestration, which automates the deployment of containers. One approach is TwinGraph, a workflow orchestration module that stores the metadata for each action in a graph that you can query for traceability and auditability. By using TwinGraph on AWS, you can build a hybrid, scalable solution that meets the technical requirements of your predictive modeling or simulation workloads without writing custom scripts for each modeling framework.
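To illustrate the idea of graph-based task orchestration described above, the following sketch shows how a decorator can record each compute task as a vertex in a graph that you can later query for traceability. This is a conceptual, self-contained Python illustration, not the TwinGraph API; the function names and metadata fields are placeholders.

```python
# Conceptual sketch (not the TwinGraph API): how a graph-orchestration layer can
# record each compute task as a vertex with queryable metadata.
import time
import uuid
from functools import wraps

GRAPH = {"vertices": [], "edges": []}  # stand-in for a graph database


def traced_task(func):
    """Record every call as a vertex so the run can be traced and audited."""
    @wraps(func)
    def wrapper(*args, parent_id=None, **kwargs):
        started = time.time()
        result = func(*args, **kwargs)
        vertex = {
            "id": str(uuid.uuid4()),
            "task": func.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
            "runtime_s": round(time.time() - started, 4),
        }
        GRAPH["vertices"].append(vertex)
        if parent_id is not None:  # edge captures task lineage
            GRAPH["edges"].append({"from": parent_id, "to": vertex["id"]})
        return result, vertex["id"]

    return wrapper


@traced_task
def run_simulation(design_param):
    return design_param * 2.0  # placeholder for a containerized solver


@traced_task
def train_surrogate(sim_result):
    return sim_result / 10.0  # placeholder for a containerized training job


sim_out, sim_id = run_simulation(3.5)
train_surrogate(sim_out, parent_id=sim_id)

# "Audit query": list every recorded task with its runtime.
for v in GRAPH["vertices"]:
    print(v["task"], v["runtime_s"])
```

In a production workflow, an orchestration module such as TwinGraph would persist this metadata to a graph database rather than an in-memory dictionary, so that runs can be audited after the compute resources have been released.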
How it works
This architecture diagram shows how to launch simulation and machine learning workloads through scalable, distributed graph orchestration of containerized compute tasks, and how to analyze the results.
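Because the orchestration graph holds the metadata for each task, you can query it after a run for traceability and auditing. The sketch below assumes the graph is exposed through a TinkerPop-compatible Gremlin endpoint (for example, Amazon Neptune or a local Gremlin server); the endpoint URL and the queried vertex properties are assumptions for illustration only.

```python
# Sketch: querying orchestration metadata from a TinkerPop-compatible graph.
# The endpoint URL and the vertex properties returned are assumptions for
# illustration; substitute the values used by your deployment.
from gremlin_python.driver import client, serializer

gremlin_client = client.Client(
    "ws://localhost:8182/gremlin",  # replace with your graph endpoint
    "g",
    message_serializer=serializer.GraphSONSerializersV2d0(),
)

try:
    # Pull a sample of task vertices and their recorded properties for auditing.
    results = gremlin_client.submit("g.V().limit(10).valueMap(true)").all().result()
    for vertex in results:
        print(vertex)
finally:
    gremlin_client.close()
```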
Well-Architected Pillars
The architecture diagram above is an example of a solution designed with Well-Architected best practices in mind. To be fully Well-Architected, follow as many of the Well-Architected Framework's best practices as possible.
Related Content
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.