This Guidance helps you orchestrate and deploy a Level 4 digital twin that is capable of self-calibration based on data from the physical entity and its environment. A Level 4 digital twin ingests Internet of Things (IoT) data and combines it with probabilistic methods to continually recalibrate model parameters for accuracy. This architecture makes it straightforward to integrate probabilistic methods with heterogeneous data sources to calibrate digital twin models and deliver predictive business outcomes. The modular framework integrates with data visualization capabilities that you can use to review key performance indicators and track other key metrics.
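As a concrete illustration of the probabilistic self-calibration described above, the sketch below performs a sequential Gaussian (conjugate Bayesian) update of a single digital twin parameter as sensor readings arrive. The parameter, prior, and noise values are illustrative assumptions, not values from this Guidance.

```python
# Minimal sketch of probabilistic self-calibration: a sequential
# Gaussian (conjugate Bayesian) update of one digital twin parameter.
# All numbers below are illustrative assumptions.

def calibrate(prior_mean, prior_var, readings, noise_var):
    """Return (posterior_mean, posterior_var) after absorbing each reading."""
    mean, var = prior_mean, prior_var
    for y in readings:
        gain = var / (var + noise_var)   # how much to trust this reading
        mean = mean + gain * (y - mean)  # shift the estimate toward the data
        var = (1.0 - gain) * var         # uncertainty shrinks with each reading
    return mean, var

if __name__ == "__main__":
    # Hypothetical sensor readings for a heat-transfer coefficient.
    mean, var = calibrate(prior_mean=2.0, prior_var=1.0,
                          readings=[2.4, 2.6], noise_var=0.25)
    print(mean, var)
```

Each reading pulls the estimate toward the observed data in proportion to the current uncertainty, so the twin's parameter converges as more IoT data arrives.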

Please note: see the Disclaimer section below.

Architecture Diagram

[Architecture diagram description]

Download the architecture diagram PDF 

Well-Architected Pillars

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.

The architecture diagram above is an example of a solution created with Well-Architected best practices in mind. To be fully Well-Architected, follow as many of these best practices as possible.

  • AWS Batch is used in anticipation of a hardware or network failure. Batch automatically detects when a container activation has failed, retries loading the container, and logs observed issues in Amazon CloudWatch. CloudWatch allows you to analyze and troubleshoot issues that may occur. Infrastructure as code (IaC) enables repeatable deployments that minimize errors, and Batch serves as a failsafe for anticipated failures.

    Additionally, Amazon Managed Grafana provides a dashboard to view data. AWS IoT SiteWise provides a serverless, centralized database for collecting and monitoring sensor data.

    Read the Operational Excellence whitepaper 
  • AWS Cloud Development Kit (AWS CDK) allows for version control of the architecture, setup and implementation of encryption, deployment of customized AWS Identity and Access Management (IAM) policies, and security auditing of deployments. AWS CDK enables both automation and knowledge transfer from security teams, helping close users' knowledge gaps. IAM enables least-privilege access, which minimizes or eliminates the impact of malicious behavior by allowing only authorized access to resources.

    Read the Security whitepaper 
  • Amazon S3 buckets, Amazon ECR, and AWS IoT SiteWise provide data replication and durability by copying data across multiple Availability Zones (AZs). Replicating data across multiple AZs protects against potential reliability issues, such as hardware or network failures and power outages. Additionally, to address potential hardware failures, Batch helps ensure that a workload is automatically moved to healthy infrastructure if a container fails to start.

    Read the Reliability whitepaper 
  • The specific tasks that run in Batch can vary significantly, and a single Amazon EC2 instance type is often not optimal for every workload. Batch alleviates this issue by automatically selecting the optimal EC2 instance based on each container's memory, CPU, and GPU requirements.

    Amazon Managed Grafana enables monitoring of your data and alarms based on it. You can optimize your infrastructure setup or be alerted to changes in infrastructure performance.

    Read the Performance Efficiency whitepaper 
  • Instead of purchasing infrastructure that is always on but underutilized, you can use Amazon EventBridge to initiate the provisioning of EC2 instances through Batch. Batch terminates instances when tasks complete, helping to minimize costs. With Amazon Managed Grafana, you can review infrastructure data and assess whether EC2 instances or network interfaces have been under- or overprovisioned.

    Read the Cost Optimization whitepaper 
  • Batch enables hardware optimization by using the latest hardware. When AWS retires old instance types, Batch automatically moves workloads to newer ones and attempts to select instances that are neither under- nor overutilized. Batch also provides elasticity to scale down when not in use, reducing the carbon footprint of each workflow.

    Read the Sustainability whitepaper 
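The retry-on-failure behavior that several pillars above rely on is configured when you submit a Batch job. The sketch below builds the arguments for the AWS Batch SubmitJob API, including a retry strategy that retries on infrastructure-style failures; the job, queue, and definition names are placeholder assumptions, not names from this Guidance.

```python
# Sketch: build arguments for the AWS Batch SubmitJob API with a retry
# strategy, so a job that fails to start (for example, on a hardware or
# network fault) is retried automatically. All names are placeholders.

def build_submit_job_args(job_name, job_queue, job_definition, attempts=3):
    """Build kwargs suitable for batch_client.submit_job(**args)."""
    return {
        "jobName": job_name,
        "jobQueue": job_queue,
        "jobDefinition": job_definition,
        "retryStrategy": {
            "attempts": attempts,  # Batch retries up to this many times
            # Retry on host/infrastructure failures; exit on anything else.
            "evaluateOnExit": [
                {"onStatusReason": "Host EC2*", "action": "RETRY"},
                {"onReason": "*", "action": "EXIT"},
            ],
        },
    }

args = build_submit_job_args("twin-calibration", "twin-queue", "twin-job-def:1")
# With boto3 installed and AWS credentials configured, you would submit with:
#   import boto3
#   boto3.client("batch").submit_job(**args)
```

The `evaluateOnExit` rules are evaluated in order, so the wildcard `EXIT` rule acts as a catch-all after the host-failure retry rule.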

Implementation Resources

The sample code is a starting point. It is industry validated, prescriptive but not definitive, and offers a peek under the hood to help you begin.

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.

References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.
