This Guidance shows how you can streamline and accelerate product development by building a model-based development environment (MBDE) for engineering and design. Using AWS as the foundation, you can create a modern cloud computing platform that is more secure, agile, and lightweight than on-premises, document-based engineering environments. You can also use MBDE-generated models and data to build advanced analytics and generative models for predicting system behavior. The MBDE approach centralizes management of your tools, helping you identify product development risks early, improve overall development performance, and streamline collaboration with stakeholders.
Architecture Diagram
Step 1
Option A: Use AWS CloudFormation to deploy Research and Engineering Studio on AWS (RES) within a virtual private cloud (VPC) and build a centralized MBDE—for example, a high-performance computing and virtual desktop infrastructure (VDI)—where you can deploy MBDE tools. Bring your own MBDE tools, or find them in AWS Marketplace.
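A CloudFormation deployment of RES boils down to launching a stack from the RES template with your environment parameters. The sketch below builds the request payload only; the template URL, parameter names, and stack name are illustrative placeholders, not the actual RES template interface (consult the RES documentation for the real values).

```python
import json

def build_res_stack_request(environment_name: str, vpc_id: str) -> dict:
    """Build keyword arguments for a CloudFormation CreateStack call that
    deploys Research and Engineering Studio (RES) into an existing VPC.
    Template URL and parameter names are placeholders for illustration."""
    return {
        "StackName": f"res-{environment_name}",
        "TemplateURL": "https://example-bucket.s3.amazonaws.com/res-template.yaml",  # placeholder
        "Parameters": [
            {"ParameterKey": "EnvironmentName", "ParameterValue": environment_name},
            {"ParameterKey": "VpcId", "ParameterValue": vpc_id},
        ],
        "Capabilities": ["CAPABILITY_NAMED_IAM"],  # the stack creates IAM roles
    }

request = build_res_stack_request("mbde-dev", "vpc-0123456789abcdef0")
print(json.dumps(request, indent=2))
# With boto3 installed and credentials configured, you would submit it as:
# boto3.client("cloudformation").create_stack(**request)
```

The `CAPABILITY_NAMED_IAM` acknowledgment is required whenever a stack creates named IAM resources, which an environment like RES does.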
Step 2
Option B: Use Amazon AppStream 2.0 for selectively persistent VDI or Amazon WorkSpaces for persistent VDI to access the MBDE. (Bring your own MBDE tools, or find them in AWS Marketplace.)
Step 3
Use AWS IoT TwinMaker to connect data streams from MBDE to build and deploy digital twins of real-world systems for use in engineering and design.
Step 4
Amazon API Gateway serves as the communications hub among applications and environments. API-based microservices integrate new technologies and complementary services.
Step 5
Use Amazon EventBridge to invoke workflows in response to events from any source, including the MBDE.
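An MBDE tool can announce state changes by publishing custom events to an EventBridge event bus, where rules match on the event source and start downstream workflows. The sketch below shapes one `PutEvents` entry; the source name, detail type, and detail fields are hypothetical, chosen only to illustrate the event envelope.

```python
import json
from datetime import datetime, timezone

def build_mbde_event(simulation_id: str, status: str) -> dict:
    """Shape a custom EventBridge event entry announcing an MBDE simulation
    result. Source and detail-type names are illustrative, not prescribed."""
    return {
        "Source": "custom.mbde",                 # hypothetical event source
        "DetailType": "SimulationStatusChange",  # hypothetical detail type
        "Time": datetime.now(timezone.utc).isoformat(),
        "Detail": json.dumps({"simulationId": simulation_id, "status": status}),
    }

entry = build_mbde_event("sim-42", "COMPLETED")
# boto3.client("events").put_events(Entries=[entry]) would publish it; an
# EventBridge rule matching Source == "custom.mbde" then triggers the workflow.
```

Note that `Detail` must be a JSON string, not a nested object, which is why it is serialized with `json.dumps`.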
Step 6
Amazon Simple Queue Service (Amazon SQS) ensures that each message is reliably delivered and processed. AWS Step Functions builds state machine-based workflows whose steps are implemented by AWS Lambda functions.
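A Step Functions workflow is defined in Amazon States Language (ASL), a JSON document in which each `Task` state points at a resource such as a Lambda function. The minimal definition below is a sketch: the state names, function names, and account/region in the ARNs are placeholders.

```python
import json

# A minimal Amazon States Language (ASL) definition with two sequential Task
# states, each invoking a Lambda function. All names and ARNs are placeholders.
definition = {
    "Comment": "MBDE post-processing workflow (sketch)",
    "StartAt": "ExtractResults",
    "States": {
        "ExtractResults": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:extract-results",
            "Next": "NotifyTeam",
        },
        "NotifyTeam": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:notify-team",
            "End": True,
        },
    },
}

asl_json = json.dumps(definition)
# boto3.client("stepfunctions").create_state_machine(
#     name="mbde-postprocess", definition=asl_json, roleArn=...) would register it.
```

Every branch of an ASL state machine must terminate in a state with `"End": true` (or a `Succeed`/`Fail` state), which is why the final task sets that flag.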
Step 7
Based on the workflows implemented in the prior step, Amazon Simple Storage Service (Amazon S3) stores output files as objects in a bucket; compute instances run ephemeral simulations; Amazon DynamoDB tables track engineering activities; and Amazon Simple Notification Service (Amazon SNS) handles team communications.
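Tracking engineering activities in DynamoDB typically means writing one item per activity, keyed so that all activities for a project sort together. The sketch below shapes such an item in DynamoDB's low-level attribute-value format; the table design, key scheme, and attribute names are assumptions for illustration.

```python
from datetime import datetime, timezone

def build_activity_item(project: str, activity: str, artifact_key: str) -> dict:
    """Shape a DynamoDB item (low-level attribute-value format) recording one
    engineering activity. Key scheme and attribute names are illustrative."""
    return {
        "pk": {"S": f"PROJECT#{project}"},                                # partition key
        "sk": {"S": f"ACTIVITY#{datetime.now(timezone.utc).isoformat()}"}, # sort key: timestamp
        "activity": {"S": activity},
        "artifactS3Key": {"S": artifact_key},  # S3 object written by the workflow
    }

item = build_activity_item("wing-design", "cfd-simulation-complete",
                           "results/run-42/output.vtk")
# boto3.client("dynamodb").put_item(TableName="mbde-activities", Item=item)
```

With this composite key, a single `Query` on `pk = PROJECT#wing-design` returns that project's activities in timestamp order.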
Step 8
Store engineering docs and objects in a centralized data lake on Amazon S3.
Step 9
Amazon Textract and Amazon Comprehend extract text from documents. Amazon OpenSearch Service unlocks insights. An AWS Glue crawler catalogs data for engineering use cases. Amazon Neptune creates ontology and multidomain relationship knowledge graphs for artifacts and users.
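Once Textract has extracted text and Comprehend has tagged entities, the results can be indexed into OpenSearch Service with a single `_bulk` request. The sketch below assembles that NDJSON body from already-extracted documents; the index name and document fields are assumptions, and the upstream Textract/Comprehend calls are out of scope here.

```python
import json

def build_bulk_body(docs: list, index: str = "engineering-docs") -> str:
    """Assemble an OpenSearch _bulk request body (NDJSON) from documents whose
    text and entities were produced upstream by Amazon Textract and Amazon
    Comprehend. Index and field names are illustrative."""
    lines = []
    for doc in docs:
        # Each document needs an action line followed by its source line.
        lines.append(json.dumps({"index": {"_index": index, "_id": doc["id"]}}))
        lines.append(json.dumps({"text": doc["text"], "entities": doc["entities"]}))
    return "\n".join(lines) + "\n"  # the _bulk API requires a trailing newline

body = build_bulk_body([
    {"id": "spec-001", "text": "Torque limit is 42 Nm.", "entities": ["QUANTITY"]},
])
# POSTing this body to the domain's /_bulk endpoint (with SigV4 signing)
# indexes the documents for search.
```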
Step 10
The artifact store (OpenSearch Service, AWS Glue Data Catalog, and Neptune) feeds data to MBDE tools through API Gateway.
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
This Guidance uses CloudFormation to deploy engineering resources through RES so you can build a centralized design environment. It models a collection of resources as a single unit, so you don’t have to manage resources individually. Additionally, AppStream 2.0 launches a fresh virtual desktop upon user login for consistent application settings and storage connections, and you can use WorkSpaces to centrally manage your persistent cloud desktops.
Security
This Guidance uses AWS Identity and Access Management for fine-grained permissions and role-based access. AWS Key Management Service (AWS KMS) gives you centralized control over the cryptographic keys used to protect your data.
Reliability
This Guidance uses managed services for compute needs, such as Lambda, Step Functions, and AWS Glue crawlers. These services support fault tolerance by automatically detecting and replacing unhealthy compute instances and scaling as workload demand grows. Similarly, this Guidance uses managed services for storage, such as Amazon S3, Neptune, OpenSearch Service, and DynamoDB, which are designed to be highly reliable, available, and durable for mission-critical workloads. These services offer multi-Availability Zone (AZ) deployment, read replicas, minimal downtime for software updates and upgrades, fault-tolerant storage, and continuous and incremental backups for point-in-time recovery.
Performance Efficiency
This Guidance uses EventBridge to build event-driven architectures and create point-to-point integrations without writing custom code or managing servers. You can use Step Functions to automate processes and orchestrate microservices and AWS IoT TwinMaker to create digital twins of real-world systems without needing to re-ingest or move data. Finally, Amazon SQS eliminates the overhead of capacity planning and infrastructure maintenance by scaling message queues automatically with demand.
Cost Optimization
With Lambda, you pay only for requests served and the compute time required to run your code. AppStream 2.0 lets you pay only for the desktop-as-a-service (DaaS) resources that you provision, plus a small monthly fee per end user, depending on the operating system. The fees for WorkSpaces include both infrastructure and the software applications listed in the bundle.
Sustainability
DynamoDB automatically scales tables to reduce resource usage, and Amazon S3 supports sustainability through optimized access patterns and storage tiers. Tiered storage allows you to store data based on how frequently you need to access it. For example, archived data will require fewer storage resources, which helps you minimize your workload's overall environmental impact.
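Tiering in Amazon S3 is configured through a bucket lifecycle policy that transitions objects to colder storage classes as they age. The sketch below shows the shape of such a configuration; the rule ID, prefix, and day thresholds are illustrative choices, not recommendations.

```python
# An S3 lifecycle configuration (the structure accepted by the
# PutBucketLifecycleConfiguration API) that tiers engineering artifacts to
# colder storage as they age. Prefix and day thresholds are illustrative.
lifecycle = {
    "Rules": [
        {
            "ID": "tier-engineering-artifacts",
            "Status": "Enabled",
            "Filter": {"Prefix": "artifacts/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
                {"Days": 180, "StorageClass": "GLACIER"},     # archival
            ],
        }
    ]
}
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="mbde-data-lake", LifecycleConfiguration=lifecycle)
```

Objects under `artifacts/` would move to Standard-IA after 30 days and to S3 Glacier after 180, reducing the storage footprint of rarely accessed archives.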
Implementation Resources
A detailed guide is provided for you to experiment with and use within your AWS account. It walks through each stage of the Guidance, including deployment, usage, and cleanup, to prepare it for use in your environment.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Related Content
DoD-Compliant Implementations in AWS
Model Based Systems Engineering (MBSE) on AWS: From Migration to Innovation
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.