This Guidance helps you accelerate your product development lifecycle by using AWS as the foundation for a model-based systems engineering (MBSE) approach to engineering and design. MBSE allows you to securely transform traditional document-based engineering environments into a modern, model-based cloud computing platform. This multi-disciplinary, multi-application model helps aerospace companies adopt agile product development practices, connect with other aerospace teams across the globe, and gain the support they need at every stage of MBSE adoption and automation.
Architecture Diagram
Step 1
Use AWS CloudFormation to deploy Scale-Out Computing on AWS (SOCA) and build a centralized engineering environment (e.g., high-performance computing [HPC] and virtual desktop infrastructure [VDI]), where you can deploy MBSE tools. Bring your own MBSE tools, or find them on AWS Marketplace. (Option A.)
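As a minimal sketch of Step 1, the following Python (boto3) snippet launches a CloudFormation stack for the engineering environment. The template URL, stack name, and parameter key are placeholders; use the template and parameters published with the SOCA solution.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Launch a CloudFormation stack for the centralized engineering environment.
# The template URL and parameter below are placeholders; substitute the
# values published with Scale-Out Computing on AWS (SOCA).
response = cfn.create_stack(
    StackName="mbse-soca-environment",
    TemplateURL="https://example-bucket.s3.amazonaws.com/soca-template.yaml",  # placeholder
    Parameters=[
        {"ParameterKey": "ClusterId", "ParameterValue": "mbse-cluster"},  # hypothetical parameter
    ],
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],
)
print("Stack creation started:", response["StackId"])
```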
Step 2
Use Amazon AppStream 2.0 for non-persistent VDI or Amazon WorkSpaces for persistent VDI to access MBSE tools. Bring your own MBSE tools, or find them on AWS Marketplace. (Option B.)
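A minimal sketch of Option B, assuming Amazon WorkSpaces for persistent VDI; the directory, user, and bundle identifiers are placeholders for your own environment.

```python
import boto3

workspaces = boto3.client("workspaces")

# Provision a persistent virtual desktop for an engineer (Option B).
# Directory, user, and bundle values are placeholders.
workspaces.create_workspaces(
    Workspaces=[
        {
            "DirectoryId": "d-0123456789",        # placeholder directory
            "UserName": "engineer1",              # placeholder user name
            "BundleId": "wsb-0123456789abcdef0",  # placeholder bundle
        }
    ]
)
```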
Step 3
Use AWS IoT TwinMaker to create digital twins alongside MBSE models.
Step 4
Amazon API Gateway is the center of communications among applications and environments. API-based microservices integrate new technologies and complementary services. AWS AppSync is also an applicable option.
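As an illustration of Step 4, the following sketch quick-creates an HTTP API in API Gateway that proxies requests to a Lambda-backed microservice; the API name and Lambda ARN are placeholders.

```python
import boto3

apigw = boto3.client("apigatewayv2")

# Quick-create an HTTP API that fronts a Lambda-backed microservice.
# The target Lambda ARN is a placeholder for one of your own functions.
api = apigw.create_api(
    Name="mbse-engineering-api",
    ProtocolType="HTTP",
    Target="arn:aws:lambda:us-east-1:111122223333:function:mbse-workflow-handler",  # placeholder
)
print("Invoke URL:", api["ApiEndpoint"])
```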
Step 5
Use Amazon EventBridge to trigger workflows based on all events, including MBSE events.
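A minimal sketch of Step 5: publishing an MBSE model-change event to EventBridge so that rules can route it to downstream workflows. The bus name, source, and detail fields are illustrative, not defined by the Guidance.

```python
import json
import boto3

events = boto3.client("events")

# Publish an MBSE model-change event to a custom event bus. Rules on the
# bus match events like this one and trigger the Step 6 workflow.
events.put_events(
    Entries=[
        {
            "EventBusName": "mbse-engineering-bus",  # hypothetical bus name
            "Source": "mbse.modeling-tool",          # illustrative source
            "DetailType": "ModelElementChanged",
            "Detail": json.dumps({"modelId": "sys-arch-001", "change": "requirement-updated"}),
        }
    ]
)
```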
Step 6
Amazon Simple Queue Service (Amazon SQS) ensures that each message is processed. AWS Step Functions builds state machine-based workflows that are executed by AWS Lambda functions.
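The following sketch shows Step 6 under simple assumptions: a two-state Step Functions state machine whose tasks are executed by Lambda functions. The state names, function ARNs, and execution role ARN are placeholders.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# A minimal state machine: run an ephemeral simulation, then record the
# result. Both tasks are executed by Lambda functions (placeholder ARNs).
definition = {
    "StartAt": "RunSimulation",
    "States": {
        "RunSimulation": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:run-simulation",  # placeholder
            "Next": "RecordResult",
        },
        "RecordResult": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:record-result",  # placeholder
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="mbse-simulation-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/mbse-sfn-role",  # placeholder execution role
)
```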
Step 7
Based on the workflow selection, compute instances create ephemeral simulations; Amazon DynamoDB tables track engineering activities; Amazon Simple Storage Service (Amazon S3) stores objects (output files) in a bucket; and Amazon Simple Notification Service (Amazon SNS) handles team communications.
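A sketch of a Lambda handler that performs the Step 7 bookkeeping: it records the activity in DynamoDB, writes the output file to Amazon S3, and notifies the team through Amazon SNS. The table, bucket, and topic names are placeholders.

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")
sns = boto3.client("sns")

def handler(event, context):
    # Track the engineering activity in a DynamoDB table (placeholder name).
    dynamodb.Table("mbse-engineering-activities").put_item(
        Item={"activityId": event["activityId"], "status": "SIMULATION_COMPLETE"}
    )

    # Store the simulation output in the data lake bucket (see Step 8).
    s3.put_object(
        Bucket="mbse-engineering-data-lake",  # placeholder bucket
        Key=f"simulations/{event['activityId']}/results.json",
        Body=json.dumps(event.get("results", {})),
    )

    # Notify the engineering team through an SNS topic (placeholder ARN).
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:111122223333:mbse-team-updates",
        Message=f"Simulation {event['activityId']} completed.",
    )
    return {"statusCode": 200}
```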
Step 8
Store engineering documents and objects in a centralized data lake.
Step 9
Amazon Textract extracts text from documents, and Amazon Comprehend detects entities and key phrases in that text. Amazon OpenSearch Service makes the results searchable to unlock insights. AWS Glue crawls and catalogs data for engineering use cases. Amazon Neptune creates ontology and multi-domain relationship knowledge graphs for artifacts and users.
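A minimal sketch of the Step 9 extraction path: Textract pulls the text out of a document image stored in the data lake, and Comprehend detects entities in that text. The bucket and object key are placeholders, and downstream cataloging with AWS Glue and Neptune is not shown.

```python
import boto3

textract = boto3.client("textract")
comprehend = boto3.client("comprehend")

# Extract text from an engineering document image in the data lake bucket.
extraction = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "mbse-engineering-data-lake", "Name": "documents/spec-001.png"}}
)
text = " ".join(
    block["Text"] for block in extraction["Blocks"] if block["BlockType"] == "LINE"
)

# Detect entities (for example, organizations or quantities) in the extracted text.
entities = comprehend.detect_entities(Text=text[:5000], LanguageCode="en")
for entity in entities["Entities"]:
    print(entity["Type"], entity["Text"])
```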
Step 10
The artifact store, comprising OpenSearch Service, the AWS Glue Data Catalog, and Neptune, feeds data back to MBSE tools on AWS through API Gateway.
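A sketch of Step 10, assuming the artifact store is exposed through the API Gateway endpoint created in Step 4 and queried over HTTPS; the endpoint URL, API key, and response fields are hypothetical.

```python
import requests

# Query the artifact store through the API Gateway endpoint from Step 4.
# The URL, API key, and response shape are placeholders for your deployment.
response = requests.get(
    "https://abc123.execute-api.us-east-1.amazonaws.com/artifacts",
    params={"query": "thermal-subsystem"},
    headers={"x-api-key": "REPLACE_WITH_API_KEY"},
    timeout=10,
)
for artifact in response.json().get("artifacts", []):
    print(artifact["name"], artifact["source"])
```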
Step 11
Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling, Elastic Load Balancing (ELB), Amazon Elastic Block Store (Amazon EBS), and Amazon EC2 deliver connectivity for heterogeneous enterprise applications and associated data models across design and operational environments.
Step 12
AWS provides lifecycle governance controls for permissioning, monitoring, and responding. To enable data supply for the US Government, refer to Cross-Domain Solutions with AWS.
Step 13
AWS CodePipeline automates DevSecOps for continuous integration and continuous delivery (CI/CD) in MBSE workflows and operations.
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
-
Operational Excellence
Whether you are just starting out with MBSE as a tool or putting MBSE at the center of your enterprise strategy, this architecture provides the flexibility for you to get started. You can use Option A to incorporate MBSE into your existing environment or use tools like SOCA to centralize MBSE. The services in this architecture and the data lake approach enable centralized management and visibility for IT and security teams. Additionally, the architecture uses data analytics services to generate insights for engineering teams, so they can forecast how changes will impact larger systems.
-
Security
This architecture uses AWS Identity and Access Management (IAM) and Amazon CloudWatch to protect data. IAM provides role-based access control, giving data access privileges to only the roles that need it. With CloudWatch, you can set up metrics to monitor application activity from multiple AWS accounts within a Region.
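As one concrete example of the monitoring described above, the following sketch creates a CloudWatch alarm on errors from a workflow Lambda function and sends notifications to an SNS topic; the function name and topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the workflow Lambda function reports errors, and notify the
# team through SNS. The function name and topic ARN are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="mbse-workflow-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "mbse-workflow-handler"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:mbse-team-updates"],  # placeholder
)
```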
-
Reliability
This architecture uses a microservices approach, which decouples services for a particular engineering function from services that support a different engineering function. By decoupling these services, you can experiment with new technologies for one function without altering the operability of other functions. The services in the Human-Machine Engineering Workflow capture, document, and respond to all events, maintaining a single “source of truth” for all event-based activity and communications.
-
Performance Efficiency
The services in this architecture allow for data interoperability across multiple stages of the data lifecycle. The AWS Management Console gives you visibility into your data's access patterns, such as requests for or changes to data and the velocity or size of that data. You can then build business logic based on traffic patterns and execute that logic with extensible APIs.
-
Cost Optimization
This architecture uses cost-saving features such as automation through CodePipeline, scalability through Amazon S3, and centralized administration through AWS Organizations. These features allow for early detection and correction of defects in the design process, which reduces total development costs and schedule overruns.
-
Sustainability
This architecture uses services that scale resources up and down based on usage. These services help monitor file system throughput and dynamically adjust the throughput mode to “provisioned” or “bursting” to optimize resource use. With the “Detective” services in this architecture, you can visualize productivity metrics, emissions, or cost-out targets through dashboards and adjust business priorities to meet your sustainability targets.
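A minimal sketch of the throughput adjustment described above, assuming the file system is Amazon EFS (the Guidance does not name the file system service); the file system ID and throughput value are placeholders.

```python
import boto3

efs = boto3.client("efs")

FILE_SYSTEM_ID = "fs-0123456789abcdef0"  # placeholder file system ID

# Switch to provisioned throughput ahead of a heavy simulation window.
efs.update_file_system(
    FileSystemId=FILE_SYSTEM_ID,
    ThroughputMode="provisioned",
    ProvisionedThroughputInMibps=128,  # placeholder value
)

# Later, return to bursting mode so you are not paying for idle throughput.
efs.update_file_system(
    FileSystemId=FILE_SYSTEM_ID,
    ThroughputMode="bursting",
)
```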
Implementation Resources
A detailed guide is provided for you to experiment with and use within your AWS account. Each stage of building the Guidance, including deployment, usage, and cleanup, is covered to prepare it for deployment in your environment.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Related Content
Computational Fluid Dynamics on AWS
DoD-Compliant Implementations in AWS
Model Based Systems Engineering (MBSE) on AWS: From Migration to Innovation
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.