This Guidance demonstrates how to use AWS services to facilitate data movement and semiconductor design workflows. It provides an overview architecture that shows you how to set up these workflows using an infrastructure model similar to that of your on-premises data centers. By using this Guidance, you can set up your on-premises semiconductor design workloads on the AWS Cloud.
Architecture Diagram
Step 1
Determine what data is needed for a proof of concept or test.
Step 2
Transfer data required for design or tests from the on-premises data center into Amazon Simple Storage Service (Amazon S3) through AWS Snowball or AWS Direct Connect.
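For datasets small enough to move over a Direct Connect or VPN link, the upload step could be scripted with boto3 as sketched below. The bucket name, local directory, and key prefix are placeholders you would replace with your own values; larger datasets would instead ship on a Snowball device.

```python
import os
import boto3

s3 = boto3.client("s3")

BUCKET = "example-semiconductor-design-data"   # hypothetical bucket name
LOCAL_DIR = "/projects/tapeout-poc"            # hypothetical on-premises path
PREFIX = "poc/input"

# Walk the local design-data directory and upload each file to Amazon S3.
# upload_file handles multipart uploads automatically for large files.
for root, _dirs, files in os.walk(LOCAL_DIR):
    for name in files:
        local_path = os.path.join(root, name)
        key = f"{PREFIX}/{os.path.relpath(local_path, LOCAL_DIR)}"
        s3.upload_file(local_path, BUCKET, key)
        print(f"Uploaded {local_path} to s3://{BUCKET}/{key}")
```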
Step 3
Users connect to a remote desktop or command-line environment on AWS through the NICE DCV server, with access managed by a login server. AWS Directory Service simplifies access for both on-premises and remote users.
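As a sketch of the access-management piece, the boto3 call below lists the AWS Directory Service directories in the account so a login server or session broker could confirm which directory to authenticate users against. It assumes a directory already exists; DCV sessions themselves are managed on the server and are not shown here.

```python
import boto3

ds = boto3.client("ds")

# List the AWS Directory Service directories available in this account/Region.
# A login server could use this to confirm the directory ID it should join
# EC2 instances to or authenticate remote desktop users against.
response = ds.describe_directories()
for directory in response["DirectoryDescriptions"]:
    print(directory["DirectoryId"], directory["Name"], directory["Stage"])
```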
Step 4
Manage licenses and jobs supporting design and testing workflows with Amazon Elastic Compute Cloud (Amazon EC2).
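A minimal sketch of launching a small EC2 instance to host a license daemon or scheduler head node is shown below. The AMI, subnet, and security group IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Launch a small instance to host a license daemon or scheduler head node.
# The AMI, subnet, and security group IDs below are placeholders.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Role", "Value": "license-server"}],
    }],
)
print(response["Instances"][0]["InstanceId"])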
Step 5
Run workflows with Amazon EC2 and AWS Lambda using data stored in Amazon S3.
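One common pattern for this step is a Lambda function triggered by an S3 object-created event that hands new design inputs to the scheduler or compute fleet. The handler below is a sketch of that idea; the downstream queue URL is hypothetical.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/design-jobs"  # hypothetical

def handler(event, context):
    """Triggered by an S3 object-created event. For each new design input
    object, enqueue a job message for the compute fleet to pick up."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"status": "queued"}
```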
Step 6
Store tools and job data that support the Amazon EC2 instances on Amazon FSx for NetApp ONTAP, Amazon FSx for OpenZFS, or Amazon FSx for Lustre.
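As an illustration, the call below creates a scratch Amazon FSx for Lustre file system linked to an S3 bucket so data stored in S3 appears in the file system namespace. The capacity, subnet, and S3 paths are placeholders; the same create_file_system API (with different parameters) also covers FSx for NetApp ONTAP and FSx for OpenZFS.

```python
import boto3

fsx = boto3.client("fsx")

# Create a scratch FSx for Lustre file system linked to an S3 data repository.
# Capacity, subnet, and the S3 import/export paths are placeholders.
response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                      # GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://example-semiconductor-design-data/poc/input",
        "ExportPath": "s3://example-semiconductor-design-data/poc/output",
    },
)
print(response["FileSystem"]["FileSystemId"])
```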
Step 7
Move data from the Amazon FSx services to Amazon S3 as needed to take advantage of the range of S3 storage classes.
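Once results land back in S3 (for example, through an FSx for Lustre export), an S3 lifecycle configuration can transition older objects to lower-cost storage classes. The rule below is a sketch; the bucket name, prefix, and transition timings are assumptions.

```python
import boto3

s3 = boto3.client("s3")

# Transition exported job results to lower-cost storage classes over time.
# Bucket, prefix, and day thresholds are placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-semiconductor-design-data",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-job-results",
            "Filter": {"Prefix": "poc/output/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 180, "StorageClass": "GLACIER"},
            ],
        }],
    },
)
```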
Step 8
Connect Amazon Virtual Private Cloud (Amazon VPC) securely to fabrication, third-party IP providers, and collaborators with AWS Transit Gateway and AWS PrivateLink.
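On the PrivateLink side, connecting to a fabrication partner or third-party IP provider amounts to creating an interface VPC endpoint to their endpoint service, as sketched below. The service name, VPC, subnet, and security group identifiers are placeholders supplied by the provider or your network team.

```python
import boto3

ec2 = boto3.client("ec2")

# Create an interface VPC endpoint (AWS PrivateLink) to a partner's
# endpoint service. All identifiers below are placeholders.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=False,
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```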
Step 9
Analyze wafer data with the suite of AWS analytics services.
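A minimal example of this analytics step is running an Amazon Athena query over wafer test results stored in S3 and cataloged in AWS Glue. The database, table, columns, and result location below are hypothetical.

```python
import boto3

athena = boto3.client("athena")

# Query wafer test results stored in S3 and cataloged in AWS Glue.
# Database, table, columns, and result location are placeholders.
response = athena.start_query_execution(
    QueryString=(
        "SELECT lot_id, AVG(yield) AS avg_yield "
        "FROM wafer_test_results GROUP BY lot_id ORDER BY avg_yield"
    ),
    QueryExecutionContext={"Database": "fab_analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])
```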
Step 10
Optimize job queues and license usage using AWS AI/ML services to perform near real-time inference in the cloud or on-premises.
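If the optimization model is hosted on an Amazon SageMaker endpoint, near real-time inference from the scheduler could look like the sketch below. The endpoint name and payload format are assumptions specific to your own model.

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

# Ask a hosted model to predict the runtime (or license demand) of a queued job
# so the scheduler can place it efficiently. Endpoint name and payload fields
# are placeholders for your own model's interface.
payload = {"tool": "spice", "cell_count": 120000, "corner": "ss_0p72v_125c"}
response = runtime.invoke_endpoint(
    EndpointName="job-runtime-predictor",
    ContentType="application/json",
    Body=json.dumps(payload),
)
prediction = json.loads(response["Body"].read())
print(prediction)
```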
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
AWS analytics services, such as Amazon OpenSearch Service or Amazon QuickSight, help you build business intelligence dashboards that provide the actionable insights required to quickly react to changes in the semiconductor design environment.
Security
AWS provides the tools to protect data at rest and in transit. AWS Key Management Service (AWS KMS) integrates with AWS storage services to encrypt data at rest. You can encrypt network traffic by using AWS Virtual Private Network (AWS VPN) connectivity or Direct Connect. AWS PrivateLink provides secure connectivity to third-party organizations without exposing data to the internet. You can use federated access to directory services on-premises—the authentication applies to the entire architecture, from the remote desktop to running batch jobs.
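For example, default encryption with a customer managed AWS KMS key can be enforced on the bucket that holds design data. The bucket name and key ARN below are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Enforce default SSE-KMS encryption on the design-data bucket.
# Bucket name and KMS key ARN are placeholders.
s3.put_bucket_encryption(
    Bucket="example-semiconductor-design-data",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
            },
            "BucketKeyEnabled": True,
        }],
    },
)
```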
Reliability
With Transit Gateway, you can design and build a highly available network topology to connect on-premises, AWS Partner, and cross-Region networks. Transit Gateway supports multiple customer gateway connections so that you can implement redundant AWS VPN connections for failover.
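As a sketch, that redundancy can be implemented by attaching one Site-to-Site VPN connection per on-premises customer gateway to the same transit gateway. The gateway IDs below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

TRANSIT_GATEWAY_ID = "tgw-0123456789abcdef0"   # placeholder
CUSTOMER_GATEWAYS = [
    "cgw-0123456789abcdef0",                   # placeholder: data center router A
    "cgw-0fedcba9876543210",                   # placeholder: data center router B
]

# Create one Site-to-Site VPN connection per customer gateway, all
# terminating on the same transit gateway, for failover.
for cgw_id in CUSTOMER_GATEWAYS:
    response = ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw_id,
        TransitGatewayId=TRANSIT_GATEWAY_ID,
    )
    print(response["VpnConnection"]["VpnConnectionId"])
```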
Performance Efficiency
You can automatically provision compute resources to meet compute requirements through AWS Auto Scaling. Workload schedulers integrate with Amazon EC2 to provision appropriate resources for the workload.
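A minimal Auto Scaling group for the compute fleet might be defined as below, assuming a launch template already encodes the AMI and instance type. The group name, template name, and subnet IDs are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Create an Auto Scaling group for the batch compute fleet.
# The group name, launch template, and subnet IDs are placeholders.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="eda-compute-fleet",
    LaunchTemplate={"LaunchTemplateName": "eda-compute-node", "Version": "$Latest"},
    MinSize=0,
    MaxSize=100,
    DesiredCapacity=0,
    VPCZoneIdentifier="subnet-0123456789abcdef0,subnet-0fedcba9876543210",
)
```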
Cost Optimization
AWS Auto Scaling allows you to automatically provision compute resources to run user jobs. By adding a workload scheduler and license manager, you can manage resources dynamically within design workflows, so compute resources are provisioned only when they are needed and when licenses are available.
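The gating logic can be sketched as: check available entitlements (here via AWS License Manager; the configuration ARN and the pending-job count are placeholders), then raise the fleet's desired capacity only up to that number.

```python
import boto3

license_manager = boto3.client("license-manager")
autoscaling = boto3.client("autoscaling")

LICENSE_CONFIG_ARN = (
    "arn:aws:license-manager:us-east-1:111122223333:"
    "license-configuration:lic-0123456789abcdef0"
)  # placeholder
PENDING_JOBS = 25  # normally read from the workload scheduler

# Scale the compute fleet only as far as licenses allow.
config = license_manager.get_license_configuration(
    LicenseConfigurationArn=LICENSE_CONFIG_ARN
)
available = config.get("LicenseCount", 0) - config.get("ConsumedLicenses", 0)

autoscaling.set_desired_capacity(
    AutoScalingGroupName="eda-compute-fleet",
    DesiredCapacity=min(PENDING_JOBS, max(available, 0)),
)
```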
Sustainability
Workload schedulers can integrate with license managers and Amazon EC2 service endpoints to launch compute resources. If there is a sudden increase in demand (or a burst workload), the workload scheduler will launch the desired capacity based on the configuration. When the job is done, the scheduler will terminate idle resources.
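A simplified version of that clean-up behavior is shown below: find fleet instances the scheduler has tagged as idle and terminate them. The tag key and value convention is an assumption for this sketch.

```python
import boto3

ec2 = boto3.client("ec2")

# Find running fleet instances tagged as idle by the scheduler and terminate
# them. The SchedulerState tag convention is an assumption for this sketch.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:SchedulerState", "Values": ["idle"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

idle_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
if idle_ids:
    ec2.terminate_instances(InstanceIds=idle_ids)
    print(f"Terminated {len(idle_ids)} idle instances")
```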
Implementation Resources
A detailed guide is provided for you to experiment with and use within your AWS account. It examines each stage of building the Guidance, including deployment, usage, and cleanup, to prepare it for deployment in your environment.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Related Content
Run Semiconductor Design Workflows on AWS
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.