
Guidance for Running Semiconductor Design Workflows on AWS

Overview

This Guidance demonstrates how to use AWS services to facilitate data movement and run semiconductor design workflows. It provides an overview architecture that shows how to set up these workflows using an infrastructure model similar to your on-premises data centers. By following this Guidance, you can set up your on-premises semiconductor design workloads on the AWS Cloud.

How it works

This architecture diagram shows data movement options for semiconductor design workflows using AWS services.
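As one hedged illustration of a data movement option, the sketch below uses boto3 to start an existing AWS DataSync task that copies design data into AWS and polls its status. The Region and task ARN are placeholders, and DataSync is only one of several possible transfer options; the services in your deployment may differ.

```python
import time
import boto3

# A minimal sketch: trigger an existing AWS DataSync task that copies design
# data (for example, from an on-premises NFS share to Amazon S3 or Amazon FSx).
# The task ARN below is a placeholder for a task you have already created.
datasync = boto3.client("datasync", region_name="us-east-1")

TASK_ARN = "arn:aws:datasync:us-east-1:111122223333:task/task-EXAMPLE"

execution = datasync.start_task_execution(TaskArn=TASK_ARN)
execution_arn = execution["TaskExecutionArn"]

# Poll until the transfer finishes; production code would add error handling
# and a timeout rather than looping indefinitely.
while True:
    status = datasync.describe_task_execution(TaskExecutionArn=execution_arn)["Status"]
    print(f"DataSync execution status: {status}")
    if status in ("SUCCESS", "ERROR"):
        break
    time.sleep(30)
```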

Well-Architected Pillars

The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.

AWS analytics services, such as Amazon OpenSearch Service or Amazon QuickSight, help you build business intelligence dashboards that provide the actionable insights required to quickly react to changes in the semiconductor design environment.
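As a hedged example of feeding such a dashboard, the sketch below indexes a job-completion record into an Amazon OpenSearch Service domain using the opensearch-py client with SigV4 signing. The domain endpoint, index name, and document fields are assumptions for illustration; your metrics pipeline may ingest data through a different path.

```python
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

REGION = "us-east-1"
# Placeholder endpoint for an existing OpenSearch Service domain.
HOST = "search-eda-metrics-abc123.us-east-1.es.amazonaws.com"

credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, REGION)

client = OpenSearch(
    hosts=[{"host": HOST, "port": 443}],
    http_auth=auth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)

# Index one job-completion record; the fields are illustrative only.
client.index(
    index="eda-job-metrics",
    body={
        "job_id": "rtl-regression-042",
        "tool": "rtl-simulation",
        "runtime_seconds": 5820,
        "license_feature": "sim-license",
        "instance_type": "c6i.4xlarge",
        "status": "PASS",
    },
)
```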

Read the Operational Excellence whitepaper

AWS provides the tools to protect data at rest and in transit. AWS Key Management Service (AWS KMS) integrates with AWS storage services to encrypt data at rest. You can encrypt network traffic by using AWS Virtual Private Network (AWS VPN) connectivity or AWS Direct Connect. AWS PrivateLink provides secure connectivity to third-party organizations without exposing data to the internet. You can federate access with your on-premises directory services, and that authentication applies to the entire architecture, from the remote desktop to running batch jobs.
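For example, you might enforce default encryption at rest on an S3 bucket that holds design data by using a customer managed AWS KMS key. The sketch below is a minimal illustration with boto3; the bucket name and key alias are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Placeholders: an existing bucket holding design data and a customer managed
# KMS key alias created for that data.
BUCKET = "example-eda-design-data"
KMS_KEY_ALIAS = "alias/eda-design-data"

# Apply default server-side encryption with AWS KMS (SSE-KMS) to the bucket.
# Enabling S3 Bucket Keys reduces the number of KMS requests and their cost.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KMS_KEY_ALIAS,
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```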

Read the Security whitepaper

With AWS Transit Gateway, you can design and build a highly available network topology that connects on-premises, AWS Partner, and cross-Region networks. Transit Gateway supports multiple customer gateway connections, so you can implement redundant AWS VPN connections for failover.
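A hedged sketch of that redundancy is shown below: the boto3 calls create a transit gateway and attach two AWS Site-to-Site VPN connections, one per on-premises customer gateway device. The ASNs and IP addresses are placeholders, and an actual deployment would typically use infrastructure as code rather than ad hoc API calls.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the transit gateway that connects VPCs and on-premises networks.
tgw = ec2.create_transit_gateway(
    Description="Semiconductor design network hub",
    Options={"AmazonSideAsn": 64512},
)["TransitGateway"]["TransitGatewayId"]

# Two customer gateways, one per on-premises VPN device, for redundancy.
on_prem_device_ips = ["203.0.113.10", "203.0.113.20"]  # placeholder public IPs
for public_ip in on_prem_device_ips:
    cgw = ec2.create_customer_gateway(
        BgpAsn=65000, PublicIp=public_ip, Type="ipsec.1"
    )["CustomerGateway"]["CustomerGatewayId"]

    # Each Site-to-Site VPN connection provides two tunnels, so this yields
    # four tunnels across two on-premises devices for failover.
    ec2.create_vpn_connection(
        CustomerGatewayId=cgw,
        TransitGatewayId=tgw,
        Type="ipsec.1",
        Options={"StaticRoutesOnly": False},
    )
```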

Read the Reliability whitepaper

You can automatically provision compute resources to meet compute requirements through AWS Auto Scaling. Workload schedulers integrate with Amazon EC2 to provision appropriate resources for the workload.
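A minimal sketch of that integration, under stated assumptions, follows: a scheduler hook reads its pending-job count and adjusts the desired capacity of an EC2 Auto Scaling group with boto3. The group name and the pending_job_count() helper are hypothetical; many schedulers provide this kind of integration through their own connectors.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Placeholder Auto Scaling group backing the compute farm.
ASG_NAME = "eda-compute-farm"
MAX_CAPACITY = 200


def pending_job_count() -> int:
    """Hypothetical helper: query the workload scheduler's queue depth."""
    return 42  # replace with a call to your scheduler's API


def scale_to_demand() -> None:
    # One job per instance is assumed purely for illustration; real sizing
    # depends on slots per instance and each job's resource requests.
    desired = min(pending_job_count(), MAX_CAPACITY)
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=ASG_NAME,
        DesiredCapacity=desired,
        HonorCooldown=False,
    )


if __name__ == "__main__":
    scale_to_demand()
```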

Read the Performance Efficiency whitepaper

AWS Auto Scaling allows you to automatically provision compute resources to run user jobs. By adding a workload scheduler and a license manager, you can manage resources dynamically within design workflows so that compute resources are provisioned only when they are needed and when licenses are available.
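The sketch below illustrates the license-aware half of that idea: it reads available license counts from AWS License Manager and caps the desired capacity accordingly. The license configuration name, the one-license-per-instance assumption, and the Auto Scaling group name are all assumptions; many EDA environments query a vendor license daemon (such as FlexNet) instead.

```python
import boto3

license_manager = boto3.client("license-manager")
autoscaling = boto3.client("autoscaling")

# Placeholders for illustration.
LICENSE_CONFIG_NAME = "eda-sim-licenses"
ASG_NAME = "eda-compute-farm"


def available_licenses() -> int:
    """Count unconsumed licenses tracked by AWS License Manager."""
    configs = license_manager.list_license_configurations()["LicenseConfigurations"]
    for config in configs:
        if config["Name"] == LICENSE_CONFIG_NAME:
            total = config.get("LicenseCount", 0)
            consumed = config.get("ConsumedLicenses", 0)
            return max(total - consumed, 0)
    return 0


def scale_within_license_limit(pending_jobs: int) -> None:
    # Assume one license and one instance per job purely for illustration.
    desired = min(pending_jobs, available_licenses())
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=ASG_NAME, DesiredCapacity=desired
    )


if __name__ == "__main__":
    scale_within_license_limit(pending_jobs=25)
```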

Read the Cost Optimization whitepaper

Workload schedulers can integrate with license managers and Amazon EC2 service endpoints to launch compute resources. If there is a sudden increase in demand (a burst workload), the workload scheduler launches the desired capacity based on its configuration. When jobs complete, the scheduler terminates idle resources.
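The cleanup step might look like the hedged sketch below: find running instances that the scheduler has tagged as idle and terminate them. The tag key and value are assumptions; most schedulers can run such a hook automatically once a node has been idle for a configured period.

```python
import boto3

ec2 = boto3.client("ec2")

# Assumption: the workload scheduler tags compute nodes with eda:node-state=idle
# once their jobs finish and the nodes have been drained.
IDLE_FILTERS = [
    {"Name": "tag:eda:node-state", "Values": ["idle"]},
    {"Name": "instance-state-name", "Values": ["running"]},
]

idle_instance_ids = [
    instance["InstanceId"]
    for reservation in ec2.describe_instances(Filters=IDLE_FILTERS)["Reservations"]
    for instance in reservation["Instances"]
]

# Terminating idle nodes promptly avoids paying for, and powering, unused capacity.
if idle_instance_ids:
    ec2.terminate_instances(InstanceIds=idle_instance_ids)
```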

Read the Sustainability whitepaper

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.