[SEO Subhead]
This Guidance demonstrates how to analyze workbench test results and optimize test parameters on AWS, which can help reduce the number of tests and the time it takes to get results. With this Guidance, chip designers and contract manufacturers can run test jobs and quickly analyze test results using AWS services. They can run workloads in the cloud that are similar to those they would run on on-premises infrastructure, but with reduced time and cost.
Please note: [Disclaimer]
Architecture Diagram
[Architecture diagram description]
Step 1
A tester stages data on the data repository and uses the data uploader to upload data to an Amazon Simple Storage Service (Amazon S3) bucket for centralized storage.
Step 2
Amazon Elastic Compute Cloud (Amazon EC2) instances, scaled by Amazon EC2 Auto Scaling, parse the data into test files. The data is stored in the S3 bucket and in an Amazon DocumentDB database.
Step 3
A user submits a Dynamic Parameter Reduction (DPR) request through Amazon API Gateway. An AWS Lambda function stores the request in the task queue and state store in an Amazon Aurora database.
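A minimal sketch of the Step 3 Lambda handler is shown below, assuming an API Gateway proxy event. The request fields (`lot_id`, `parameters`) and the commented-out Aurora insert are hypothetical, since the Guidance does not define the task-queue schema.

```python
# Hedged sketch of the Step 3 Lambda function behind API Gateway.
import json
import uuid


def build_task_record(body: dict) -> dict:
    """Normalize an incoming DPR request into a task-queue row."""
    return {
        "task_id": str(uuid.uuid4()),
        "lot_id": body["lot_id"],                 # hypothetical field names
        "parameters": body.get("parameters", []),
        "status": "QUEUED",
    }


def handler(event, context):
    """Lambda entry point: store the DPR request and return its task id."""
    record = build_task_record(json.loads(event["body"]))
    # insert_into_aurora(record)  # e.g. via an RDS Data API ExecuteStatement call
    return {"statusCode": 202, "body": json.dumps({"task_id": record["task_id"]})}
```

Returning 202 (Accepted) reflects that the DPR job runs asynchronously: the broker and controller process it later, and the user is notified by email on completion.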
Step 4
The broker (an EC2 instance) pulls the DPR request from the task queue, queries data from Amazon DocumentDB, updates the request state in the state store, and invokes the controller.
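The broker in Step 4 can be sketched as a simple polling loop. The four callables (`fetch_next_task`, `query_documentdb`, `mark_running`, `invoke_controller`) stand in for queries and service calls this Guidance does not specify.

```python
# Hedged sketch of the Step 4 broker loop; the injected callables are
# placeholders for the Aurora, DocumentDB, and controller integrations.
import time


def broker_loop(fetch_next_task, query_documentdb, mark_running,
                invoke_controller, poll_seconds=5.0, max_cycles=None):
    """Pull DPR requests from the task queue and hand each to the controller."""
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        task = fetch_next_task()              # e.g. SELECT the oldest QUEUED row in Aurora
        if task is not None:
            docs = query_documentdb(task)     # pull parsed test data for this request
            mark_running(task)                # update the state store
            invoke_controller(task, docs)     # kick off the controller on Amazon ECS
        else:
            time.sleep(poll_seconds)          # back off when the queue is empty
        cycles += 1
```

Injecting the integrations as parameters keeps the loop testable offline; in production each would wrap a database driver or SDK call.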
Step 5
The controller, running on Amazon Elastic Container Service (Amazon ECS), invokes the Lambda function running the file formatter to prepare data in the S3 bucket. The file formatter updates the status in the Aurora state store.
Step 6
The controller also initiates the correlation service running on AWS Glue, and the results are sent to the Optimizer (Amazon ECS). The S3 bucket stores optimized results.
Step 7
The state store notifies the broker that the DPR job has completed, and the user receives a notification email.
Step 8
Users collect results from the S3 bucket.
Step 9
After the results are reviewed, the user adjusts parameters and provides these new parameters to testers to drive future testing.
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
-
Operational Excellence
Lambda helps you scale your workbench testing process through a serverless architecture. API Gateway helps you build reliable production workbench endpoints that support easy migration from one API version to the next and can be reached from anywhere in the world.
-
Security
By using AWS Virtual Private Network (AWS VPN), you can secure your connections when transferring data and when using the AWS Management Console. AWS VPN is a flexible way to create secure connections between on-premises hardware and AWS.
-
Reliability
Data parsing is often the most compute-intensive step of this architecture diagram. By using Amazon EC2 Auto Scaling, you can scale compute resources as needed and automatically replace failed instances. This reduces the amount of time it takes to parse data so that spikes in demand or failed instances don’t interrupt your workloads.
-
Performance Efficiency
Using AWS Glue, you can consolidate, optimize, and streamline workbench data. This helps you avoid lengthy delays and eliminate redundancies. AWS Glue requires little to no prior ETL experience, so you can have this service up and running in minutes.
-
Cost Optimization
With Amazon EC2 Auto Scaling, workbench testing uses only the compute it requires and shuts down resources when they are no longer needed, so you no longer pay for idle resources. Additionally, Amazon DocumentDB is a managed service, which helps you save on the administrative and infrastructure costs traditionally associated with running your own database instances.
-
Sustainability
Rather than having idle resources consume power and cooling, you can scale compute to match demand and commit to baseline usage through Compute Savings Plans, so you use only what your workloads require. Additionally, using Lambda to run serverless functions helps you consume only a fraction of the compute resources typically required for similar workloads running in on-premises environments.
Implementation Resources
A detailed guide is provided for you to experiment with and use within your AWS account. Each stage of the Guidance, including deployment, usage, and cleanup, is examined to prepare it for use in your environment.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Related Content
[Title]
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.