AWS Partner Network (APN) Blog

Boost Chip Design with AI: How Synopsys DSO.ai on AWS Delivers Lower Power and Faster Time-to-Market

By James Chuang, Product Manager, DSO.ai Fusion Compiler – Synopsys
By Pedro Gil, Sr. Solutions Architect, HPC – AWS


One of the most exciting advancements in chip design to come about in the last decade is the application of artificial intelligence (AI).

Engineering teams are looking for ways to keep ahead of the fast pace of innovation as systems-on-chip (SoCs) have grown more complex and driven massive growth in electronic design automation (EDA) workloads. With AI now handling repetitive tasks in the chip development cycle, engineers can focus more of their time on enhancing chip quality and differentiation.

Synopsys is an AWS Partner and AWS Marketplace Seller that’s harnessing the power of AI with its Synopsys.ai full-stack AI-driven EDA suite on Amazon Web Services (AWS).

One of the pioneering components of the solution is Synopsys DSO.ai, the semiconductor industry’s first autonomous AI application for chip design. Synopsys DSO.ai searches for optimization targets in very large solution spaces of chip design, utilizing reinforcement learning to enhance power, performance, and area (PPA).

Synopsys DSO.ai chip design benefits include:

  • Enhanced PPA: AI can take on exploration of these large design spaces—with an almost infinite number of design choices—to identify areas for optimization to enhance PPA of each unique design, in weeks rather than months.
  • Better productivity: By taking on iterative tasks, AI frees engineers to focus on chip design differentiation and quality and meeting time-to-market targets.
  • Support for reuse: AI drives even greater efficiencies into chip development processes by taking the results and learnings from one project and applying them to the next.
  • Faster design migration: Chip design teams can more quickly migrate their designs from one process node to another.

AWS ParallelCluster is an open-source cluster management tool that makes it easy to deploy and manage high-performance computing (HPC) clusters on AWS. It supports multiple instance types, multiple job submission queues, and job schedulers such as AWS Batch and Slurm, and it brings cloud advantages such as elasticity and fast setup to massive EDA workloads.
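As an illustration, a ParallelCluster v3 deployment is driven by a YAML cluster configuration file. The fragment below is a minimal sketch of a Slurm-based cluster of the kind discussed in this post; the subnet ID and key name are placeholders, and a real EDA deployment would also define shared storage and networking in detail:

```yaml
# Illustrative AWS ParallelCluster v3 configuration (placeholders throughout).
Region: us-east-1
Image:
  Os: centos7
HeadNode:
  InstanceType: m6i.4xlarge
  Networking:
    SubnetId: subnet-0123456789abcdef0   # placeholder
  Ssh:
    KeyName: my-keypair                  # placeholder
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: eda
      ComputeResources:
        - Name: r5d-8xl
          InstanceType: r5d.8xlarge
          MinCount: 0                    # scale to zero when idle
          MaxCount: 30
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0     # placeholder
```

A cluster is then created from this file with `pcluster create-cluster --cluster-name dso-ai --cluster-configuration cluster.yaml`, where `dso-ai` is an example cluster name.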

This post describes chip design using Synopsys DSO.ai, the design space optimization tool in the Synopsys.ai suite, on AWS ParallelCluster to take advantage of auto-scaling and reduce costs.

Solution Overview

High-performance computing resources are critical for running the Synopsys DSO.ai compute engine to its fullest potential. An AI-driven run may use 15-30 machines for several weeks, whereas a traditional, engineer-driven flow typically uses only 3-5 machines but runs for months to reach the PPA targets of a complex chip design. As a result, a private on-premises data center may not be the most cost-efficient option.

Figure 1 – Synopsys DSO.ai reinforcement-learning model to generate desired outputs.

The additional compute power required can be met either by adding more on-premises machines (a costly endeavor) or by scaling in the cloud. AWS ParallelCluster helps customers take full advantage of Synopsys DSO.ai through AWS features such as auto-scaling, elasticity, and fast setup, delivering optimal performance for massive EDA workloads.

Figure 2 – AWS ParallelCluster deployment architecture with Synopsys DSO.ai.

Performance and Cost Testing

For this post, performance testing of Synopsys DSO.ai on AWS ParallelCluster was done with three different AWS instance types, comparing the average auto-scaling cost during the run of the test design. Synopsys DSO.ai produced the same performance outputs as on-premises runs, but cost and agility of implementation were the decisive factors in recommending AWS ParallelCluster as the HPC platform.

Following are the instance types used for benchmarking Synopsys DSO.ai:

| Instance Type | Processor | vCPU/GPU | Memory (GiB) | Instance Storage (GB) |
|---------------|-----------|----------|--------------|-----------------------|
| r5d.8xlarge | Intel Xeon Platinum 8000 series | 32 / – | 256 | 2 × 600 NVMe SSD |
| m6i.4xlarge | Intel Xeon 3rd generation (Ice Lake) | 16 / – | 64 | EBS only |
| g4dn.xlarge | AMD EPYC 7R32 | 8 / 1 | 32 / 8 (GPU) | 1 × 150 NVMe SSD |

Benchmarking configuration:

| Component | Configuration |
|-----------|---------------|
| Operating system | CentOS 7 |
| HPC orchestrator | AWS ParallelCluster v3.6.0 |
| Desktop visualization tool | NICE DCV |
| Synopsys DSO.ai version | 2022.12 |
| Total core count | 540 |
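With the cluster running, DSO.ai worker jobs are dispatched through Slurm like any other batch workload. The batch script below is a hypothetical sketch: the partition name and the `run_dso_worker.sh` wrapper are placeholders, since the actual tool invocation is site-specific and comes from your Synopsys installation, not this post.

```bash
#!/bin/bash
#SBATCH --job-name=dso-ai-worker
#SBATCH --cpus-per-task=8        # eight cores per job, matching the test setup
#SBATCH --partition=eda          # placeholder queue name
#SBATCH --output=%x-%j.log

# Placeholder wrapper around the site-specific Synopsys DSO.ai invocation.
./run_dso_worker.sh
```

Submitting one such job per worker (for example, `sbatch dso_worker.sbatch`) lets the ParallelCluster Slurm integration scale compute nodes up as jobs queue and back down as they finish.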

Cost Savings

AWS ParallelCluster allows you to launch and terminate clusters as needed. You only pay for the compute resources you use while the cluster is running, which can lead to significant cost savings compared to maintaining on-premises clusters that are always running.

You can also leverage Amazon EC2 Spot Instances with AWS ParallelCluster to access spare AWS capacity at a significantly lower cost than On-Demand Instances, with savings of up to 90% on compute costs.

AWS ParallelCluster supports auto-scaling, which allows you to automatically add or remove instances based on workload demand. This ensures you have the right amount of capacity to handle your workload efficiently, minimizing idle resources and associated costs.
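To make the pay-for-what-you-use model concrete, the sketch below compares the monthly cost of an always-on cluster with an auto-scaled one. All rates, fleet sizes, and utilization figures are hypothetical placeholders, not AWS list prices:

```python
# Rough cost comparison: always-on cluster vs. auto-scaled ParallelCluster.
# All figures are hypothetical placeholders, not AWS list prices.

HOURLY_RATE = 2.0      # assumed On-Demand $/hour per instance
NUM_INSTANCES = 30     # compute fleet size during a DSO.ai run
HOURS_IN_MONTH = 730

def always_on_cost(rate=HOURLY_RATE, n=NUM_INSTANCES, hours=HOURS_IN_MONTH):
    """Cluster runs 24/7 whether or not jobs are queued."""
    return rate * n * hours

def auto_scaled_cost(busy_hours, rate=HOURLY_RATE, n=NUM_INSTANCES,
                     spot_discount=0.0):
    """Instances exist only while jobs run; optional Spot discount in [0, 1]."""
    return rate * (1.0 - spot_discount) * n * busy_hours

print(f"always-on:     ${always_on_cost():,.0f}/month")
print(f"auto-scaled:   ${auto_scaled_cost(busy_hours=200):,.0f}/month")
print(f"scaled + Spot: ${auto_scaled_cost(busy_hours=200, spot_discount=0.7):,.0f}/month")
```

Even under these rough assumptions, a cluster that is busy about 200 hours a month costs a fraction of one that runs continuously, and a Spot discount compounds the savings.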

Figure 3 – Auto-scaling in AWS ParallelCluster.

You can also use AWS ParallelCluster’s integration with AWS Cost Explorer to gain insights into your cluster’s cost trends and identify areas where cost optimization is possible.
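Because ParallelCluster tags the resources it creates with the cluster name, a tag-filtered Cost Explorer query can break out a cluster's spend. The sketch below assumes the `parallelcluster:cluster-name` tag has been activated as a cost allocation tag; the date range and cluster name are placeholders:

```shell
# Daily unblended cost for one cluster's tagged resources
# (requires AWS CLI credentials with Cost Explorer access).
aws ce get-cost-and-usage \
  --time-period Start=2023-06-01,End=2023-06-30 \
  --granularity DAILY \
  --metrics UnblendedCost \
  --filter '{"Tags":{"Key":"parallelcluster:cluster-name","Values":["dso-ai"]}}'
```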

Figure 4 – AWS Cost Explorer test design running Synopsys DSO.ai.

It’s important to note actual cost savings will depend on factors such as the size of your workload, resource utilization patterns, instance types you choose, and your ability to effectively manage and optimize your cluster.

PPA Performance

The test design comprised a CPU block on 5nm technology with 150,000 instances. For compute, we used 30 EC2 instances with eight cores per job, running two efforts each on the compile, clock, and route slices, which yielded a 75% reduction in total negative slack (TNS) and a 21% reduction in design rule check (DRC) violations.


Figure 5 – On-premises vs. AWS cloud – compile slice.

Below are a few examples of design applications across technology nodes that benefit from AI-driven design:

| Design | Node | Benefit |
|--------|------|---------|
| GPU | 5nm | 9% better total power |
| AI accelerator | 5nm | 8% better total power |
| HPC CPU | 7nm | 25% frequency boost |
| Embedded CPU | 16nm | 2X faster time-to-target |
| Mobile SoC | 6nm | 20% better leakage power |
| Mobile SoC | 3nm | 10% smaller area |
| Image sensor | 40nm | 12% smaller area |
| PCIe x16 | 5nm | 7% smaller area |

Conclusion

AWS high-performance computing (HPC) solutions can help customers take full advantage of Synopsys DSO.ai by optimizing costs, paying only for resources used while the workload is running. In an AWS ParallelCluster deployment, Synopsys DSO.ai autonomously delivered 20% lower power and timing closure on a Synopsys ARC compute core, with significant cost savings.

Overall, Synopsys DSO.ai users report productivity enhancements of more than 3x, power reductions of up to 15%, and substantial die size reductions. Designers can achieve higher performance, lower power consumption, and smaller chip area with less manual effort.



Synopsys – AWS Partner Spotlight

Synopsys is an AWS Partner that’s harnessing the power of AI with its Synopsys.ai full-stack AI-driven EDA suite on AWS.

Contact Synopsys | Partner Overview | AWS Marketplace