AWS for Industries

Economics of EDA on AWS: License Cost Optimization

Introduction

Electronic Design Automation (EDA) workloads have traditionally run on-premises on a mix of latest and older generation compute servers. The performance penalty of running EDA on older generation hardware is often neglected in discussions and in Total Cost of Ownership (TCO) models. With EDA license costs greatly exceeding IT spend in silicon organizations, running EDA on older hardware comes at a cost. Complicating matters further, many organizations continue to use compute servers that are past their depreciation cycle, which are then perceived as free in TCO calculations. With Amazon Web Services (AWS), customers can realize significant EDA license cost savings by running workloads on the latest generation of compute.

In this blog we present a mathematical model of the EDA license cost savings from running EDA workloads on the latest generation of Amazon Elastic Compute Cloud (Amazon EC2) instances, as compared to running in an on-premises environment with multiple generations of compute servers. Running EDA on current generation hardware on AWS results in faster time to market through quicker job turnaround times. This translates to direct revenue gain and a competitive edge for customers. We invite readers to benchmark Amazon EC2 for EDA and exercise the presented model to determine license cost savings with AWS.

EDA Compute Cluster Efficiency Model in a Typical Datacenter

EDA license cost in silicon organizations is often 5-10x the IT infrastructure cost. This ratio is less extreme in large semiconductor companies, which often have negotiated enterprise license agreements for EDA tools. Even in such cases, as stated by Intel, license spend overshadows IT infrastructure spend. Maximizing EDA license utilization is therefore key to reducing the cost of silicon design. On-premises EDA environments using older generation servers run at sub-100% efficiency, resulting in sub-optimal license utilization.

EDA workloads are computationally intensive. Job turnaround time depends on the type of compute servers used to run these workloads. Users pay a performance penalty for jobs running on older generation processors. While most users intuitively recognize this to be true, few studies have quantified the impact of running EDA on older generation compute. Our experience with customers indicates that on-premises EDA deployments add 20-40% latest generation compute capacity to their existing footprint annually. The inability to rapidly add compute to keep up with increasing design complexity and advanced manufacturing technology results in older generation hardware being retained in on-premises EDA clusters. It is typical to see hardware four to six generations old in use in on-premises EDA environments.

Running on older generation hardware has a major impact on EDA tool performance, as illustrated by Intel in this whitepaper. As mentioned in the article, per-core EDA relative performance over four generations of Intel servers is depicted in Figure 1 below:

With the above model as a performance baseline, and assuming that each of the last four generations of CPUs contributes roughly 25% of the on-premises EDA compute footprint, we present a simple mathematical model for the overall performance efficiency of an on-premises EDA compute cluster in terms of current generation cores. Normalizing on the current core architecture (Performance Factor = 1), we arrive at the relative performance of an on-premises EDA compute environment as illustrated in the table below.

Thus, we conclude that an on-premises cluster with a roughly even 25% core distribution across current and older generation hardware operates at 77.25% performance efficiency, compared to the 100% efficiency attained by running entirely on latest generation compute. The reduced performance efficiency impacts EDA tool efficiency and cost.
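The weighted-average calculation behind this conclusion can be sketched in a few lines of Python. Note that the per-generation performance factors below are illustrative assumptions chosen to be consistent with the 77.25% figure in the text; they are not the exact values from the Intel whitepaper.

```python
# Weighted performance efficiency of a mixed-generation EDA cluster.
# Performance factors are normalized to the current generation = 1.00
# and are assumed values for illustration; the 25% core share per
# generation follows the model described in the text.
perf_factor = [1.00, 0.87, 0.70, 0.52]  # current gen -> oldest gen (assumed)
core_share = [0.25, 0.25, 0.25, 0.25]   # fraction of cluster cores per gen

efficiency = sum(f * s for f, s in zip(perf_factor, core_share))
print(f"Cluster performance efficiency: {efficiency:.2%}")  # 77.25%
```

Substituting your own measured performance factors and core distribution into this sketch yields the efficiency of your specific environment.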

Mathematical Model for EDA License Cost Optimization with AWS

As illustrated in Figure 2 (EDA Compute Cluster Efficiency Model), compute clusters running a mix of latest generation and older hardware deliver only 77.25% of the performance possible with latest generation hardware alone. This scenario is typical of how on-premises EDA environments evolve over time. When calculating the cost of running EDA workloads in such environments, even accounting for the zero cost of fully depreciated servers, the performance and cost penalty of EDA tools running on older generation compute is often neglected. With EDA tool license cost dominating operational costs, this factor can be significant.

Amazon EC2 provides access to current generation processors in virtually unlimited capacity, deployable across the globe with a few clicks. The latest generation of x86 and Arm-based instances on AWS helps customers realize maximum performance and the full value of their EDA licenses. An EDA environment on AWS with latest generation hardware, optimized for performance, is depicted in the figure below:

For illustrative purposes, if we assume a compute cost of $0.06 per core-hour with 100% core utilization in a sample EDA environment comprising 15,000 cores and $20 million of annual EDA license spend, license cost savings can be up to $4.55 million. This math is depicted in the table below.

The above illustrative model reflects license cost savings of $4.55M. It further highlights the penalty paid for running EDA on older generation hardware on-premises and the benefit AWS provides to silicon design teams. Amazon EC2 offers flexible purchasing options to further enable compute cost optimization based on your needs.

As the above model demonstrates, with the cost per core-hour being the same on-premises and on AWS, there is an overall cost savings of 22.75% from the efficient use of EDA licenses when running EDA compute on latest generation EC2 instances. Alternatively, if the goal is to match on-premises performance in the cloud, it can be attained with only 77.25% as many latest generation instances on AWS.
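The savings arithmetic above can be reproduced with a short sketch. The inputs ($20 million license spend, 15,000 cores, $0.06 per core-hour, 77.25% efficiency) are taken from the illustrative example in the text; the hours-per-year figure assumes round-the-clock utilization.

```python
# Inputs from the illustrative example in the text.
license_spend = 20_000_000    # annual EDA license spend (USD)
on_prem_cores = 15_000        # cores in the mixed-generation cluster
cost_per_core_hour = 0.06     # same rate assumed on-premises and on AWS
efficiency = 0.7725           # mixed-cluster performance efficiency

# License value lost to the slower, older cores is recovered on AWS:
license_savings = license_spend * (1 - efficiency)
print(f"License cost savings: ${license_savings:,.2f}")  # $4,550,000.00

# Latest generation cores needed on AWS to match the mixed cluster's
# aggregate performance (the 77.25% sizing noted above):
equivalent_cores = on_prem_cores * efficiency
print(f"Equivalent latest-gen cores: {equivalent_cores:,.1f}")  # 11,587.5

# Annual compute cost at matched-performance sizing (24x365 hours,
# 100% utilization assumed):
hours_per_year = 24 * 365
aws_compute = equivalent_cores * cost_per_core_hour * hours_per_year
on_prem_compute = on_prem_cores * cost_per_core_hour * hours_per_year
print(f"Compute: ${aws_compute:,.0f} on AWS vs ${on_prem_compute:,.0f} on-prem")
```

Replacing these inputs with your own benchmarked efficiency and license spend gives the savings estimate for your environment.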

Conclusion

As we have illustrated above, there can be a significant cost to running EDA workloads on older generations of compute hardware. This cost is often neglected in TCO calculations, but it can have a major impact on the final results. We have presented a simple model the reader can use to calculate this impact in their own environment.

Without a cost breakdown over time, customers may have difficulty choosing the optimal purchasing option for their use case, resulting in over-spending. Instead of struggling to estimate the cost of running EDA workloads in the cloud, you can analyze the hourly cost structure of past on-premises workload resource allocation, without needing to share sensitive data with AWS. This can be done with the open source HPC cost simulator, enabling customers to choose an optimal compute purchase option from day one for cost optimization of their EDA workloads. Please visit the AWS for Semiconductor page for more information on how AWS can help you with its solutions.

Kartik Gopal

Kartik Gopal is a Sr. Solutions Architect at AWS and specializes in helping semiconductor customers adopt AWS for their design needs. Kartik’s domain knowledge comes from a combination of authoring EDA tools for silicon design and helping enterprises adopt cloud in his 17 years of industry experience.

Ratna Dasari

Ratna Dasari is a Solutions Architect on the Enterprise SA team at AWS. She works closely with semiconductor customers in building and architecting mission-critical SAP workloads. Ratna worked in IT organizations at semiconductor companies for over 20 years and has extensive experience in leading, architecting, and implementing SAP technology solutions.

Ravi Poddar

Dr. Ravi Poddar is a Senior Leader and Advisor for the Semiconductor Industry at Amazon Web Services. With over 25 years of experience in semiconductor design, he works closely with organizations in the industry to accelerate their transformation to the cloud. Ravi has held Director of Engineering positions at Pure Storage, Integrated Device Technology and Transmeta, working in the areas of ASIC and custom design methodology and physical design, both in bulk CMOS and SOI. Dr. Poddar received his BSEE with highest honors, MSEE and Ph.D. degrees from the Georgia Institute of Technology. He has over 20 publications in the areas of compute and storage optimization, parasitic extraction, circuit and device modeling, analog/mixed signal circuit design and machine learning/neural networks.

Umar Shah

Umar Shah is the Head of Solutions at Amazon Web Services focused on Semiconductor and Hi-Tech industry workloads and has worked in Silicon Valley for over 26 years. Prior to joining AWS, he was the ECAD manager at Lab126, where he created and delivered business and engineering best practices for Amazon EE teams. He has extensive experience in electronic sub-systems design, EDA design flow optimization, application engineering, project management, technical sales, technical writing, documentation and multimedia development, business development and negotiations, customer relations, and business execution.