Moving EDA workloads to AWS cloud to speed Arm designs by 10x - Embedded.com


Amazon Web Services (AWS) said Arm plans to move the majority of its electronic design automation (EDA) workloads to the cloud, potentially increasing throughput by up to 10x for semiconductor design and verification.

Design engineers have been gravitating toward cloud platforms for some time, and even more so since the rapid shift to remote working in 2020 as a result of the Covid-19 restrictions around the world. Both AWS and Microsoft operate at significant scale, and there are also services such as Intel's DevCloud.

Arm's migration of its EDA workloads to AWS is therefore a significant move, and one that should make life easier for design engineers using Arm processors in their product development. Arm ultimately plans to reduce its global datacenter footprint by at least 45% and its on-premises compute by 80% as it completes the migration.

The platform leverages AWS Graviton2-based instances (powered by Arm Neoverse cores) and is expected to transform much of the semiconductor industry that traditionally uses on-premises data centers for the computationally intensive work of verifying semiconductor designs.

To carry out verification more efficiently, Arm will use the cloud to run simulations of real-world compute scenarios, taking advantage of AWS' virtually unlimited storage and high-performance computing infrastructure to scale the number of simulations it can run in parallel. Since beginning its AWS cloud migration, Arm said it has realized a 6x improvement in turnaround time for EDA workflows on AWS. In addition, by running telemetry data analysis on AWS, Arm said it is generating more powerful engineering, business, and operational insights that help increase workflow efficiency and optimize costs and resources across the company.
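
The scaling idea here is that verification scenarios are independent, so they can be fanned out across as many workers as the cloud will provide. A minimal local sketch of that fan-out pattern, using Python's standard `concurrent.futures` (the `run_simulation` job is a hypothetical stand-in; in a real flow each job would launch an EDA simulator on its own cloud instance):

```python
from concurrent.futures import ThreadPoolExecutor

def run_simulation(scenario_id: int) -> dict:
    # Hypothetical stand-in for one verification simulation job.
    # In practice this would invoke a simulator binary for the
    # given real-world compute scenario.
    return {"scenario": scenario_id, "passed": True}

def run_campaign(num_scenarios: int, workers: int) -> list:
    # Fan the independent scenarios out across available workers.
    # In the cloud, `workers` can grow elastically with demand
    # rather than being capped by a fixed on-premises cluster.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_simulation, range(num_scenarios)))

if __name__ == "__main__":
    results = run_campaign(num_scenarios=100, workers=8)
    print(sum(r["passed"] for r in results))  # prints 100
```

In a real deployment the threads would be replaced by separate instances (for example via a batch scheduler), but the structure is the same: more parallel workers means more simulations completed per unit of wall-clock time.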

Highly specialized semiconductors now power almost everything in modern life, from smartphones to data center infrastructure to emerging technologies such as self-driving vehicles. With each chip containing billions of transistors engineered down to the single-digit-nanometer level (roughly 100,000x smaller than the width of a human hair), the goal is to drive maximum performance in minimal space.

EDA is one of the key technologies that make such extreme engineering feasible. EDA workflows are complex and include front-end design, simulation, and verification, as well as increasingly large back-end workloads that include timing and power analysis, design rule checks, and other applications to prepare the chip for production. These highly iterative workflows can take many months or even years to produce new devices and systems-on-chip (SoCs) and involve massive compute power. Semiconductor companies that run these workloads on-premises must constantly balance costs, schedules, and data center resources to advance multiple projects at the same time. As a result, they can face shortages of compute power that slow progress, or bear the expense of maintaining idle compute capacity.


By migrating its EDA workloads to AWS, Arm overcomes the constraints of traditionally managed EDA workflows and gains elasticity through massively scalable compute power, enabling it to run simulations in parallel, simplify telemetry and analysis, reduce its iteration time for semiconductor designs, and add testing cycles without impacting delivery schedules. Arm leverages Amazon Elastic Compute Cloud (Amazon EC2) to streamline its costs and timelines by optimizing EDA workflows across the wide variety of specialized Amazon EC2 instance types.
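
Matching each workflow stage to a suitable instance type is the core of that optimization: front-end simulation, timing analysis, and physical verification stress memory and compute very differently. A small sketch of such a mapping (the stage names and the EC2 instance families chosen here are illustrative assumptions, not Arm's actual configuration):

```python
# Illustrative mapping of EDA workflow stages to EC2 instance
# families -- these pairings are assumptions for the sketch,
# not Arm's published configuration.
STAGE_TO_INSTANCE = {
    "frontend_simulation":   "r6g",   # Graviton2, memory-optimized
    "gate_level_simulation": "c6g",   # Graviton2, compute-optimized
    "timing_analysis":       "x2gd",  # Graviton2, very large memory
    "physical_verification": "m6g",   # Graviton2, general purpose
}

def pick_instance_family(stage: str) -> str:
    """Return the instance family for a workflow stage, falling
    back to a balanced general-purpose family for unknown stages."""
    return STAGE_TO_INSTANCE.get(stage, "m6g")
```

In practice a service such as AWS Compute Optimizer (mentioned below) can replace a hand-maintained table like this with machine-learning-driven recommendations based on observed utilization.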

For example, the company uses AWS Graviton2-based instances to achieve high performance and scalability, resulting in more cost-effective operations than running hundreds of thousands of on-premises servers. Arm also uses AWS Compute Optimizer, a service that uses machine learning to recommend the optimal Amazon EC2 instance types for specific workloads, to help streamline its workflows.

On top of the cost benefits, Arm leverages the high performance of AWS Graviton2 instances to increase throughput for its engineering workloads, consistently improving throughput per dollar by over 40% compared with previous-generation x86-based M5 instances. In addition, Arm uses services from AWS partner Databricks to develop and run machine learning applications in the cloud. Through the Databricks platform running on Amazon EC2, Arm can process data from every step in its engineering workflows to generate actionable insights for the company's hardware and software groups and achieve measurable improvement in engineering efficiency.
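
Throughput per dollar is simply work completed divided by compute cost, so a gain can come from finishing more jobs per hour, paying less per hour, or both. A worked example with hypothetical figures (the job counts and hourly prices below are invented for illustration, not published benchmarks):

```python
def throughput_per_dollar(jobs_per_hour: float, price_per_hour: float) -> float:
    """Verification jobs completed per $1 of compute spend."""
    return jobs_per_hour / price_per_hour

# Hypothetical figures for illustration only -- not real benchmark data.
m5_tpd = throughput_per_dollar(jobs_per_hour=100, price_per_hour=1.00)  # x86 M5 baseline
g2_tpd = throughput_per_dollar(jobs_per_hour=112, price_per_hour=0.80)  # Graviton2

improvement = (g2_tpd - m5_tpd) / m5_tpd
print(f"{improvement:.0%}")  # prints "40%"
```

With these made-up numbers, a modest 12% throughput gain combined with a 20% lower hourly price compounds into a 40% improvement in throughput per dollar, which is the kind of combined effect the figure quoted above reflects.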

The president of Arm’s IP group, Rene Haas, said, “Through our collaboration with AWS, we’ve focused on improving efficiencies and maximizing throughput to give precious time back to our engineers to focus on innovation. Now that we can run on Amazon EC2 using AWS Graviton2 instances with Arm Neoverse-based processors, we’re optimizing engineering workflows, reducing costs, and accelerating project timelines to deliver powerful results to our customers more quickly and cost effectively than ever before.”

Peter DeSantis, senior vice president of global infrastructure and customer support at AWS, added, “Graviton2 processors can provide up to 40% price performance advantage over current-generation x86-based instances.”

