Trends driving the future of high-performance computing (HPC)

Synopsys executives share their top predictions for the year ahead: the new markets HPC will enter, such as edge HPC; the growing importance of security; and the evolving architectures, such as 3DIC and chiplets, that will support HPC applications.

In 2021 we witnessed how the Covid-19 pandemic was affecting the high-performance computing (HPC) and data center industries in real time. While everyone learned to master the art of working and studying remotely, the demand for more computing power and less latency pushed the growth trajectory for the HPC market even further than experts would have predicted back in early 2020.

As the dust has settled on the “new normal,” we’ve become accustomed to new ways of working, learning, and interacting. This flexibility, along with more advanced approaches to data processing and data sharing, will continue into next year, making everyone more productive, information more accessible, and collaboration more seamless. As a result, we will continue to see the HPC market evolve and expand, with more industries requiring highly interconnected silicon architectures as well as high-speed networking.

Here are our top predictions on the new markets HPC will enter this year, the increasing importance of security, and the evolving architectures that will support HPC applications.

Last year, we saw HPC being used to create vaccines that protect against Covid-19. We’ll continue to see it being used in medical research and monitoring, but HPC will also extend into even newer markets in 2022.

Scott Durrant: We’re seeing an increase in the number of catastrophic climate events in the United States and throughout the world, and it’s becoming more important to be able to forecast those events to protect people from them. That application is going to get a lot of focus in the HPC space in the coming year. In addition, we’re going to see a lot more use of HPC for consumer-oriented applications, driven by the availability of HPC in the cloud. Historically, high-performance data centers have been isolated and available only to research organizations, governments, and companies with very large budgets. We’ll also start to see development of virtual worlds, or the “metaverse” as it has recently come to be called, both for recreation, such as gaming with augmented and virtual reality, and for simulations such as digital twins.

Ruben Molina: You can make the case that every couple of years, what used to be thought of as HPC becomes mainstream. I predict that HPC at the edge is going to be more the rule than the exception. The industrial sector is going to use HPC for applications in robotics, vision systems, and preventive maintenance and monitoring, such as predicting failures on assembly lines: essentially, all industrial areas that need computing power right where the devices are deployed in order to reduce downtime.

Susheel Tadikonda: The HPC market is expanding with new types of work, adding artificial intelligence (AI) and data analytics to traditional simulation and modeling. The Covid-19 pandemic has emphasized the need for flexible and scalable HPC solutions in the cloud. This, along with the increasing need across industry verticals (life sciences, automotive, finance, gaming, manufacturing, aerospace, etc.) for faster data processing with higher levels of accuracy, will be a major factor driving the growth of HPC adoption in the coming years. Technologies such as AI, edge computing, 5G, and Wi-Fi 6 will broaden the capabilities of HPC, leading to new chip/system architectures that deliver high processing and analytical capabilities to various sectors.

Increasing HPC security is vital for new designs

The amount of data processed next year will increase exponentially, and so will the value and sensitivity of that data. Making sure that security is an essential component (rather than an afterthought) when designing HPC components will be one of the top challenges engineers face this year and every year moving forward.

Susheel Tadikonda: HPC systems contain highly customized hardware and software stacks that are tuned for performance, power efficiency, and interoperability. Designing and securing such systems, with their own use modes and distinctive components and attributes, is different from securing other types of general-purpose computing systems. Security threats are not limited to network/storage data compromise; they also include side-channel attacks such as inferring data patterns from power states, emissions, and processor wait times. We will see a lot more innovation around memory and storage technology, intelligent interconnects, silicon-enabled security, and cloud security to efficiently manage massive data volumes. Security verification and validation will be one of the most critical parts of security assurance, spanning the architecture, design, and post-silicon phases of the system lifecycle.
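
To make the side-channel point concrete, here is a minimal sketch (in Python, purely illustrative) of how a secret comparison that exits early leaks information through execution time, and how a constant-time comparison removes that signal. The secret value and timing harness are assumptions made for this example; real side-channel analysis also covers power states and emissions, which software alone cannot demonstrate.

```python
# Minimal sketch of a timing side channel: a byte-by-byte comparison that
# returns early reveals, through its run time, how many leading bytes of a
# guess are correct. The secret and harness below are illustrative only.
import hmac
import time

SECRET = b"hpc-secret-token"   # hypothetical secret held by a device


def leaky_compare(guess: bytes) -> bool:
    """Returns on the first mismatch, so run time grows with the number of
    correct leading bytes: an observable timing side channel."""
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
    return True


def constant_time_compare(guess: bytes) -> bool:
    """Compares every byte regardless of mismatches, removing the timing signal."""
    return hmac.compare_digest(guess, SECRET)


def time_guess(fn, guess: bytes, trials: int = 100_000) -> float:
    """Measures total time for many comparisons of the same guess."""
    start = time.perf_counter()
    for _ in range(trials):
        fn(guess)
    return time.perf_counter() - start


if __name__ == "__main__":
    wrong_everywhere = b"x" * len(SECRET)    # no correct leading bytes
    wrong_at_the_end = b"hpc-secret-tokeX"   # 15 correct leading bytes
    for fn in (leaky_compare, constant_time_compare):
        t1 = time_guess(fn, wrong_everywhere)
        t2 = time_guess(fn, wrong_at_the_end)
        print(f"{fn.__name__}: {t1:.3f}s vs {t2:.3f}s")
```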

Scott Durrant: There’s a significant increase in the importance of securing information, protecting the confidentiality and integrity of data, and providing access controls to data. We’ve seen over this past year the sorts of problems that ransomware and other cyberattacks can cause. The number of attacks will grow as the data in the infrastructure becomes more valuable, so providing security from the hardware upward through all levels of the stack to protect that information is going to be more and more important.

Scott Knowlton: The zero-trust framework will also see broader adoption. This means that people who want access to data need to validate who they are and prove that they are authorized to access it. We expect that to ramp up even more over the next year or so. In fact, we’re already seeing the underpinnings of some of the necessary hardware. Additionally, we’ll see embedded roots of trust in each of the elements within an infrastructure. These allow the elements to authenticate one another and ensure, before data is shared with another device, that the device is authorized to use and process that data.
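
As a rough illustration of the challenge-response idea behind such roots of trust, the following sketch shows a verifier that shares data with a device only after the device proves possession of its key. The device names, symmetric keys, and HMAC scheme are illustrative assumptions only; real hardware roots of trust typically rely on asymmetric keys, certificates, and measured boot.

```python
# Minimal sketch of challenge-response device attestation, loosely modeled on
# the zero-trust / root-of-trust idea described above. All names and the
# HMAC-based scheme are assumptions for illustration.
import hashlib
import hmac
import secrets

# Hypothetical per-device secrets, standing in for keys provisioned into a
# hardware root of trust at manufacturing time. The verifier holds a registry.
DEVICE_KEYS = {
    "edge-node-01": secrets.token_bytes(32),
    "edge-node-02": secrets.token_bytes(32),
}


def issue_challenge() -> bytes:
    """Verifier generates a fresh random nonce so responses cannot be replayed."""
    return secrets.token_bytes(16)


def attest(device_id: str, device_key: bytes, nonce: bytes) -> bytes:
    """Device answers the challenge with an HMAC over its identity and the
    nonce, computed with the key only its root of trust holds."""
    return hmac.new(device_key, device_id.encode() + nonce, hashlib.sha256).digest()


def verify(device_id: str, nonce: bytes, response: bytes) -> bool:
    """Verifier recomputes the expected response and compares in constant time."""
    key = DEVICE_KEYS.get(device_id)
    if key is None:
        return False  # unknown device: zero trust means no implicit access
    expected = hmac.new(key, device_id.encode() + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)


if __name__ == "__main__":
    nonce = issue_challenge()
    # Data is shared only after a device proves possession of its key.
    response = attest("edge-node-01", DEVICE_KEYS["edge-node-01"], nonce)
    print("share data with edge-node-01?", verify("edge-node-01", nonce, response))
    print("share data with unknown device?", verify("rogue-node", nonce, response))
```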

Ruben Molina: The more digitization that happens across markets, the more opportunities there are for security risks. Because more high-performance computation is moving farther from the data center, there will be a growing number of attack opportunities that can’t be completely mitigated with software patches. This is going to put a lot of pressure on design teams to rush out hardware to solve these problems, which will result in accelerated hardware design cycles. Increasing designer productivity to keep pace with time-to-market demands is going to be a critical need.

Explosion of disaggregated architectures and heterogeneous systems

As the amount of data increases, it’s not just security that needs to be considered. Storage infrastructure will have to grow, as will the compute capability to process that data. New architectures, including 3DIC and die-to-die connectivity, are necessary to meet the latest requirements.

Susheel Tadikonda: The HPC architectural landscape is going through a seismic shift, and the driving factors are evolving AI workloads, flexible computing (CPU, GPU, FPGA, DPU, etc.), cost, memory, and I/O throughput. Progress at the microarchitectural level includes faster interconnects, higher computing densities, scalable storage, greater infrastructure efficiency, eco-friendliness, space management, and improved security. From a system perspective, next-generation HPC architectures will see an explosion of disaggregated architectures (decoupling memory from processors and accelerators) and heterogeneous systems, where different specialized processing architectures (FPGA, GPU, CPU, etc.) are integrated in a single node, allowing flexible switching between modules at a fine-grained scale. A key ingredient for this kind of integrated system is the use of “chiplets.” Such complex systems pose a big verification challenge, especially in IP/node-level verification in a system context, dynamic hardware-software orchestration, and workload-based performance and power analysis. This will require a push for novel hardware-software verification approaches.

Scott Durrant: One of the challenges that system managers face today is that moving data around takes a lot of power and time, both of which are in limited supply. Moving the processing closer to the data to reduce the amount of data movement is a trend that we’ll see accelerate in 2022. Along with that is the need to continue scaling resources. One of the mechanisms that I think will really advance in the coming year is the use of advanced packaging and die-to-die interfaces to support higher-performance devices; that is, scaling the processing capability within a device through the use of multiple die.

Ruben Molina: In addition to reducing latency by moving data closer to the processing elements, multi-die integration also allows compute power to scale by combining multiple die in a single package without the cost of bleeding-edge process technologies. To make this happen, designers need the ability to floorplan, route, and analyze the timing and power of multiple chips inside the package. Another method for scaling compute power is the customization of compute architectures for specific tasks. Companies are already starting to do this for network processors and graphics applications, but it takes a lot of upfront architectural exploration to get it right in the RTL, and that is putting a lot of focus on tools that can enable these tradeoffs early in the design cycle.

Scott Knowlton: We’re also seeing disaggregation of architectures. Approaches like 3DIC are becoming key to enabling designers to put different die and packages together to handle specific computational paths. Designers can now build packages using 3DIC and die-to-die connectivity, and then extrapolate from that component out to the machine level, where we are seeing disaggregation of memory systems. This creates unique opportunities for different types of designs and architectures to handle specific workflow tasks.


Scott Durrant - Synopsys

Scott Durrant is the cloud computing and HPC strategic marketing manager for the solutions group at Synopsys, responsible for driving cloud and HPC market IP initiatives, including strategic business and market trend analysis. Prior to Synopsys, Scott worked for Intel, NAVEX Global, and McAfee, where he held product management, product and technical marketing, software and systems engineering, and strategic planning roles for enterprise data center and cloud products and services, including networking and server IP, chip, and system-level products. Scott holds a B.S. in electrical engineering and an MBA from Brigham Young University.

Susheel Tadikonda - Synopsys

Susheel Tadikonda is vice president of engineering in the systems design group at Synopsys. His team is responsible for creating validation solutions across the platforms (simulation/emulation/prototyping) for industry segments like 5G, networking, storage, automotive, and AI. Prior to joining Synopsys, Susheel worked at Intel, Cisco, AMCC, and a host of startup companies, where he held leadership roles in chip design, verification, and system validation. His technical areas of interest include architecture, design, and verification of complex multi-core processors, SoCs, network processors, switches, and fabrics; emulation/prototyping solutions; systems hardware; and software bring-up.

Ruben Molina - Synopsys

Ruben Molina is a strategic programs director in the silicon realization group at Synopsys. Prior to joining Synopsys, he held marketing director positions at Cadence, Magma Design Automation, and Extreme DA, where he was responsible for directing business development and product marketing for static timing analysis and parasitic extraction products. He has also held senior management and technical positions at LSI Logic in the area of design methodology, and spent several years as an IC designer for Hughes Aircraft, Radar Systems Group. Molina holds a BS in engineering and an MSEE from California State University, Los Angeles. He is the co-author of seven U.S. patents.

Scott Knowlton - Synopsys

Scott Knowlton is director of the segment marketing team in the solutions group at Synopsys. He began his IP career at Synopsys in 1997 and, over the last 25 years, has seen tremendous changes in the semiconductor industry’s perception and adoption of IP while handling product marketing for CXL, PCI Express, CCIX, SATA, and other IP in the DesignWare IP portfolio. Prior to joining Synopsys, he held positions at Cadence, Nebula Systems, Encore Computer, and Raytheon as a chip designer. He is the co-chair of the PCI-SIG marketing work group and was the chair of the CCIX marketing workgroup. Scott received a B.S. in electrical engineering from the University of Michigan.

