Software-driven power analysis

July 10, 2019

By jstahl

Power tends to cost; high power costs highly. This rather forced adaptation of Lord Acton’s famous quote captures two important aspects of semiconductor design and power consumption. Looking at average power consumption over time, it is clear that a chip with high power draw will incur high costs. In portable devices, more power means either larger and more expensive batteries, or shortened battery life. Further, more power means more advanced and more expensive packaging to dissipate the resulting heat. These three factors also have ripple-effect costs in terms of product pricing, profit margins, and likelihood of success in the market.

Concerns over power consumption extend well beyond portable devices that run at least part of the time on batteries. Wall-powered devices also incur extra costs in terms of packaging, power supplies, and power distribution systems. These same issues extend all the way to server farms, with their racks of compute servers, massive data storage arrays, and network switches. The operational costs for server farms are enormous; studies have shown that the bills for power exceed the price of the hardware itself over the lifetime of each server. Server farms may be located near hydroelectric dams or massive solar arrays in an attempt to meet their high demands. Some locations must also meet “green laws” that regulate server power draw.

At the high end, excessive power consumption may require liquid cooling systems that add enormous infrastructure and associated costs. For all these reasons, reducing average power consumption is a goal in nearly all semiconductor projects, regardless of the end market. When considering peak power, reduction may be a critical need rather than just a goal. Some chips are designed so that only certain portions may be running at the same time. In such cases, turning on all functionality may require more current draw than the device can handle, resulting in thermal breakdown and permanent damage.

Challenges of power analysis

Given all the motivation to limit power consumption, the industry has developed a wide variety of low-power design techniques. These range from layout-level circuit tweaks to system-level, application-aware, software-based power management. Whatever techniques are used, it is very valuable to be able to accurately assess their impact by estimating both average and peak power consumption during the design and verification of the chip under development. It is unacceptable to wait until after fabrication to find that the average power is too high for a viable product or that the peak power draw destroys the chip. Effective pre-silicon power analysis, preferably at multiple stages in the project, is required.

The electronic-design-automation industry’s traditional approach to power analysis relies on simulation. Functional verification of the chip entails developing a testbench and then writing or generating a suite of tests that check each function or feature of the chip design. It is a relatively simple matter to simulate the entire test suite, or perhaps only a representative portion, and feed the results into a traditional power signoff tool. Since most power consumption occurs only when circuits switch state, the simulator can provide a switching activity file to a power signoff tool. When combined with the power characteristics in the library for the target technology, the tool can provide a fairly accurate estimate for both average and peak power consumption.
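The per-net calculation behind this flow can be illustrated in a few lines. The sketch below is a hypothetical simplification, not the algorithm of any particular signoff tool: it combines a toggle rate (as would come from a switching activity file) with a net capacitance (as would come from the technology library) using the standard dynamic power relation P = α·C·V²·f. All names and numeric values are invented for illustration.

```python
# Illustrative sketch: how switching activity plus library capacitance
# yields an average dynamic power estimate. Hypothetical values only.

def dynamic_power_mw(nets, vdd=0.8, freq_hz=1.0e9):
    """Sum per-net dynamic power: P = alpha * C * Vdd^2 * f.

    nets: list of (alpha, cap) pairs, where alpha is the toggle rate
    (toggles per clock cycle, from the activity file) and cap is the
    net capacitance in farads (from the library).
    """
    total_w = sum(alpha * cap * vdd ** 2 * freq_hz for alpha, cap in nets)
    return total_w * 1e3  # watts -> milliwatts

# Two hypothetical nets: a busy clock-like net and a quiet data net.
example_nets = [(0.9, 5e-15), (0.1, 2e-15)]
print(round(dynamic_power_mw(example_nets), 3))  # prints 0.003
```

A real tool performs this accumulation across millions of nets, with state-dependent and leakage terms added, but the dependence on switching activity is the same, which is why the accuracy of the result hinges on how representative the simulated activity is.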

This accuracy, however, is entirely relative to the tests that are run in simulation. In practice, no verification test suite is representative of chip operation with production software running. Tests designed for functional verification, by intent, focus on stimulating only those areas of the design needed for the targeted feature. Constrained-random testbenches can generate more parallel activity but are still unlikely to model real-world usage. Truly accurate power analysis can be performed only by using the switching activity from real software workloads, including user applications running on top of an operating system (OS).

It typically takes a few billion clock cycles to boot an OS, start system services, and run applications. This would be completely impractical to run in simulation. In contrast, emulators routinely run billions of cycles, from OS boot to multiple user applications running in parallel. Emulation exercises just the sort of real software workloads needed to perform high-accuracy power analysis. The challenge is that power signoff tools are designed to handle thousands of cycles, not millions, and most certainly not billions. A new methodology is required to identify a few areas of high activity in the emulation run and focus on using only these windows for power analysis (Figure 1).

Figure 1. Power analysis using power windows (Source: Synopsys)

Moving to software-driven power analysis

The first requirement for the flow shown in Figure 1 is for the emulator to produce a profile showing which parts of the design are active over time. This activity profile can be viewed as a graph within a waveform viewer or other hardware debug tool. Since power signoff cannot be performed on billions of cycles, the next step is for users to leverage the activity profile to identify one or more power-critical windows where activity, and therefore likely power consumption, is highest. If each of these windows is in the millions of cycles, it can be used for the next stage of power analysis. As a benchmark, the emulator should be able to produce an activity profile for one billion cycles of software workload in three hours.
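The window-selection step amounts to a sliding-window search over the activity profile. The sketch below is a simplified, hypothetical illustration of that idea (not a description of any vendor's tool): given per-interval toggle counts from the emulator, it finds the fixed-length window of consecutive intervals with the highest total activity. The profile values and window size are invented.

```python
# Hedged sketch: locating the busiest fixed-length window in an
# activity profile. Profile values and window size are hypothetical.

def peak_window(profile, window):
    """Return (start_index, total_activity) of the busiest window
    of `window` consecutive intervals in `profile`."""
    best_start = 0
    best_sum = current = sum(profile[:window])
    for i in range(window, len(profile)):
        current += profile[i] - profile[i - window]  # slide by one interval
        if current > best_sum:
            best_start, best_sum = i - window + 1, current
    return best_start, best_sum

# Toggle counts per interval (hypothetical), searched with a 3-interval window.
profile = [10, 12, 50, 80, 75, 20, 15, 60, 65, 30]
print(peak_window(profile, 3))  # prints (2, 205)
```

In practice a user would extract the top few such windows rather than just one, then hand each multi-million-cycle window to the power signoff tool.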

