Power debugging: how it works and how you can benefit - Embedded.com


In this Product How-To, Shawn Prestridge describes the concept of power debugging and how it is used in IAR Embedded Workbench to evaluate the power profile of the Cortex-M3/M4 architecture.

As technology becomes more pervasive in our lives, we begin to have a love/hate relationship with our portable electronic devices. On the one hand, we love the portability and extensibility that mobile devices bring to our lives, but we abhor the fact that these cherished embodiments of productivity keep us constantly searching for power outlets during the day in order to keep them alive and us connected.

In the past, only the hardware engineers of these devices had much say in curbing their voracious energy requirements; today, that capability is increasingly returning to software engineers as they look for ways to lower the power footprint of their designs. This is the crux of power debugging: giving the software engineer the ability to see how coding decisions affect the power profile of the device.

Power debugging lets the software engineer see how much power the board is consuming and marries that information to the application's source code, providing the contextual clues needed to determine where most of the power in the application is being consumed.

By doing statistical power profiling of your application, you can determine the minimum, maximum, and average power consumption of the device. This assists in hardware design: you can size bulk capacitors correctly if your design requires them, choose the mAh rating of your batteries to achieve the desired time between charges, and so on.
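As a sketch of how such a profile feeds into those hardware decisions, the snippet below summarizes a list of current samples and turns the average into a rough battery-life figure. The sample values and the 1000 mAh capacity are invented for illustration, not taken from any real capture.

```python
def summarize_current(samples_mA):
    """Return (min, max, average) current in mA for a set of samples."""
    return min(samples_mA), max(samples_mA), sum(samples_mA) / len(samples_mA)

def battery_life_hours(avg_current_mA, capacity_mAh):
    """Rough battery-life estimate: capacity divided by average draw."""
    return capacity_mAh / avg_current_mA

# Hypothetical capture: a quiet ~2 mA baseline with two ~48 mA bursts.
samples = [2.1, 2.3, 48.0, 2.2, 2.4, 47.5, 2.2]
lo, hi, avg = summarize_current(samples)
print(f"min={lo} mA  max={hi} mA  avg={avg:.1f} mA")
print(f"~{battery_life_hours(avg, 1000):.0f} h from a 1000 mAh cell")
```

The maximum tells you what the supply and bulk capacitors must tolerate; the average is what sets time between charges.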

At first blush, it may seem that only battery-driven applications can benefit from this technology, but in reality virtually every design stands to gain.

As natural resources become scarcer, it behooves the responsible engineer to design products that consume as little power as possible, particularly when tools are readily available to help make the minor changes that minimize a product's carbon footprint.

There are two basic types of measurement in power debugging: board-level and chip-level. Board-level measurement is easy to achieve: simply measure the power supplied to the development or production board. It therefore includes all of the peripherals on the board.

This is simultaneously a boon and a bane: the reading captures the whole system, but it can be distorted by non-relevant components, and its response to changes in power is slow because of the voltage regulators and bulk capacitors on the board.

Conversely, chip-level measurement has a very fast response time, so you can easily see how code changes affect the MCU's power consumption. However, it can be difficult to gain access to the Vdd pins of the MCU because of the fine trace pitch on the part, and you will need more channels on your power-debugging tool to measure the effects of the other system components. The measurement setup has the equivalent circuit shown in Figure 1 below.

Figure 1. Power measurement equivalent circuit
If the clock frequency of the MCU is 100MHz, the time per instruction is 10ns. Even with R equal to 1 Ohm, measuring power at individual-instruction resolution would therefore require a capacitance of only 10nF, which is unfortunately not realistic: most manufacturers specify the capacitance at the Vdd pins to be approximately 10uF, which gives a corresponding measurement time of 10us (a 100kHz power-measurement rate).

Moreover, rise and fall times in this circuit are measured between the 10% and 90% points, so the time required for a measurement is roughly twice the 10us stated previously, giving a measurement frequency of about 50kHz, which is realizable.
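The bandwidth argument above can be sketched numerically. The R and C values are the article's (R = 1 Ohm, decoupling C = 10 uF); the factor of 2 is the roughly doubled settling time from the 10%-to-90% convention.

```python
def measurement_bandwidth_hz(r_ohm, c_farad, rise_factor=2.0):
    """Achievable power-measurement rate for a shunt R into decoupling C."""
    tau = r_ohm * c_farad        # RC time constant, in seconds
    t_meas = rise_factor * tau   # time needed for one usable measurement
    return 1.0 / t_meas

# Instruction-level sampling would need C = 10 nF (tau = 10 ns)...
print(measurement_bandwidth_hz(1.0, 10e-9))   # 50 MHz - not realistic
# ...but with the typical 10 uF at the Vdd pins:
print(measurement_bandwidth_hz(1.0, 10e-6))   # 50 kHz - realizable
```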

Optimizing code to minimize power consumption is much like optimizing for speed: the faster a task executes, the more time the MCU can spend in a low-power state waiting to perform the next task.

It is not uncommon for MCUs to have standby power consumption in the uA range, so optimizing the code for speed can extend the battery life of an application by orders of magnitude by allowing the MCU to spend most of its time in a low-power state.
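A simple duty-cycle model shows why this works. The current figures below are assumptions for a hypothetical part (20 mA running, 2 uA in standby), not measurements.

```python
def avg_current_uA(active_uA, sleep_uA, duty):
    """Average current when the MCU is active for fraction `duty` of the time."""
    return duty * active_uA + (1 - duty) * sleep_uA

# Hypothetical MCU: 20 mA (20000 uA) active, 2 uA in standby.
slow = avg_current_uA(20000, 2, 0.50)   # unoptimized task: active half the time
fast = avg_current_uA(20000, 2, 0.01)   # optimized task: active 1% of the time
print(f"{slow:.0f} uA vs {fast:.0f} uA -> ~{slow / fast:.0f}x battery life")
```

Cutting active time from 50% to 1% improves average draw by nearly two orders of magnitude, because the standby current is so small that the active fraction dominates.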

It is possible, however, for entering a low-power state to cost more power than staying in a normal run mode. Consider the following case: you turn the USB peripheral on to do a bulk transfer and then immediately turn it off to save power.

This may seem like a good general strategy for minimizing your power footprint, but if your processor is constantly interrupted to perform another transfer, you may actually consume more power by repeatedly powering up the USB peripheral and waiting for it to stabilize and enumerate than by leaving it operational all of the time. By using power debugging, you can test different coding strategies to see which ones reduce your power consumption.
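A toy energy comparison makes the trade-off concrete. All figures are invented for illustration: a transfer every 100 ms lasting 5 ms at 50 mW, a 20 ms stabilization/enumeration cost at 60 mW on each power-up, and a 10 mW idle draw when the peripheral simply stays on.

```python
def cycled_energy_mJ(xfer_s, xfer_mW, wake_s, wake_mW):
    """Energy per period when the peripheral is powered up for each transfer."""
    return xfer_s * xfer_mW + wake_s * wake_mW

def always_on_energy_mJ(period_s, xfer_s, xfer_mW, idle_mW):
    """Energy per period when the peripheral is simply left enabled."""
    return xfer_s * xfer_mW + (period_s - xfer_s) * idle_mW

cycled = cycled_energy_mJ(0.005, 50, 0.020, 60)          # power-cycle each time
on = always_on_energy_mJ(0.100, 0.005, 50, 10)           # leave it running
print(f"power-cycled: {cycled:.2f} mJ/period, always-on: {on:.2f} mJ/period")
```

With these (assumed) numbers, power-cycling loses: the repeated wake-up cost exceeds the idle draw. Stretch the period out far enough and cycling wins again, which is exactly the kind of crossover power debugging lets you find empirically.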

In general, identifying unnecessary power consumption in an application can be difficult. Occasionally a simple bug causes excessive drain: most hardware ties unused GPIO pins to ground, but if a software glitch drives one of those pins to a 1, the resulting drain can be as high as 25mA.

By analyzing the power timeline of your application, you can narrow down where in the source code the power spike begins. More often, minimizing power takes the subtler form of identifying places in the code where you can tune how the hardware is utilized to make it more efficient.

A good example of this is using DMA vs. polled I/O since many modern architectures give you the ability to put the MCU to sleep while a DMA transfer is taking place, thus saving quite a bit of power that would be consumed staying in a tight polling loop waiting for I/O on a pin. Power debugging gives the software engineer the ability to quantify how much power is being saved by choosing one technique over another.
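A back-of-the-envelope comparison quantifies the DMA-vs-polling trade-off above. The currents are assumptions for a hypothetical MCU: 20 mA while the core spins in a polling loop, 3 mA with the core asleep while the DMA controller moves the data.

```python
def transfer_charge_uC(duration_s, current_mA):
    """Charge consumed during a transfer, in microcoulombs (mA * s * 1000)."""
    return duration_s * current_mA * 1000

polled = transfer_charge_uC(0.010, 20.0)  # core busy-waits for a 10 ms transfer
dma = transfer_charge_uC(0.010, 3.0)      # core sleeps; DMA moves the data
print(f"polled: {polled:.0f} uC, DMA+sleep: {dma:.0f} uC")
```

Under these assumptions the DMA approach spends well under a fifth of the charge per transfer; the power log in the debugger is what tells you the real figures for your part.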

The power consumed by an MCU is theoretically given by the formula P = fkU², where f is the clock frequency of the MCU, U the supply voltage, and k a constant. By this formula, power consumption is linear in clock frequency, so reducing the frequency should give a corresponding reduction in the application's power consumption.

However, increasing the frequency of the part will increase the time spent in a low-power mode, since the device will complete its task more quickly. Power debugging gives you the ability to experiment with the MCU's clock-frequency settings to see which ones minimize your power footprint.
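The P = fkU² relationship is easy to explore as a what-if. Here k is an arbitrary constant, so only the ratios matter; the frequencies and voltages are illustrative values, not vendor data.

```python
def dynamic_power(f_hz, u_volts, k=1.0):
    """Theoretical dynamic power P = k * f * U^2 (arbitrary units)."""
    return k * f_hz * u_volts ** 2

base = dynamic_power(100e6, 3.3)            # 100 MHz at 3.3 V
half_f = dynamic_power(50e6, 3.3)           # halving the clock halves power...
half_f_low_u = dynamic_power(50e6, 1.8)     # ...lowering Vdd helps far more
print(f"{half_f / base:.2f}x and {half_f_low_u / base:.2f}x of baseline power")
```

Because voltage enters squared, dropping from 3.3 V to 1.8 V at half the clock brings power to roughly 15% of baseline, which is why frequency and voltage scaling are usually tuned together.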

The way power debugging works in IAR Embedded Workbench differs slightly between architectures. For example, consider the Cortex-M3/M4 architecture. The CoreSight resources it contains look like what is shown in Figure 2 below.

Figure 2. Cortex M3/4 architecture CoreSight resources
The DWT (Data Watchpoint and Trace) unit allows the J-Link hardware debugger to sample the core's Program Counter around 5,000 times per second, and each sample triggers an ITM (Instrumentation Trace Macrocell) packet that formats and timestamps the information.

When the J-Link receives this ITM packet, it measures the instantaneous current with an ADC and transfers it to the C-SPY debugger in Embedded Workbench, where it is logged in two formats: a textual log that can be exported and plotted by another tool, and a timeline view that shows the power spikes graphically, as in Figure 3 below.

Figure 3. Timeline view of processor power spikes.
Double-clicking a spike or a log entry takes you to the spot in the source code where the sample was taken, so you know exactly what was happening at that moment. Additionally, Embedded Workbench allows you to set power breakpoints, which halt execution on the MCU when power consumption exceeds a certain threshold.

You can also filter the power data against a threshold value, logging only the data above or below it. As you can see, Embedded Workbench gives the engineer many ways to harness power information.

As the desire to go green becomes more palpable around the world, we will see correspondingly more interest from engineers in ecologically sound designs, particularly where the carbon footprint of their products is concerned.

Power debugging gives engineers the ability to see how their code decisions impact the power profile of their creation, offers contextual clues for alleviating excessive power consumption by locating problem spots within the code, and lets them try out different design strategies.

Shawn Prestridge has served as IAR Systems' Senior Field Applications Engineer since 2008. Shawn has worked in the software industry since 1993; prior to joining IAR Systems, he held the position of Embedded Hardware/Software Engineer with Texas Instruments and did embedded development as the owner of Ministry of Software. Shawn's research interests are primarily focused in cryptology, and he specializes in large number theory, quantum cryptography, elliptic curve cryptography, number field sieve computing, and communication encryption. Shawn's degree work includes a BS in Electrical Engineering, a BS in Mathematics, an MS in Electrical Engineering, an MS in Software Engineering, and a PhD in Electrical Engineering specializing in quantum cryptography, all from Southern Methodist University in Dallas.
