Whether you are creating an operating system, firmware, or even device drivers, the way you write the software could affect the power consumption of the resulting product. Here are four approaches to minimizing power consumption through software.
More than a century ago, American civil engineer Arthur Wellington coined a pithy, tongue-in-cheek definition of our profession that still rings true today: “Engineering is the art of doing that well with one dollar, which any bungler can do with two….” In other words, engineering is the pursuit of balance between quality and efficiency.
As embedded software engineers, we need to strike that balance between quality and efficiency. To do so, we optimize our software's performance so that it can run on slower, less-expensive processors. We trim our software's memory footprint so we can use smaller, less-expensive memories. And increasingly, with many of us now writing software for handheld and wireless devices, we optimize our software's power consumption to extend the life of small, cheap power sources.
The good news is that, whether you're creating the operating system, peripheral drivers, or application firmware, a variety of software design techniques can reduce power consumption. In this article, we'll focus on four of these.
Intelligent waiting

Many of the latest embedded processors include run-time power modes that can be used to scale power consumption. The most common of these is idle mode, in which the instruction-executing portion of the processor core shuts down while all peripherals and interrupts remain powered and active. Idle mode consumes substantially less power than when the processor is actively executing instructions.
A key aspect of idle mode is that it requires little overhead to enter and exit, usually allowing it to be applied many times every millisecond. Any time the operating system detects that all threads are blocked, waiting on an interrupt, event, or timeout, it should place the processor into idle mode to conserve power. Since any interrupt can wake the processor from idle mode, use of this mode enables software to intelligently wait for events in the system. For maximum power efficiency, however, this tool requires that we design our software carefully.
We have all written code that polls a status register and waits until a flag is set. Perhaps we're checking a FIFO status flag in a serial port to see if data has been received. Or maybe we're monitoring a dual-ported memory location to see if another processor or device in the system has written a variable, giving us control of a shared resource. While seemingly benign, polling a register in a loop represents a missed opportunity to extend battery life on handhelds.
The better solution is to use an external interrupt to signal when the status flag has changed. In a single-threaded software environment, you can then invoke the processor's idle mode to reduce power consumption until the actual event occurs. When the interrupt occurs, the processor automatically wakes up and continues executing your code.
Idle mode can even be used in cases where the event cannot be directly tied to an external interrupt. In these situations, using a system timer to periodically wake the processor is still preferable to polling. For instance, if you are waiting for an event and know you can process it quickly enough as long as you check its status every millisecond, enable a 1ms timer and place the processor into idle mode. Check the event's status every time the interrupt fires; if the status hasn't changed, you can return to idle mode immediately.
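The contrast between busy polling and intelligent waiting can be sketched in C. Everything here is simulated: `cpu_idle()` is a stand-in for the processor's idle instruction, and the "interrupt" is modeled by setting a flag from inside it, so the point is the shape of the two waiting styles, not any real hardware API.

```c
#include <assert.h>
#include <stdbool.h>

static volatile bool rx_ready;   /* would be set by the RX interrupt handler */
static int idle_entries;         /* counts how often we entered idle mode */

/* Stand-in for the core's idle mode: in this simulation, "idling" simply
   advances to the next event by raising the flag an interrupt would set. */
static void cpu_idle(void) {
    idle_entries++;
    rx_ready = true;
}

/* Busy polling: the core executes instructions the whole time it waits,
   burning power to discover, over and over, that nothing has happened. */
static void wait_polling(void) {
    while (!rx_ready) { /* spin */ }
}

/* Intelligent waiting: sleep in idle mode between checks; any interrupt
   (the event itself, or a periodic timer) wakes the core to re-test. */
static void wait_idle(void) {
    while (!rx_ready)
        cpu_idle();
}
```

Both functions return under exactly the same condition; only the power drawn while waiting differs.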
This type of waiting mechanism is very common. The vast majority of today's PDAs and smart phones are powered by processors and operating systems that have idle-mode capabilities. In fact, most of these devices hop into and out of idle many times per second, awakened whenever a touchscreen tap, keypress, or timeout occurs.
Event reduction

Another technique to consider is event reduction. Whereas intelligent waiting enables the processor to enter its idle mode as often as possible, event reduction attempts to keep the processor in idle as long as possible. It is implemented by analyzing your code and system requirements to determine if you can alter the way you process interrupts.
For example, if you are working with a multitasking operating system that uses time-slicing to schedule threads, the operating system will typically set a timer interrupt to occur at the slice interval, which is often as small as 1ms. Assuming your code makes good use of intelligent waiting techniques, the operating system will frequently find opportunities to place the processor into idle mode, where it stays until it's awakened by an interrupt. Of course, in this scenario, the interrupt most likely to awaken the processor is the timer interrupt itself. Even if all other threads are blocked, pending other interrupts, internal events, or long delays, the timer interrupt will wake the processor from idle mode 1,000 times every second to run the scheduler.
Even if the scheduler determines that all threads are blocked and quickly returns the processor to idle mode, this frequent operation can waste considerable power. In these situations, the time-slice interrupt should be disabled when idle mode is entered, allowing the processor to wake only when another interrupt occurs.
Of course, it is usually inappropriate to disable the time-slice interrupt altogether. While most blocked threads may be waiting-directly or indirectly-on external interrupts, some may have yielded to the operating system for a specific time period. A driver, for instance, might sleep for 500ms while waiting for a peripheral. In this case, completely disabling the system timer on idle might mean the thread doesn't resume execution on time.
Ideally, your operating system should be able to set variable timeouts for its scheduler. The operating system knows whether each thread is waiting indefinitely for an external or internal event or is scheduled to run again at a specific time. The operating system can then calculate when the first thread is scheduled to run and set the timer to fire accordingly before placing the processor in idle mode. Variable timeouts do not impose a significant burden on the scheduler and can save both power and processing time.
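The variable-timeout calculation can be sketched as a scan over per-thread deadlines. The names and the tick representation here are illustrative, not taken from any particular RTOS: a thread blocked indefinitely on an event carries a sentinel deadline, and the scheduler programs the timer for the earliest real deadline, or disables it entirely when none exists.

```c
#include <assert.h>
#include <limits.h>

/* Sentinel for a thread blocked on an event rather than a timeout. */
#define NO_DEADLINE UINT_MAX

/* Return the tick at which the scheduler's timer should next fire, or
   NO_DEADLINE if every thread is waiting on an interrupt, in which case
   the time-slice timer can be disabled before entering idle mode. */
static unsigned next_wakeup(const unsigned *deadline, int nthreads) {
    unsigned earliest = NO_DEADLINE;
    for (int i = 0; i < nthreads; i++)
        if (deadline[i] < earliest)
            earliest = deadline[i];
    return earliest;
}
```

The scan is linear in the number of threads (a real scheduler might keep a sorted timer queue instead), which is why variable timeouts impose so little burden on the scheduler.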
But variable scheduling timeouts are just one means of reducing events. Direct memory access (DMA) allows the processor to remain in idle mode for significant periods even while data is being sent to or received from peripherals. DMA should be used in peripheral drivers whenever possible. The savings can be quite impressive.
For example, the receive FIFO on a serial port of Intel's StrongARM processor generates an interrupt for approximately every eight bytes that are received. At 115,200 bits per second, an 11KB burst of data sent to this port would cause the processor core to be interrupted, and possibly awakened from idle mode, almost 1,500 times in one second.
If you don't actually need to process data in these small, 8-byte chunks, the waste is tremendous. Ideally, DMA would be used with larger buffer sizes, causing interrupts to occur at a much more manageable rate, perhaps 10 or 100 times per second, allowing the processor to idle in between. Using DMA for such activities has been shown to reduce processor utilization by up to 20%, reducing CPU power consumption and increasing the amount of processor bandwidth available for other threads.
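The arithmetic behind these interrupt rates is easy to check. Assuming 10 bits on the wire per byte (start bit, 8 data bits, stop bit), a small helper reproduces the figures above; the 1,024-byte DMA buffer in the usage note is a hypothetical size chosen only for illustration.

```c
#include <assert.h>

/* Interrupts per second for a given line rate, given how many bits the
   wire carries per byte and how many bytes accumulate per interrupt. */
static unsigned irqs_per_second(unsigned bits_per_sec,
                                unsigned bits_per_byte,
                                unsigned bytes_per_irq) {
    return bits_per_sec / bits_per_byte / bytes_per_irq;
}
```

At 115,200bps with an 8-byte FIFO threshold this gives 1,440 interrupts per second, matching the "almost 1,500" figure; a hypothetical 1,024-byte DMA buffer cuts that to about 11 per second, letting the core idle in between.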
Clock and voltage scaling

Dynamic clock and voltage adjustments represent the cutting edge of power reduction capabilities in microcontrollers. This advance in power management is based on the following observation: the power consumed by a processor is directly proportional to the clock frequency driving it and to the square of the voltage applied to its core.
Processors that allow dynamic reductions in clock speed provide a first step toward power savings: cut the clock speed in half and the power consumption drops proportionately. It's tricky, however, to implement effective strategies using this technique alone, since the code being executed may take twice as long to complete. In that case, no energy may be saved.
Dynamic voltage reduction is another story. An increasing number of processors allow voltage to be dropped in concert with a drop in processor clock speed, resulting in a power savings even in cases when a clock-speed reduction alone offers no advantage. In fact, as long as the processor does not saturate, the frequency and voltage can be continually reduced. In this way, work is still being completed, but the energy consumed is lower overall.
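The frequency-times-voltage-squared relationship can be made concrete with a small helper. The voltage figures in the note below are hypothetical, chosen only to illustrate the squared term.

```c
#include <assert.h>

/* Dynamic power scales as P ~ f * V^2.  Given frequency and core-voltage
   scale factors relative to full speed, return power relative to the
   full-speed baseline (1.0 = no change). */
static double relative_power(double f_scale, double v_scale) {
    return f_scale * v_scale * v_scale;
}
```

Halving the clock alone gives a relative power of 0.5, but a compute-bound task then takes twice as long, so no energy is saved. If the lower speed also permits dropping a hypothetical 1.5V core to 1.1V, power falls to roughly 27% of baseline, and even with the doubled run time the task consumes only about 54% of the original energy.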
Even these approaches can be improved upon by considering that not all threads are equally productive consumers of processor bandwidth. Threads that are efficient users of processor bandwidth make use of every cycle allocated to them, so they take longer to complete as the processor's clock speed is dropped. I/O-bound threads, on the other hand, use all the processor cycles allocated to them but take the same amount of time to complete even as the clock speed of the processor drops.
As an example, consider the PC Card (formerly PCMCIA) interfaces used by many PDAs. When data is written to a flash memory card, the bottleneck in the system is not the speed of the processor but the physical bus interface and the time the card's firmware takes to erase and reprogram flash.
Ideally, the intelligent waiting techniques discussed above would be used to minimize power consumption in this case, but wait times are often highly variable and much smaller than the operating system's time quantum. As a result, intelligent waiting would hurt performance, so these drivers often resort to polling status registers. Reducing clock speed in these cases would conserve power, yet it would have negligible impact on the time required to write data to the card, since most of the processor cycles are spent polling.
The challenge, of course, is knowing when it's possible to decrease clock frequency and voltage without noticeably affecting performance. As a software developer, it's unwieldy to have to consider when it's appropriate to drop clock speed in your driver and application code, and this technique becomes even trickier in multitasking environments.
Intelligent shutdown

So far, we've discussed only what to do when the device is running; now, let's consider what happens when it is turned off. Most of us take for granted that we can turn on our PDA and have it pick right up where it was when we last used it; if we were in the middle of entering a new contact, that's where we'll be when we turn it back on a week or month later. This is accomplished with an intelligent shutdown procedure that effectively tricks any executing application software into thinking the device was never turned off at all.
When the user turns the device off by pressing the power button, an interrupt signals the operating system to begin a graceful shutdown that includes saving the context of lowest-level registers in the system. The operating system does not actually shut programs down, but leaves their contents (code, stack, heap, static data) in memory. It then puts the processor into sleep mode, which turns off the processor core and peripherals but continues to power important internal peripherals, such as the real-time clock. In addition, battery-backed DRAM is kept in a self-refresh state during sleep mode, allowing its contents to remain intact.
When the power button is pressed again, an interrupt signals the processor to wake up. The wakeup ISR uses a checksum procedure to verify that the contents of the DRAM are still intact before restoring the internal state of the processor. Since DRAM should contain the same data as when powerdown occurred, the operating system can return directly to the thread that was running when the device was powered off. As far as the running application is concerned, it never even knew something happened.
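One way to implement the wakeup check is a checksum computed over the preserved RAM image at shutdown and recomputed by the wakeup ISR before any state is restored. This sketch uses an invented rotate-and-xor sum over a simulated RAM buffer; a real system might well use a stronger CRC and cover only selected regions.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Rotate-and-xor checksum over the preserved RAM image.  The shutdown
   path stores the result; the wakeup ISR recomputes it and compares,
   falling back to a full reboot if the two values differ. */
static uint32_t ram_checksum(const uint8_t *ram, size_t len) {
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = ((sum << 1) | (sum >> 31)) ^ ram[i];
    return sum;
}
```

If the batteries were pulled or drained during sleep and DRAM contents were lost, the recomputed checksum won't match the stored one, and the system can reboot cleanly instead of resuming into corrupted state.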
This approach saves power primarily because it avoids the processor-intensive and time-consuming task of rebooting. Rebooting a sophisticated device can take several seconds, during which time the system is loading drivers. This time is essentially wasted from the user's standpoint, since the device can't actually be used until the boot completes. When you think about the number of times you turn your battery-operated devices off and on, an intelligent shutdown procedure makes a lot of sense, both for reducing power consumption and for improving usability.
Another very important factor in intelligent shutdown is minimizing power consumption while in sleep mode. Since a battery-operated device may sit on a shelf overnight or over a weekend, and since some power is required to refresh DRAMs and parts of the processor's peripheral interface, the batteries actually lose some capacity during periods of sleep. Minimizing sleep mode power consumption can be the difference between a device that has to have its batteries recharged every day and one that can go weeks between recharges.
Minimizing sleep mode power consumption requires analyzing the hardware in your system and determining how to set it into the lowest-possible power state. Most battery-operated systems continue to power their general-purpose I/O pins during sleep mode. As inputs, these I/O pins can be used as interrupts to wake up the device; as outputs, they can be used to configure an external peripheral. Careful consideration of how these pins are configured can have a large effect on sleep mode power consumption.
For example, if you configure an I/O pin as an output, and it is pulled up to Vcc, programming the pin to a logic-0 in your shutdown setup will cause current to flow through the pull-up resistor in sleep mode. Additionally, if you program a pin as an input, and it is not connected to an output, it will float, which can result in spurious logic level transitions that increase power consumption. It's important to analyze these situations and configure your pins properly.
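The pull-up example can be reduced to a simple rule for the sleep-time output levels. The masks and function here are illustrative, not a real part's registers: the idea is only that an output pin with an external pull-up to Vcc should be parked at logic 1 so no current flows through the resistor, while the remaining outputs are driven low.

```c
#include <assert.h>
#include <stdint.h>

/* Given which pins are outputs and which have external pull-ups to Vcc,
   compute the level each output should hold during sleep: pulled-up
   outputs are driven high (no current through the resistor), the rest
   low.  Input pins (not in output_mask) are unaffected here; they should
   separately be kept from floating, e.g. via internal pulls. */
static uint32_t sleep_output_levels(uint32_t output_mask,
                                    uint32_t pullup_mask) {
    return output_mask & pullup_mask;
}
```

A shutdown routine would write this value to the port's data register just before entering sleep mode, as part of the same hardware audit that handles floating inputs.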
Powering the future
There are many techniques that you can use to reduce the power consumption of a battery-operated device, some simple and straightforward, others less so. Luckily, since power management in embedded devices is increasingly important, you can expect help in the future. Already, software technologies are available that help reduce power consumption.
Meanwhile, researchers are working on compilers that can optimize code to reduce power consumption. Eventually, you may be able to let your software development tools take care of some of these power management techniques automatically.
Nathan Tennies is the director of software engineering at InHand Electronics, a developer of low-power Windows CE and Linux platforms. Nathan studied electrical engineering at the University of Virginia, is a 20-year veteran of the computer industry, and can be reached at .