Maximize the battery life of your embedded platform
Editor's Note: In this paper originally presented at Design East 2012, the author looks at issues and techniques for squeezing maximum energy from batteries in embedded systems in the following parts:
- In this part, the author reviews key methods for power reduction and addresses the nature of efficiency in embedded systems
- Part 2 looks at the energy cost of memory access and power-reduction methods for memory access
- Part 3 continues the discussion with an examination of the role of computational efficiency in extending battery life
I have long been of the opinion that battery life and the management of energy consumption are becoming defining problems in embedded system development. I first gave a talk on this subject at the ARM TechCon event in Santa Clara in 2009. Trawling the web at the time produced some academic research from the previous ten years and a few articles. In researching this updated session, one of the most striking observations was how much more interest there now is in the subject. Not a week goes by without at least one or two papers published on power-efficient chip design, energy-efficient software design, battery technology and so on.
As portable embedded devices become ever more powerful and capable, the need to be frugal with energy grows ever more important. On top of all their other functionality, we now expect our smartphones to act as WiFi hotspots, portable data projectors, HD video players and high-definition games consoles with stereo sound, and the list goes on. All of these consume precious energy, stored in our precious battery. And that battery isn’t getting any bigger: with the form factor of the mobile phone constrained by the envelope of the standard shirt pocket, there is no room for it to grow.
The chip and board design community has been working for years on power-efficient design techniques, and synthesis tools are evolving very quickly with features such as architectural clock gating, state-retention power gating, dynamic voltage and frequency scaling (DVFS) and the like. But all this comes to naught unless the software systems which run on these platforms take advantage of the facilities offered by the hardware.
Given the emphasis on battery life for portable devices, it seems strange that very few software engineers actually have energy reduction among their daily project responsibilities. I suspect that those who do give the subject some thought are likely to do so on a “commendation vs. court martial” basis. We are entering a period when this will have to change. As battery life and performance requirements continue to fight with each other, we, as software engineers, need to spend a lot more time looking at how we can design and write our software in an energy-efficient way.
As engineers, we all love finding geeky solutions to the problems which we come across. It may come as a surprise to find that, in this particular area, there are none. Clever tricks may save some power, but the field is dominated by other, simpler considerations. There are several very large elephants in this room and we must be careful to hunt the elephants we can see, before spending significant effort chasing smaller mammals.
I guess most of us know where the power goes. Silicon systems consume two kinds of power, in general.
Dynamic power is consumed when the system is running. This is the power used in switching logic elements from one state to another, driving I/O circuits, searching cache arrays and so on. It is directly related to supply voltage and operating frequency; in fact, it is related to the square of the voltage, so the ability to reduce operating voltage is very useful indeed. Generally, the two go together: reducing operating frequency also allows a reduction in operating voltage, giving a double benefit when full processing power is not required.
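The "double benefit" can be made concrete with the standard first-order model of CMOS dynamic power, P = C·V²·f. The sketch below uses purely illustrative numbers (the capacitance, voltages and frequencies are assumptions, not figures for any real chip) to show why dropping voltage along with frequency saves far more than frequency scaling alone:

```python
# First-order CMOS dynamic power model: P = C * V^2 * f, where C is the
# effective switched capacitance, V the supply voltage and f the clock
# frequency. All figures below are illustrative assumptions.

def dynamic_power(c_eff, voltage, freq_hz):
    """Approximate dynamic power in watts: P = C * V^2 * f."""
    return c_eff * voltage ** 2 * freq_hz

C_EFF = 1e-9  # 1 nF effective switched capacitance (assumed)

full    = dynamic_power(C_EFF, 1.2, 1e9)  # 1.2 V at 1 GHz
half_f  = dynamic_power(C_EFF, 1.2, 5e8)  # halve frequency only
half_fv = dynamic_power(C_EFF, 0.9, 5e8)  # halve frequency AND drop voltage

print(f"full speed:          {full * 1e3:.0f} mW")     # 1440 mW
print(f"half frequency:      {half_f * 1e3:.0f} mW")   # 720 mW (2x saving)
print(f"half freq + lower V: {half_fv * 1e3:.0f} mW")  # 405 mW (~3.6x saving)
```

Because voltage enters as a square, the modest drop from 1.2 V to 0.9 V alone cuts power by a further 44%, which is why DVFS schemes scale voltage and frequency together rather than frequency alone.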