Succumbing to a bit of techie lust I recently bought a new laptop, a Sony Vaio that uses a 1.7 GHz Intel Centrino chipset.
Now I can’t wait for winter.
The bottom of the machine warms up like an old-fashioned vacuum tube… and then continues to heat… until it glows a faint cherry red!
That’s just a bit of an exaggeration, but the heat radiated by this machine is nothing short of astonishing. Curious, I hunted around Intel’s web site and found the Centrino databook at an FTP site.
The processor sucks up to 26 amps at a bit over 1 volt. Worse, power requirements fluctuate wildly over the course of microseconds, turning the power plane into an RF circuit if not designed with excruciating care. Nearly 10% of the databook’s 372 pages are devoted to practical advice about getting power to the chip.
The company’s Xeon CPU gets fabulous performance by prodigious power consumption. It can require 120 amps just for the processor chip!
That’s practically enough to start a car. Or to arc weld thin metals.
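The arithmetic behind those currents is just P = V × I; the exact core voltages below are assumptions for illustration ("a bit over 1 volt" taken as 1.1 V, and ~1.3 V assumed for the Xeon):

```python
# Back-of-the-envelope power figures for the currents quoted above.
# P = V * I. The core voltages are illustrative assumptions, not datasheet values.

centrino_watts = 1.1 * 26    # Centrino: up to 26 A at an assumed 1.1 V
xeon_watts = 1.3 * 120       # Xeon: 120 A at an assumed 1.3 V core voltage

print(f"Centrino: ~{centrino_watts:.0f} W")
print(f"Xeon:     ~{xeon_watts:.0f} W")
```

Roughly 30 W for the laptop part and over 150 W for the Xeon, which puts the arc-welding quip in perspective.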
EDN’s May 13, 2004 issue contains a graph that shows Pentium power densities within an order of magnitude of that of the core of a nuclear reactor!
Companies running racks full of Pentium servers have a compounding problem. According to Hewlett Packard (HP), a big server room can consume $1.2 million in electricity every year. With energy, particularly oil, becoming scarcer and more problematic (I highly recommend the book “Winning the Oil Endgame” by Amory B. Lovins et al. for practical solutions to this problem), it seems rather wasteful to turn so much power mostly into heat.
Raw compute horsepower provided by high clock rates and billions of transistors speculatively executing deeply pipelined instructions buried in multi-level cache memories inevitably (today, at least) requires lots of watts.
Old timers will remember the days when CMOS logic was nearly a joke, suitable only for low-speed micropower applications running at funny voltages. But increasing power densities doomed bipolar logic; ironically today CMOS rules the digital world as the heat generated by a billion bipolar transistors would initiate a fusion reaction.
Once upon a time an embedded system was an app using a low-end processor. Today we stuff Linux and Windows into devices that are clearly embedded. The distinction between the desktop and embedded is somewhat more blurred than of yore. But perhaps low power consumption is one distinguishing property of most embedded applications.
Cell phones containing a pair of 32-bitters run for a week or more on a single battery charge. Miserly instruments monitor environmental parameters for years on a pair of AAs. Mars rovers operate motors and computers from feeble solar energy.
What about your products? Do they suck power from a fire hose or sip microwatts?
Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at . His website is .
I stopped buying flashlights that use D-cell batteries and old-school light bulbs… very inefficient! I now own various LED flashlights. They can be left on 24/7 for 30+ days and still shine brightly.
– Steve King
The problem here is the “throw more hardware at it” mentality that is so common these days, often driven, it seems, by the awkward but apparently necessary “time to market” brigade.
What we should be hearing is how to make do with enough – and by extension, how to develop optimised, efficient code for our apps. Most embedded folks get that, I think.
It has never been more important for developers to write efficient applications – often a trade-off with performance. Why bother when you get to embed the latest hyperthreading, multi-core power monster from our pals at Intel…?
Well, we're going to have to, now and in the future. For example, the mobile devices/products market is constrained almost in its entirety by power consumption. It affects every aspect of design. You have to code with that in mind.
Get it right, do more with less, and maybe embedded nuclear reactors will remain in the realm of fiction…
I may be showing my age, but I remember reading about an IBM prediction of the future computer: “a smoking hairy golf-ball”. It would be smoking because of all the heat it dissipated, hairy because of all the I/O, and a golf-ball to keep propagation delays low.
I think they “got it in one”.
– Fred Williams
We have clearly pushed Si based electronics near its limits — density, speed and power consumption. Hopefully in a few years we will see new technology emerge to replace transistors much the same way vacuum tubes were replaced some four decades ago. That will be a once-in-our-lifetime excitement.
– David Liu
What we're seeing with our products is customers who want it all – low power AND high clock speeds. This can be difficult to achieve and often involves compromises, especially under environmental extremes.
What makes things difficult is that our customers like to design their systems based on absolute maximum ratings. So say the A/C system quits in the equipment room – our boards have to continue to operate at the ridiculous ambient temperature of 60 degrees C or higher. If you take a board that usually uses 50 W of power at 25 C ambient and crank the ambient temperature up to 60 C, that 50 W figure can nearly double. It's simply hard to keep chips cool at 60 C ambient without resorting to extreme cooling measures. I predict we'll see an increasing emphasis on advanced cooling of these high-power chips in the future to keep these absolute maximum power ratings down.
Clock-rate escalation, the war that drove Intel and AMD to nuclear power densities, is an easy way to get performance. Consequently, many embedded systems developers tend to say, “I just need a processor that's X percent faster than last time.” There are ways to reduce power dissipation in embedded systems, but upwardly creeping clock rates aren't one of them. In general, wider, more parallel execution hardware drops clock rates, operating voltages, power dissipation, and energy use. Smart exploitation of Moore's Law extends battery life.
– Steve Leibson
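The trade-off Steve Leibson describes falls out of the textbook CMOS dynamic-power relation, P ≈ C·V²·f. The capacitances, voltages, and frequencies below are made-up illustrative numbers, not figures for any real part:

```python
# Sketch of why wider, slower hardware saves power: CMOS switching power
# scales as P = C * V^2 * f. All numbers here are illustrative assumptions.

def dynamic_power(c_farads, volts, hertz):
    """Classic dynamic (switching) power estimate for CMOS logic."""
    return c_farads * volts**2 * hertz

# One fast core: 1 GHz at 1.2 V, with an assumed 1 nF of switched capacitance.
fast = dynamic_power(1e-9, 1.2, 1e9)

# Two parallel units at half the clock; the lower frequency permits a
# lower supply voltage (assume 0.9 V), and V enters the equation squared.
wide = 2 * dynamic_power(1e-9, 0.9, 0.5e9)

print(f"fast core: {fast:.2f} W, wide design: {wide:.2f} W")
```

Same nominal throughput, but the wide design dissipates noticeably less, because voltage scaling pays off quadratically.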
Old timers also remember games that had a single processor providing NTSC video output, user inputs (with realistic reaction times), and running the game strategy itself.
I think the real problem is “tools” like C++ and Java that, while good at conceptualizing stuff and helping to prevent certain types of bugs, have nevertheless killed code efficiency to the point that it is necessary to upgrade the instruction rate merely to sustain the same throughput as before.
Computer Science courses are based on these resource pigs, with nary a mention of what it takes inside the runtime environment to support them. There needs to be a re-focus of CS curricula on those low-level skills that the creators of Pac Man possessed.
– Andy Kunz
One thing to add is that current processors also have a lot of power-conservation features. Do you ever notice new PCs that crank up the fan while compiling, but kick it back down when idle?
Just because it can draw 26 amps doesn't mean it does so when it doesn't need to, as you allude to.
Back in the old days, processors drew the same amount of current while idling as they did under full load.
I don't have data to back this up, but I'll bet the power per useful work ratio of computers has gone down dramatically as time has moved on.
Of course, if you don't define compressing and decompressing video files as useful work, you may not agree 😉
Not many people were aware that the rolling brownouts in California were mostly necessitated by the overwhelming number of server clusters requiring a large portion of manufactured power. If only people could wise up and realize that the Intel/Microsoft way is not the only way, and is often the wrong choice! By simply switching to Apple server equipment, power requirements could decrease by as much as 70% for those companies who spend millions more than they need to power the things and to pay the salaries of the legion of technicians who sit around and wait for Microsoft software to run afoul.
– Matthew Staben
I wonder how the x86 gear compares to the IBM/Moto PowerPC chips? I remember reading somewhere that a G4 CPU consumed 1/4 of the real estate and 1/10 of the power of a comparable Pentium, and did twice the work per CPU cycle.
Perhaps some more enlightened chip design is called for. Somewhere I have a paper from Motorola describing the development of the 68K series: They started from scratch and didn't worry about any backwards compatibility with the 6809 and 6800.
For Intel, I think you were supposed to be able to run 8008 code on all subsequent Intel processors: Backwards compatibility at all costs, and the code has spread all over the world, so we can't go back now…
– Larry Gadallah