
Low Power — Not High Performance — Rules In Net-Centric Designs

Never before have you had as many choices on how to reduce power consumption and dissipation in your small footprint designs.

But on the other side of the equation, never has there been as much need. Outside the desktop and server environments, the thrust of most design work has been toward a variety of small footprint, net-centric embedded devices and information appliances that must operate on as little power as possible, but without sacrificing performance.

Unfortunately, semiconductor process specialists and circuit logic experts are only now catching on. At the most recent ISSCC, developers of many of the advanced microprocessor designs expressed concern that while they could achieve clock rates into the multi-gigahertz range, the chips would be hot enough to fry an egg on.

At the logic and functional block level, processor designers and software developers have been spending as much time optimizing their designs for the best possible power consumption and dissipation as they have been getting the best performance possible. Maybe even more.

And at recent meetings, such as the International Conference on Compilers, Architecture, and Synthesis For Embedded Systems, a number of the presentations focused on the power aspects of modern CPU design and implementation, as well as compiler and tool strategies to optimize them.

At the architectural level, for example, designers at Motorola's M*Core Technology Center have developed a programmable unified cache architecture for the M340 that incorporates a number of features focused on power savings, including fine-grained control of write mode selection, way management, and buffer enabling/disabling options. STMicroelectronics engineers have developed special compiler optimizations for the ST120 that allow significant reductions in power without affecting performance. The optimizations involve techniques such as loop unrolling, pipelining with modulo expansion, and branch straightening. And at Conexant Systems, engineers have developed a set of tools that let users examine almost any aspect of a design and then modify either the architecture or the application code for the most power-efficient performance.

Indeed, many of the papers at recent system and component level conferences have focused on power issues. Similarly, at the process and transistor level, companies are going back to the drawing board to look at how things can be changed to squeeze more and more performance out of the silicon, but with less and less power consumption. It seems to me that they are reaching into the grab bag of technology developments that have been tried in the past and found wanting — mostly because market conditions did not favor them — and looking to see if they might be useful now.

Most recently, Intel has gone back and pulled out one of these: silicon-on-insulator (SOI), which it had derided in the late '70s and early '80s as unnecessary, expensive, and impractical. IBM and AMD have already beaten Intel to the starting line. IBM has been commercializing SOI for a number of years, and AMD is moving closer to production with it, having just placed a multimillion-dollar order with wafer manufacturer Soitec.

At the 2001 International Electron Devices Meeting, Intel revealed its plans to develop fabrication techniques based on SOI, using it to create a new high-K insulating layer to replace traditional silicon dioxide and achieve a leakage rate 10,000 times lower than with previous techniques. Researchers at the conference claimed that this approach would allow fabrication of transistors that switch data as much as 25 times faster than present technologies with no increase in power consumption.

And I believe in the tooth fairy.

The basic problem they and all researchers at the silicon level face is that as they have made transistors smaller, and thus faster, they have also reduced the current available to drive them, making it harder to push electrons through the gates. With SOI techniques they have been able to lower resistance, and thus power, by about 30 percent, by enlarging the source and drain in the vertical direction so that they rise above the surface of the chip.

Intel seems confident that its SOI solution will give it all of the power savings it will need without interrupting its drive toward higher performance for at least one or two generations of processor design.

I have heard such projections before. They all remind me of the automobile industry, which has resisted all calls to shift from the problematic, but well-understood, internal combustion engine in the face of declining, uncertain, and more expensive oil resources; increased pollution from its byproducts; and declining efficiency/cost tradeoffs.

For years, it has spent ever more to “improve” its current technology with such fixes as computerized engine controls, but continued to resist the move to totally new fuel technologies, such as hydrogen, or to paradigms other than the piston-driven internal combustion engine.

It is only in recent years that the industry has accepted the inevitable, moving, however slowly, toward those engine and fuel technologies it has resisted for so many years; witness its increased funding of work on fuel cells, hydrogen burning engines, and hybrid battery/internal combustion engine combos.

Looking at what is going on right now in the computer industry, I cannot help but come to the conclusion that we are getting close to that same point of transition. Intel seems to believe that SOI will take the silicon engine a bit farther along. But how far and at what cost? What is the cost of the insulating technology that is now being proposed and what will be the next stopgap: another “old” technology from the past, such as silicon on sapphire or silicon on diamond? Or perhaps some exotic, artificial insulating material?

I don't know what to think. Maybe we are taking that old dictum too seriously: “Better the evil you know than the one you don't.” Many engineers I've canvassed admit that what we have been using is running out of steam. But it is established technology with documented problems and known solutions.

But our options, I think, are running out. So before we hit the wall of physical and fiscal limitations, maybe we had better start thinking about changing the underlying engine technology by which current integrated circuits operate. We need to shift from the fabrication equivalent of the piston-based internal combustion engine to a more advanced combustion engine, or to one that operates on entirely different principles.

Based on conversations with some of you I have some ideas about the nature of that new engine. But what about you? When will we hit the wall? And when do we begin thinking about a new engine rather than tinkering with the old one? How far will the existing silicon engine technology take us? What are some of the alternatives?

Bernard Cole is the managing editor for embedded design and net-centric computing at EE Times. He welcomes contact. You can reach him at 520-525-9087.
