Dry your eyes. Microprocessors have had a relatively long and useful life, but it's now time for them to step into a supervisory mode. As the industry adapts to changes in demand, reconfigurable logic will take over the lead. To read an opposing view, see Richard Belgard's response.
Semiconductors have been a great business: the equivalent of 14% compound annual growth rate worldwide for the last 40 years. No other industry matches that. The integrated circuit, Moore's law, and the personal computer have been a great beginning for the semiconductor industry. With the low-cost PC and low-cost transistor, however, we're now at the end of the beginning of semiconductor development.
The market is shifting from tethered (plugged into the wall for power) to untethered systems. As it shifts, the engineering goal changes from cost performance to cost-performance-per-watt. But today's microprocessors and digital signal processors cannot satisfy the combined performance and power requirements of untethered systems. ASICs are too expensive. Programmable logic devices are too slow and too expensive. Faster transistors burn less active power, but their leakage currents rise by an order of magnitude in two process shrinks—bad for untethered systems.
In other words, we can't reach the cost-performance-per-watt goals of untethered systems by just shrinking the components we have.
It seems a situation with no solution, but the industry is about to emerge from a 30-year stall in the improvement of design methods that was caused by our preoccupation with the microprocessor. The answer is reconfigurable systems.
The first 10 years of progress brought us the microprocessor, which raised the engineer's productivity by giving up the efficiency of customization for the convenience of programmed implementation. In other words, microprocessors were cheaper and easier to use than ASICs. The first microprocessors were hidden (“embedded”) in systems, where they mimicked the behavior of what would otherwise have been custom logic.
Since then, microprocessor unit volumes have grown to billions of units per year. Problem solving was once done in hardware design, but microprocessor-based design has become so common that programming is now synonymous with problem solving. The microprocessor, with its memory and peripheral chips, is the workhorse of systems.
From 1971 to 1981, 10 years of microprocessor progress made the personal computer possible. In recent years, PCs have accounted for almost 40% of the dollar value of all semiconductor components sold; in other words, the PC sector makes up 40% of the entire semiconductor business. The microprocessor has given our industry much success.
End of the beginning
Demand for PCs and leading-edge performance was so strong for so long that we came to believe that Moore's law created the industry's success. But Moore's law is just an aggressive supply curve. We forgot that demand is a force of its own. Demand has been so strong that we've assumed it will be infinite. It isn't, however. The law of supply and demand is at work and has led to the “value PC.”
Figure 1: The path to the value PC
At its introduction, the PC's performance was woefully inadequate. In the made-up example of Figure 1, we assumed that the demand for performance was 10 times what the industry could supply in 1981 (year one on the chart). In the beginning, Moore's-law improvements in semiconductors doubled performance about every two years—the rate at which the supply of performance increased. Demand increased at its own rate. There was no correlation between the rate of improvement in the supply of performance and the rate of increase in the demand for performance. Early adopters of technology—members of the nerd community—became the leading edge of demand. Over time, demand spread as the market expanded. Late adopters didn't demand the leading-edge performance that early adopters did. After 20 years of Moore's-law progress, the PC's performance has become good enough to satisfy most of its users. Purchases have shifted from leading-edge PCs to value PCs, which offer good-enough performance at low prices.
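Figure 1's numbers are admittedly made up; a toy model makes the crossover logic concrete. The 10x starting gap and the doubling-every-two-years supply rate come from the text above, while the 10%-per-year demand growth is an assumed illustrative value:

```python
def crossover_year(gap=10.0, supply_doubling_years=2.0, demand_growth=0.10):
    """Return the first year in which the supply of performance
    catches up with demand, with both compounding from year zero."""
    year = 0
    supply, demand = 1.0, gap  # demand starts 10x ahead of supply
    while supply < demand:
        year += 1
        supply *= 2 ** (1 / supply_doubling_years)  # doubles every two years
        demand *= 1 + demand_growth                 # assumed demand growth
    return year

print(crossover_year())  # supply catches a 10x head start in 10 years
```

With zero demand growth, a supply that doubles every two years closes a 10x gap in about seven years; modest demand growth stretches the catch-up to roughly a decade.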
The value PC has motivated the movement of engineering effort to untethered systems, which have higher profit margins than the value PC. This shift changes the engineering goal from cost performance to cost-performance-per-watt.
A similar supply-and-demand curve exists for transistors. Moore's-law progress supplies smaller, faster transistors. When the transistor was invented, it wasn't good enough for most applications. As with the PC, the transistor's applications grew and diversified. Demand spread between leading-edge applications (nerd toys) and trailing-edge applications (consumer appliances). Again, there was no correlation between the rate of improvement in transistors and the rate of increase in the demand for performance (particularly for trailing-edge applications). After 45 years of Moore's-law supply, most applications can get transistors that are good enough. The value transistor is a transistor that's good enough for the application. Over time, value transistors are available for more and more applications.
As transistors shrink, the active power they use decreases, while the leakage current increases. In two process generations, leakage current increased 10 times. As the number of transistors on a chip increases, fewer and fewer of those transistors are active at any moment, but they're all leaking. The result is that leakage currents represent an increasing part of the power budget. This is bad for untethered systems, which have long standby times and short bursts of activity. As Figure 2 shows, an application's best transistor is not necessarily the smallest transistor. At the left of the chart, active power dominates; at the right of the chart, leakage power dominates.
Figure 2: An application's best transistor isn't necessarily the smallest available transistor
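The article gives only the trends: active power falls with each shrink, while leakage rises roughly 10x per two generations. A toy model, with made-up starting values and an assumed 1% duty cycle standing in for a bursty untethered device, shows why the lowest-power transistor is not the smallest one:

```python
def average_power(gen, duty_cycle=0.01, active0=100.0, leak0=0.01):
    """Toy average-power model by process generation (all rates illustrative).
    Active power is assumed to halve per generation; leakage power is
    assumed to rise 10x every two generations."""
    active = active0 * 0.5 ** gen      # shrinking helps while the chip runs...
    leakage = leak0 * 10 ** (gen / 2)  # ...but leakage is always on
    return duty_cycle * active + leakage

best_gen = min(range(10), key=average_power)
print(best_gen)  # the minimum falls at an intermediate generation, not the last
```

Raise the duty cycle and the optimum shifts toward smaller transistors; lengthen the standby time and it shifts back, which is exactly the untethered system's predicament.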
Value transistors, leakage currents, and increasing design costs decrease the incentive to continue shrinking transistors.
Chapter 2—the MPU
Before the computer, engineers solved problems with custom hardware. The computer introduced a new problem-solving method: common hardware and custom software. Programmed solutions displaced custom hardware where the microprocessor's performance was adequate. As the microprocessor improved, its range of applications increased. As application demands increased, the industry's answer has been to shrink transistors to make them faster. But the industry has stuck with building smaller transistors and with increasing microprocessors' performance beyond reason. For the same money we spend on heroic efforts to shrink transistors, we would achieve better results by other means.
The microprocessor uses predefined strings of bits (instructions) to configure itself on every clock cycle. This instruction-based processing mimics the behavior of what would otherwise be custom logic. It's not efficient enough for untethered applications, however. And we've been increasing the microprocessor's performance by increasing its clock rate, but doubling the clock rate doubles the power dissipation. Designers compensate by lowering the microprocessor's voltage. Halving the voltage lets the microprocessor run four times as fast for the same power. With microprocessor voltages falling below one volt, engineers are running out of room to compensate—the transistors will quit transistoring.
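The arithmetic above rests on the classic CMOS dynamic-power approximation P = C·f·V² (switched capacitance, clock frequency, supply voltage). A two-line check of the paragraph's claims:

```python
def dynamic_power(c, f, v):
    """Classic CMOS dynamic-power approximation: P = C * f * V^2."""
    return c * f * v * v

baseline = dynamic_power(1.0, 1.0, 1.0)
print(dynamic_power(1.0, 2.0, 1.0) / baseline)  # doubling f doubles power: 2.0
print(dynamic_power(1.0, 4.0, 0.5) / baseline)  # 4x f at half V, same power: 1.0
```

The quadratic voltage term is why voltage scaling has been the designer's escape hatch, and why running out of voltage headroom closes it.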
If the microprocessor won't do, what shall we use? The usual answer is application-specific integrated circuits (ASICs). An ASIC is custom logic for a particular application. Custom logic can be orders of magnitude more efficient than microprocessor-based solutions. Escalating ASIC design costs require ever-larger end markets to assure profitability. Communication protocols for untethered devices are still evolving, making inflexible ASICs unsuitable.
Perhaps programmable logic devices (PLDs), commonly called field-programmable gate arrays (FPGAs) in their SRAM-based implementations, will do. PLDs have the efficiency of custom logic with the field-adaptable advantages of the microprocessor. No, PLDs are too expensive, too slow, and too power-hungry.
ARC and Tensilica offer a compromise with their design-time configurable microprocessors. These companies allow designers to build custom logic into the processor core. They can achieve 10 or 100 times the performance of the microprocessor's base instruction set for a particular application. The problem with microprocessors from ARC and Tensilica is that they're fixed at design time, making them application-specific. If you customize the instruction set to optimize a camera application, that's about all the chip can be used for.
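The mechanism behind those speedups can be sketched with a hypothetical instruction-count model (the operation counts below are assumptions for illustration, not figures from ARC or Tensilica): a base instruction set spends several instructions per inner-loop element, while a fused custom instruction collapses them into one.

```python
TAPS = 64  # hypothetical FIR-filter length

def base_isa_ops(taps, ops_per_tap=4):
    """Assumed cost per tap on a stock ISA: load, multiply, add, pointer update."""
    return ops_per_tap * taps

def custom_isa_ops(taps):
    """One fused multiply-accumulate custom instruction per tap."""
    return taps

print(base_isa_ops(TAPS) / custom_isa_ops(TAPS))  # 4.0x fewer instructions
```

Real gains compound further when the custom datapath processes several elements per cycle, which is how order-of-magnitude speedups arise.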
A better solution would be to build a chip that was generic in manufacture and customized in the field. Imagine a microprocessor similar to that from ARC or Tensilica that incorporates a reconfigurable logic unit (RLU). Its function could be altered to optimize encryption in one application and to optimize a polynomial filter in another. The startup company Stretch, Inc. makes such a chip.
ARC and Tensilica have paved the way. Next will come Stretch's generic microprocessors, which are physically configurable in the field, on application boundaries: the same chip can be configured as an encryption engine in one application and as an impulse-response filter in another. The microprocessor's instruction-based processing moves to a supervisory role. Custom circuit definitions (bits) are “paged” into a programmable logic area on the chip. Paged logic takes the place of application-specific programming or of application-specific hardware. These components are generic in manufacture and customized in the field, giving them the huge cost advantage of high production volume.
Altera and Xilinx are best positioned for the long term, because they offer soft-core microprocessors that could be run-time configurable. They also offer development software and vast libraries of intellectual-property cores. But they will wait until the opportunity is obvious, so there's near-term opportunity for Stretch and others.
Move over, MPU
It's time for a change. For decades we've been happy with solutions based on general-purpose microprocessors and with brute-force shrinking of transistors as a way to improve performance. Recently, ARC and Tensilica have made headway with application-specific microprocessors and Stretch has introduced run-time configurable microprocessors. It's time to move the microprocessor to a supervisory role and to let application-specific logic do the work. But this custom logic must be implemented in a generic component; that is, in programmable logic. Altera and Xilinx see this opportunity. But microprocessors and DSPs are a $40-billion market. Therefore, Altera and Xilinx will move slowly and deliberately. They will let the startups take the arrows, defining the market and retraining a reluctant engineering base.
Nick Tredennick is an editor for the Gilder Technology Report. He has extensive experience in microprocessor design, with nine patents in logic design and reconfigurable computing, and was named an IEEE Fellow for his contributions to the field. At Motorola he designed the 68000. At IBM's Thomas J. Watson Research Center he designed the Micro/370. Nick has founded several companies, including NexGen, where he hired and managed the team that designed the microprocessor that became the AMD K6. He earned his PhD in electrical engineering at the University of Texas. His e-mail address is .
Brion Shimamoto is an editor for the Gilder Technology Report. He was vice president of technology at the visual-effects startup Digital Domain and CTO of OpenReach. He has worked for AT&T and NCR. At IBM's Thomas J. Watson Research Center he specified the Micro/370 microprocessor. Contact him at .