The end of Moore's law - Embedded.com

The end of Moore’s law

Gordon Moore's prediction about the increasing number of components that can be put on an integrated circuit is very much alive and well today. However, the often-cited corollaries to this “law” relating to device speed and power dissipation, which are frequently confused with the law itself, are in trouble.

The April 19, 1965 issue of Electronics magazine published the original article on semiconductor device scaling written by one Gordon Moore, who was employed as the director of Fairchild Semiconductor's R&D laboratories.1 Moore's article, entitled “Cramming more components onto integrated circuits,” predicted an exponential growth in the number of “components” (meaning transistors, diodes, resistors, and capacitors) that would be built on an integrated circuit (IC) as semiconductor fabrication expertise grew. His initial observation was that the component-doubling time was 12 months, which he later revised to a somewhat slower 24 months. Since then, others have tried to more closely fit his prediction to actual results by changing the doubling interval to 18 months.
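The choice of doubling interval matters enormously over a decade of compounding. A minimal sketch of the arithmetic (the ~60-component 1965 starting point is an illustrative assumption drawn from the figure cited below, not from Moore's paper):

```python
# Back-of-the-envelope Moore's-law projection.
# Assumption (illustrative): roughly 60 components per chip in 1965.
def components(year, start_year=1965, start_count=60, doubling_months=24):
    """Projected component count for a given year and doubling interval."""
    months_elapsed = (year - start_year) * 12
    return start_count * 2 ** (months_elapsed / doubling_months)

# Ten years out, the three commonly quoted intervals diverge sharply:
for interval in (12, 18, 24):
    print(f"{interval}-month doubling -> "
          f"{components(1975, doubling_months=interval):,.0f} components")
```

With 12-month doubling, ten years means ten doublings (a factor of 1,024); at 24 months it is only five doublings (a factor of 32), which is why fitting the interval to real data mattered so much.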

Moore synthesized this long-lived truism from very few data points. The IC had only been in existence since late 1959, barely five years before Moore's article appeared. By 1965, integrated-circuit fabrication technology had progressed only to the point where it could put 50 or 60 components on a chip. Moore conjured his doubling trick using the five data points he had and put the industry on a breakneck course that has lasted 40 years and will surely continue for at least another 10 to 15 years. However, the doubling time, now codified by the Semiconductor Industry Association (SIA) International Technology Roadmap for Semiconductors (ITRS), may be stretching out to three years as we approach the fundamental atomic limits of the materials used for semiconductor fabrication. Metal widths on advanced ICs are now only a few dozen atoms wide and gate oxides are less than 10 atoms thick.

The law won't live forever
Ten years after his first article appeared, Moore spoke about his “law” (really just an observation) at the IEEE's 1975 International Electron Devices Meeting and said he saw “no present reason to expect a change in this trend.”2 However, over the years, industry experts have frequently predicted the law's demise. For example, the industry was facing the transition from 3-micron and 2-micron lithographies down to 1-micron technology in 1984. At that time, the fear was not that semiconductor fabrication technology would fail to keep pace. Rather, the worry was that IC designers' ability to design such complex chips was falling behind the abilities of the manufacturing processes. Bob Kirk and Tom Daspit of American Microsystems wrote the following in their article “Making the design transition,” published in a 1984 issue of Semiconductor International magazine:

“The ability to design complex 'one-micron design rule' ICs is, in fact, a key in the semiconductor industry's transition to using such processes effectively. . . . The capabilities of one micron technology will not be fully exploited until sufficiently powerful tools in these areas emerge.”

Note that this description is remarkably similar to the situation the industry faces today with nanometer design rules.

The 1984 article continues:

“Design engineers are, in fact, now beginning the transition from pencil-and-paper logic design to interactive design using engineering workstations. . . . It should be noted that wider use of automated design aids is contingent on the acceptance of some reduction in the use of circuit area, known as area penalty. . . . Automated tools generally waste some silicon area; they are not as area efficient as human designers.

While industry pressure exists to reduce area penalty, the sheer complexity of one-micron circuits dictates that some area be traded off in favor of completing designs in a timely way with automated tools. The pressure to reduce area penalty can be softened by trading off total design cost against the total manufacturing costs.”3

So, the issues of moving IC hardware designers to higher abstraction levels and trading off silicon for design complexity and cycle time have been with us for at least 20 years. Even 20 years ago, Moore's law was providing more components than could be accommodated by the design tools and methods in use.

The same is true today even though designers have jumped up several levels of abstraction, from hand-drawn transistors, through schematic-drafting CAD systems, to hardware-description languages and logic synthesis. Moore's law continues to keep the raw silicon capabilities ahead of our design abilities and will probably do so for at least another decade. After that, we'll need to find another medium to work in because silicon will be worked out. Many people, including Moore himself, thought things might not get as far as they have. As recently as 1995, 30 years after writing that first article predicting exponential device growth in semiconductors, Moore said:

“As we go below 0.2 micron–the SIA road map says 0.18 micron line widths is the right number–we must use radiation of a wavelength that is absorbed by almost everything. Assuming more or less conventional optics, problems with depth of field, surface planarity and resist technology are formidable, to say nothing of the requirement that overlay accuracy must improve as fast as resolution if we are to really take maximum advantage of the finer lines.”4

Moore delivered these words to the members of the International Society for Optical Engineering. Not very optimistic for the creator of Moore's law–and Moore went even further in describing his chart for lithographic progress:

“My plot goes only to the 0.18 micron generation, because I have no faith that simple extrapolation beyond that relates to reality.”

Getting even more serious, Moore said:

“Beyond this is really terra incognita, taking the term from the old maps. I have no idea what will happen beyond 0.18 microns.

“In fact, I still have trouble believing we are going to be comfortable at 0.18 microns using conventional optical systems. Beyond this level, I do not see any way that conventional optics carries us any further. Of course, some of us said this about the one micron level. This time, however, I think there are fundamental materials issues that will force a different direction.”4

The end is not near
It didn't happen the way Moore expected. Today, 0.18-micron (now generally called 180nm) lithography is middle-of-the-road stuff and no one considers it miraculous, while quarter-micron (250nm) lithography is trailing-edge and 0.35-micron technology–still in production–is absolutely Neolithic. Actually, even 130nm lithography is quite manufacturable today although it took a lot of engineering magic and ingenuity to “make it so.”

Even 90nm ICs are in production at this time, with 65nm on the horizon. In fact, the FPGA industry currently uses 90nm fabrication as its top-of-the-line technology foundation, and 65nm chips are being fabricated, although no one would say that these are yet in volume production. However, 65nm production is clearly coming, of that there is no doubt. Ted Vucurevich, senior vice president of advanced R&D at Cadence, says the jump from 90nm to 65nm design rules might be quick indeed because the transition requires no changes to the materials flow in the fabrication process, so designers get the immense benefit of the next process node without much of the pain normally associated with such a change.

Way back in 1988, IBM had fabricated the world's smallest transistor using 70nm design rules, an amazing feat for the time. That 70nm transistor ran on 1V when almost all digital ICs at the time ran on 5V. Although the 70nm transistor required liquid-nitrogen cooling to combat thermal noise, the IBM researchers saw no fundamental reason why such tiny field-effect transistors (FETs) couldn't run at room temperature. Today we know they can–quite well, in fact. Back in 1989, leading-edge production technology used 0.7-micron or 0.8-micron design rules. That was 10 times larger than what IBM had achieved in 1988 with its 70nm design, and we're just starting to put such small transistors into production, some 17 years later.

So what is the smallest transistor made today? Just how far has Moore's law been stretched this early in the 21st century? In late 2003, NEC announced it had built an FET with a 5nm gate length. Intel and AMD have publicly discussed fabrication of transistors with 10nm gate lengths, and IBM built one with a 6nm gate length in 2002. These geometries are approximately 10 times smaller than what's used in today's most advanced production devices. So we already know that transistors will continue to work at geometries an order of magnitude smaller than what's broadly manufacturable today, and we already know ways to make such transistors. The ability to make such small transistors in production volumes will undoubtedly follow because the economic incentives to make things smaller and cheaper remain.

The corollaries are dying
Just because Moore's law seems alive and well doesn't mean that everything is fine in the land of the ever-shrinking transistor. Two corollaries to Moore's law–often mistaken for the real thing by the mass-market press–are clearly dying. Those corollaries relate to device speed and power dissipation. For nearly 40 years, smaller transistors also meant faster transistors that ran on less power (at least individually). Those corollaries started to break down somewhere around the turn of the century.

Nowhere is this effect more apparent than in the 25-year fight for the world's fastest PC microprocessors. Intel and AMD have been locked in a PC-processor death match for more than 20 years and, for most of that time, processor clock frequency largely determined which company was “winning.” (Actually, Intel was always winning in terms of sales but the competition has wavered back and forth with respect to technology.)

In the early 1980s, Intel signed a cross-license agreement with AMD to manufacture x86 processors starting with the 8086 and 8088. Intel then introduced the 80286 processor at 12.5MHz. AMD, being the second source, sought a sales advantage over prime source Intel and found one by introducing a faster, 16MHz version of the 80286. Intel fought back with its 80386 chip, which ran at 33MHz. To slow its partner/competitor, Intel refused to hand over the design for the 80386 to AMD. This naturally led to a lawsuit.

Meanwhile, AMD fought back on the technological front by introducing a reverse-engineered 80386 running at 40MHz. This race went on for years. In 1997, AMD's K6 processor hit 266MHz and Intel countered by introducing the Pentium II processor running at 266MHz just three weeks later. Three years after that, AMD's Athlon processor was the first x86 processor to achieve a 1GHz clock rate.

Finally, Intel really got the message about clock frequencies. Clock rate was clearly king in the processor wars–at least, it was in customers' minds. As a result, Intel redesigned Pentium's internal microarchitecture to emphasize clock rate (though not necessarily real performance) by creating a very deep execution pipeline; the resulting Pentium 4 processor put Intel substantially ahead in the clock-rate war.

All of these clock-rate escalations relied on Moore's law scaling to achieve the added speed. Faster clock rates automatically accompanied smaller transistor designs through the 1970s, 1980s, and 1990s. However, this isn't true anymore. Intel and AMD are no longer trying to win these clock fights because that war is essentially over.

Additional Moore's law transistor scaling produces smaller transistors, so more of them fit on a chip. But these shrunken transistors don't necessarily run any faster, for a number of technical reasons, and they also don't run at lower power due to related factors. There are some additional processing tricks such as strained silicon and SoI (silicon on insulator) that can achieve higher clock speeds, but they no longer come as an automatic benefit of Moore's law.

There's been another casualty of the clock-rate war: power dissipation. The original 8088 microprocessor in the IBM PC ran at 4.77MHz and required no heat sink. The 80286, '386, and early '486 processors also ran without heat sinks. At about 100MHz, though, PC processors started needing heat sinks. Eventually, the heat sinks required their own integrated fans. Some high-end PCs now come equipped with active liquid-cooling systems for the processor. This isn't progress; fans add noise and have reliability issues of their own. These processor fans are essential, however, because it's become very difficult to extract the rapidly growing amount of waste heat from these processors as their clock rates have climbed.

Parallelism is the future
Consequently, Intel and AMD are moving the PC-processor battlefront from clock speed to parallelism: getting more work done per clock cycle. Multiple processors per chip is now the name of the game. Perhaps not coincidentally, the same week that the industry was celebrating the 40th anniversary of Moore's eponymous law, both Intel and AMD announced dual-core versions of their top-end PC processors. The companies have concluded that achieving faster performance by merely escalating the clock rate is a played-out tune. However, Moore's law is still delivering more transistors every year so Intel and AMD can put those numerous, smaller transistors to work by fabricating two processors on one semiconductor die.

None of this is new in the world of SoC (system-on-chip) design because SoC designers have never been able to avail themselves of the multi-gigahertz clock rates achieved by processor powerhouses Intel and AMD. SoC design teams don't have hundreds of engineers to hand-design the critical-path circuits, which is the price for achieving these extremely high clock rates. All SoC designers can, however, avail themselves of the millions of transistors per chip provided by Moore's law in the 21st century. As a result, many companies have been developing SoC designs with multiple processors for several years.

The ITRS Design Technical Working Group (ITRS Design TWG) recently met in Munich to discuss changes to the next official ITRS. Part of that discussion involved a forecast in the increase in the number of processing engines used in the average SoC from 2004 to 2016. The current forecast starts with an average of 18 “processing engines” in each SoC in 2004, jumping to 359 processing engines per SoC for 2016. (Today, Tensilica's customers incorporate an average of six processors per SoC, and one customer has developed a networking chip with 188 active processors and four spares.)

The ITRS Design TWG doesn't say exactly what those 359 processing engines will be doing in the year 2016, but it estimates that they'll consume about 30% of the SoC's die area–roughly the same amount of area (as a percentage of total area) consumed by last year's 18 processing engines. In another decade, SoC designers will clearly get a lot more processing power for the silicon expended, far more than can be expected from Moore's law scaling.
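The growth rate implied by those two data points is easy to check. A quick sketch of the arithmetic, using only the ITRS figures quoted above (18 engines in 2004, 359 in 2016):

```python
# Implied compound annual growth in processing engines per SoC,
# from the ITRS forecast quoted above: 18 engines (2004) -> 359 (2016).
start_engines, end_engines, years = 18, 359, 12
annual_growth = (end_engines / start_engines) ** (1 / years) - 1
print(f"~{annual_growth:.1%} more processing engines per year")
```

That works out to roughly 28% annual growth, or a doubling of on-chip processor counts about every two and a half years.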

Parallelism is clearly the path to performance in the 21st century. Exploiting parallelism adheres to and exploits the true Moore's law, which is still very much alive, and veers away from the dying corollary of escalating clock rate. Boosting parallelism, which is inherent in a large number of end applications, lowers the required clock rate and therefore lowers power dissipation. Given the thrust of processor and SoC design over the last decade, dropping the clock rate seems counterintuitive. Nevertheless, the physics demand it.
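The physics behind that counterintuitive conclusion is the standard first-order relation for CMOS dynamic power, P ≈ C·V²·f. A minimal sketch with illustrative numbers (not figures from any particular process):

```python
# First-order CMOS dynamic power: P ~ C * V^2 * f
# (switched capacitance, supply voltage, clock frequency).
def dynamic_power(c_farads, v_volts, f_hertz):
    return c_farads * v_volts**2 * f_hertz

# Illustrative comparison: one core at 2 GHz vs. two cores at 1 GHz each,
# for the same aggregate clock cycles per second.
single_core = dynamic_power(1e-9, 1.2, 2.0e9)
dual_core = 2 * dynamic_power(1e-9, 1.2, 1.0e9)

# At the same supply voltage the totals match. The real win is that a
# slower clock tolerates a lower supply voltage, and power falls with
# the *square* of V:
dual_core_lv = 2 * dynamic_power(1e-9, 0.9, 1.0e9)
```

At equal voltage the two configurations burn the same dynamic power, but dropping the supply from 1.2V to 0.9V cuts it by almost half, which is the essence of trading clock rate for parallelism.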

Moore's law continues to benefit all IC designers although its very handy corollaries seem to be dying out. With work and just a bit of luck, Moore's law will continue to benefit the industry for at least another decade. Even after 40 years, it looks like Moore's law still has a few birthdays left.

Steve Leibson is Tensilica's technology evangelist. In prior lives, he was chief editor for all the top design magazines, except this one. E-mail him at .

Endnotes:

  1. Moore, Gordon E., “Cramming more components onto integrated circuits,” Electronics, vol. 38, no. 8, pp. 114-117, April 19, 1965.
  2. Moore, Gordon E., “Progress in Digital Integrated Electronics,” Technical Digest 1975, International Electron Devices Meeting, IEEE, 1975, pp. 11-13.
  3. Kirk, B. and T. Daspit, “Making the design transition,” Transition to One Micron Technology, a reprint from Semiconductor International, 1984.
  4. Moore, Gordon E., “Lithography and the Future of Moore's Law,” Optical/Laser Microlithography VIII: Proceedings of the SPIE, vol. 2440, 1995, pp. 2-17.

Reader Response


Each challenge to Moore's Law seems to have fallen before the effects of Bob's Law: Once you invent machines that help you think, all bets are off.

– R L Watkins
United States

