
The Death of ASICs

The days of custom logic are numbered. The evolution and history of microprocessors point the way to the future.

Reader responses to my article “The Death of Hardware Engineering” in the March issue (p.53) were so numerous and strongly felt that I thought I'd poke my face into the hornets' nest again. This month it's about processors versus custom hardware. As before, I'm sure some of you will agree and some will disagree; either way, let me know what you think. So, without further ado …

It's a good time to be a programmer. Writing software is cheaper than creating hard-wired logic, it's easier and quicker to correct mistakes, and, best of all, microprocessor performance is improving faster than that of other silicon chips.

The very first microprocessors were created in the 1970s to replace hard-wired logic (in other words, they were embedded processors). In the 1980s and 1990s custom-designed gate arrays and ASICs took off as people discovered how to design their own chips. Now the pendulum is swinging back as engineers, investors, and managers rediscover the joy of programming.

John Bourgoin, the CEO of MIPS Technologies, opened his keynote speech at Embedded Processor Forum in April with the declaration that, “given the choice, designers will always choose the most flexible, least risky solution possible.” His conclusion was as predictable as a Dick Cheney cardiogram: general-purpose microprocessors are more flexible and less risky than custom ASIC hardware.

To bolster his argument, Bourgoin pointed out that the costs of ASIC development are rising, the performance of his company's microprocessors is improving, CPU prices are falling, and, somewhat tangentially, that the number of transistors on a silicon wafer is increasing.

Although chief executive Bourgoin deftly managed to compare areal density with pricing (apples to orangutans, really), he does have a point. Microprocessors are becoming more capable without becoming more expensive. On the other hand, custom chips are already more expensive to design than they were just a few years ago, and those costs are vectoring skyward. ASICs have their place, but it looks like they'll increasingly be a luxury of the well-heeled embedded developer.

Speed-ups

Silicon gets about 25% faster every year, thanks to the wizardry of the world's semiconductor manufacturers. This affects all types of chips equally, microprocessors and multiplexers alike. Transistor density also increases every year or so, meaning chip designers can cram more circuits onto a new chip and still have it run faster than last year's.

Microprocessors, though, are getting faster faster. They speed up about 60% per year, well above the average for their constituent circuits. That's because processors are getting “smarter” as well as faster. Performance per clock cycle picks up when new microprocessors get superscalar execution, branch prediction, media extensions, and other architectural improvements above and beyond the silicon speed-up.

This disparity means that CPU chips get faster at more than double the rate of more pedestrian silicon chips. As if Moore's Law wasn't generous enough, programmers enjoy a double helping of constant improvement. For embedded programmers, this means more horsepower to tackle tougher problems.
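To put rough numbers on that compounding, here's a minimal sketch in C. The 25% and 60% annual rates are the figures above; the five-year horizon is just an assumption for illustration.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Annual improvement rates cited above. */
    const double silicon_rate = 0.25;   /* generic silicon: ~25% per year */
    const double cpu_rate     = 0.60;   /* microprocessors: ~60% per year */
    const int years = 5;                /* assumed horizon, for illustration only */

    double silicon_gain = pow(1.0 + silicon_rate, years);  /* ~3.1x */
    double cpu_gain     = pow(1.0 + cpu_rate, years);      /* ~10.5x */

    printf("After %d years: generic silicon ~%.1fx, CPUs ~%.1fx\n",
           years, silicon_gain, cpu_gain);
    return 0;
}
```

After five years the garden-variety parts are roughly three times faster while the CPUs are roughly ten times faster, and the gap keeps widening every year.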

Cost equation

Paradoxically, the rise in transistor density that enabled us to design custom ASICs in the first place is now making the cost of such a design prohibitive. Any new chip today has to contain at least a few million transistors or it's too small to be worth manufacturing. Fabs and foundries scramble to stay on the leading edge, so anything less than the latest 0.18-micron processing carries a trailing-edge "tax" for using old technology. Depreciation on a semiconductor fab runs about $1.5 million every day, so overhead costs are breathtaking. Paying for the millions of transistors on your chip isn't the problem; it's paying for a time slot in that ultra-expensive fab that'll set you back.

As if that weren't enough, the photo masks used to create custom chips are also ultra-expensive. Each new mask set costs about $500,000 today, and the costs are growing, inexorably and inescapably, toward $1 million. Any change to the chip requires new masks, another half-million dollars, and another four months of everybody's time. It's not much of an exaggeration to say that designing an ASIC is cheap, but redesigning it is expensive.
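For a sense of scale, here's a back-of-the-envelope sketch. The mask-set price and four-month turnaround are the figures above; the respin count and the monthly engineering burn rate are assumptions made purely for illustration.

```c
#include <stdio.h>

int main(void)
{
    const double mask_set_cost   = 500000.0;  /* ~$500K per mask set (figure above) */
    const int    months_per_spin = 4;         /* turnaround cited above */
    const double monthly_burn    = 200000.0;  /* assumed engineering cost per month */
    const int    respins         = 2;         /* assumed number of re-spins */

    /* Each re-spin costs a new mask set plus months of engineering time. */
    double extra_cost = respins * (mask_set_cost + months_per_spin * monthly_burn);

    printf("%d re-spins add roughly $%.1f million and %d months to the schedule\n",
           respins, extra_cost / 1e6, respins * months_per_spin);
    return 0;
}
```

Under those assumptions, two re-spins add on the order of $2.6 million and eight months before a single chip ships.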

This puts an awfully big burden on the hardware designers. If the chip is flawed, it's all over and there's no cheap way to fix it. Depending on the type of malfunction, the chip might be completely useless (in which case it becomes a tie tack) or it might be partially useful but with some reduced functionality. So all things being equal, it's less risky to leave the hardware alone and change software wherever you can.

In risk-averse times, your company can simply stop buying microprocessors if customer demand goes limp. But if you've developed an ASIC, that cost is already sunk. Buying fewer ASICs won't save your employer very much money. You don't recall a Saturn V rocket that's already halfway to the moon in order to save on fuel expenses.

In his specialty newsletter, Dynamic Silicon, Nick Tredennick points out that the arrival of the first microprocessors vastly broadened the market for embedded systems because engineers didn't need as much hardware-design experience. The embedded-systems programmer was born, increasing the talent pool tenfold, in Tredennick's estimation.

This led to a virtuous cycle: the growing number of uses for microprocessors increased sales of chips, which lowered CPU prices, which led to more new kinds of embedded systems, and so on. The result is that CPU companies can spread the cost of developing a new microprocessor across millions of units sold to unrelated customers around the world. The tradeoff for programmers is the higher power consumption of a CPU and the need to settle for a generic processor rather than application-specific hardware.

The balance of power

The first microprocessors replaced state machines. They provided greater flexibility in return for reduced (power) efficiency. Microprocessors are inherently wasteful of power because a CPU is necessarily a superset of required features. Software merely switches parts of the CPU on and off over time as needed.

The power consumption of any chip is determined, more or less, by the number of transistors it contains. The more complex the chip, the more power it draws (unless it's a microprocessor). Strangely enough, adding more hardware to a CPU can reduce its power consumption.

Software consumes more power than hardware. That's because microprocessors have to fetch code, decode instructions, wiggle external buses, access internal registers, and generally blunder their way through every assignment. An MIT study showed that only 2% of a particular RISC microprocessor's energy went into adding two numbers together; the other 98% was microprocessor overhead. At the other extreme are special-purpose ASICs that execute no software at all and are therefore more power efficient.
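Here's a quick sketch of what that ratio implies for energy per useful operation. The 2% figure is the one cited above; the energy of the add itself is a placeholder assumption, not a measured value.

```c
#include <stdio.h>

int main(void)
{
    /* Placeholder energy for the add itself, in picojoules (assumed value). */
    const double add_energy_pj   = 1.0;
    /* Fraction of total CPU energy that goes into the add (the 2% cited above). */
    const double useful_fraction = 0.02;

    /* Total energy the processor burns to deliver that one add. */
    double total_pj = add_energy_pj / useful_fraction;

    printf("A %.1f pJ add costs about %.0f pJ on a CPU (%.0fx overhead);\n"
           "a hard-wired adder would pay only the %.1f pJ.\n",
           add_energy_pj, total_pj, 1.0 / useful_fraction, add_energy_pj);
    return 0;
}
```

Whatever the absolute numbers, a 2% useful fraction means the CPU burns roughly fifty times the energy of the operation it's actually performing.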

If we assume that code size is inversely proportional to CPU complexity, it follows that simpler CPUs need more code than complex CPUs (the familiar RISC-versus-CISC code-density argument). Code costs power. It's more efficient to have a complex CPU execute a short, dense instruction stream than to have a streamlined CPU churn through millions of simpler instructions. Another blow to the RISC daydream.

Reduce the risk

We all work on projects that get canceled sometimes, but the mortality rate for ASIC projects is high. Many hardware IP vendors plan their business assuming that half of their ASIC customers will never produce a single chip. ASIC projects get canceled because the market can change in the time it takes to go from defining a chip to holding working silicon. Changing the ASIC design is prohibitively expensive, as we have seen. Changing software, while not trivial, is almost always cheaper, easier, and quicker. Ideally, the marketing department could change a product on the morning before it ships. With software-defined products, that's nearly possible. With ASIC-based products, it's clearly not.

It's a good time to be a programmer. CPU performance outpaces the industry as a whole, and new CPU architectures that tempt and entice the adventuresome continually heave into view. Now you can even define your own instruction set using configurable processors. New startups push adaptive, or reconfigurable, hardware that changes the idea of programming to include temporal hardware management. It seems there are no obstacles hardware designers aren't willing to throw in our way.

Jim Turley is an analyst, columnist, and speaker specializing in microprocessors and semiconductor intellectual property. He is a past editor of Microprocessor Report and Embedded Processor Watch. For a good time, write to .
