Semiconductor companies will be conferring over the next few months, as part of the International Technology Roadmap for Semiconductors (ITRS) effort, to develop a process technology roadmap for future development in connected computing systems.
I hope they remember their past and take a second look at some of the ideas and concepts that were once considered but then thrown into the trash basket on the way to gigahertz-clock-rate, multimillion-transistor SoCs. They weren't bad ideas, just ideas that didn't fit the needs of the marketplace at the time. But the times — and the market dynamics — have changed.
One of those discarded ideas was to move beyond binary logic to circuits based on multi-valued logic in which the information density and processing efficiency of a circuit could theoretically be increased substantially without any further expensive “improvements” to the underlying fabrication technology.
As part of the ITRS effort, representatives met at Semicon a few weeks ago to consider changes such as adding wireless communications technologies to the 2003 roadmap. Among the major alterations likely to be approved are the addition of technologies for wireless communications, including silicon-germanium, gallium arsenide and indium phosphide, as ways in which wireless devices can be pushed to clock rates approaching 100 GHz. These changes are important to the future of wireless technology if it is to achieve data transmission bandwidths equivalent to or exceeding those on the wired Internet.
I find the references to silicon-germanium one of the most interesting and tantalizing aspects of this roadmap, considering how compatible it is with the use of multi-valued logic. Beyond the improvements in performance that have garnered the most attention, SiGe's greatest impact may lie in the fact that like GaAs, GaAsP, InP and other exotic combinations, transistors built with it are heterojunction devices which are inherently capable of producing multiple threshold levels.
Theoretically, SiGe could be used to build devices that move beyond simple 0/1, on-off binary logic. Such structures can reliably generate multiple, easily discriminated signal levels. They could be used to build base-3, base-4, and higher logic functions, effectively increasing a device's information density without further shrinking the transistor structure. This option is something that should at least be considered as we move ultimately into the sub-nanometer range, where we already face problems with the cost of fabrication equipment and, more fundamentally, with quantum uncertainties.
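The information-density claim is easy to quantify. As a minimal sketch (illustrative only, not a model of any particular device), each ternary digit, or "trit," carries log2(3) bits, so the same number of signal lines encodes exponentially more states:

```python
import math

# Number of distinct states representable by `digits` signal lines,
# each able to take one of `base` discriminable levels.
def states(base: int, digits: int) -> int:
    return base ** digits

bits_per_trit = math.log2(3)  # ~1.585 bits of information per trit
print(f"bits per trit: {bits_per_trit:.3f}")
print(f"8 binary lines:  {states(2, 8)} states")   # 256
print(f"8 ternary lines: {states(3, 8)} states")   # 6561
```

Eight ternary lines distinguish 6,561 states versus 256 for binary — more than a 25x gain in representable states from the same wire count.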
Way back when the industry was moving to two-to-four-micron geometries, most of the mainstream semiconductor companies started investigating ternary and quaternary logic circuitry because they felt that the shift in manufacturing equipment required to move to smaller geometries would be too expensive.
Those who were building mainstream 16-bit microcontrollers and microprocessors were especially interested after doing a few quick calculations on the back of a napkin. For example, according to my calculations a 16-bit microcomputer with on-board memory has access to no more than 2^16 bits of directly accessible memory (about 65k bits), while that same microcomputer with memory based on ternary logic would have direct access to 3^16, or about 43 Mbits, of memory.
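That back-of-the-napkin comparison can be checked in a couple of lines:

```python
# Address-space comparison for a 16-digit address:
# binary reaches 2**16 locations, ternary reaches 3**16.
binary_space = 2 ** 16   # 65,536
ternary_space = 3 ** 16  # 43,046,721 (~43 M)

print(binary_space, ternary_space)
print(f"ternary reach is {ternary_space / binary_space:.0f}x larger")
```

The same 16-digit address reaches roughly 657 times as much memory in base 3.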
But there were stumbling blocks in the way. For one thing, because they were working in a homogeneous material — silicon — they had to come up with all sorts of circuit tricks to express multi-valued logic using binary gate structures. But work-arounds were found, and Intel, Fairchild, National Semiconductor, Signetics (now Philips), and Motorola all had products on the market that had ternary or quaternary logic hidden inside.
At about the same time, the industry was looking for higher speed alternatives to silicon transistors to boost clock rates. As they looked at various combinations of gallium arsenide and other compounds, they realized that what gave such non-silicon transistors their speed was their heterojunction nature. It was then that early work began on trying to create some silicon-friendly hetero-junction structure that could achieve comparable performance.
But researchers at IBM, Motorola, TI and several universities also noticed that heterojunction devices, silicon-based or not, had another interesting feature — they were inherently multi-threshold, able to discriminate and generate multiple signal levels. This ability overcame several problems with earlier attempts to use binary gate structures.
Previously, to get around the inability to reliably generate and detect multiple thresholds in binary silicon gates, it was necessary to come up with silicon-greedy logic structures that could express multi-valued logic. But those designs were problematic because the industry at the time was good only at discriminating two logic levels, and discriminating three or four would have required a lot of work.
Now, not only do we have SiGe transistor structures that are inherently friendly to multi-valued logic, the industry has become quite good at generating and discriminating multiple voltage and current thresholds.
But there is a third problem, not in the silicon itself, but in engineers' willingness to move beyond the binary logic way of thinking that has become second nature to them. The way that Intel and other companies got beyond that problem in the past was to keep the engineer away from the ternary and quaternary logic, incorporating interface circuits that converted signals into and out of the multi-threshold core. That was very costly in silicon area at the time, which at two-to-four microns the industry could ill afford. Now, at the multimillion-gate, sub-nanometer level, that extra silicon need not be so costly.
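The interface idea — binary in, multi-valued core, binary out — can be modeled in software. This is a hypothetical sketch of the conversion step only (the function names and the round-trip framing are mine, not any vendor's actual circuit):

```python
# Toy model of the binary/ternary interface: values enter in binary,
# are re-expressed as trits for the multi-valued core, and are
# converted back to binary on the way out.
def to_trits(value: int, width: int) -> list[int]:
    """Little-endian base-3 digits of a non-negative integer."""
    trits = []
    for _ in range(width):
        value, t = divmod(value, 3)
        trits.append(t)
    return trits

def from_trits(trits: list[int]) -> int:
    """Inverse of to_trits: rebuild the integer from its trits."""
    value = 0
    for t in reversed(trits):
        value = value * 3 + t
    return value

x = 200
trits = to_trits(x, 5)          # 5 trits cover values up to 3**5 - 1 = 242
assert from_trits(trits) == x   # round trip through the "core" is lossless
```

In hardware the same wrapping cost silicon area; in software it is a pair of radix conversions, which is why hiding the multi-valued core from the designer is plausible today.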
But will engineers be willing to move away from the safe world of binary logic, even if the multi-valued logic is well hidden? Despite all of the theoretical work on multi-valued logic that is available, we may be faced with a situation similar to that of the missionary in an apocryphal story I heard in an undergrad class in anthropology.
It seems that after a year or so of trying to teach natives deep in the jungle how to count using the decimal system, the missionary was meeting with total failure. His pupils either did not get it or did not want to get it. When he asked an anthropologist in the village why, the response was that this particular tribe had a numbering system that consisted of zero, one and many. The tribe members in their ordinary life up to then had no reason to consider a numbering system that offered them more choices. They were not aware of all the complications of modern life that would require a more sophisticated numbering system.
Unlike these mythical aborigines, I think that the economics of semiconductor manufacturing now is forcing us to move beyond zero and one. We are already considering other old ideas previously thrown into the wastebasket of history, such as silicon-on-insulator, asynchronous logic, ferro-electric nonvolatile RAM and even SiGe for its bandwidth. Shouldn't we also take another look at multi-valued logic?
Binary logic is like driving through Manhattan and only being able to drive straight and make right turns. Ternary logic is like being able to drive straight and turn both left and right. Not only can you potentially get somewhere faster; in a one-way grid, you can now reach places you couldn't reach before.
We will have to work with 729 commutative two-input functions in ternary logic, as opposed to 8 in binary logic. So more is probably not always an advantage.
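The 729-versus-8 comparison follows from a standard counting argument, sketched below: a commutative two-input truth table is determined by its unordered input pairs, each of which may take any of the n output values.

```python
# Number of two-input commutative functions over an n-valued logic.
# There are n*(n+1)/2 unordered input pairs, and each pair may map
# to any of the n output values: n ** (n*(n+1)/2) functions total.
def commutative_functions(n: int) -> int:
    return n ** (n * (n + 1) // 2)

print(commutative_functions(2))  # 8   (binary)
print(commutative_functions(3))  # 729 (ternary)
```

The design space grows steeply with the radix, which supports the commenter's point that more choices are not automatically a win.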
One of the handicaps of ternary and multi-valued logic is that mostly arithmetic examples are used to demonstrate the benefits. Though these are of course of great importance, many engineers will never design arithmetic circuits.
Personally, I would point to ternary and multi-valued signal scrambling as an area for profitable applications. If I had the time, I would design a game like Towers of Hanoi or Nim in ternary or quaternary logic and compare it to a binary design.
If the benefits are attractive, the complexity of ternary logic will not be a deciding factor in its application. Complex functions, Fourier analysis, Laplace transforms, state-space equations and so on have all been claimed to be too difficult for the working EE.
Asynchronous vs. synchronous is analogous to 3-dimensional design vs. 2-D, or digital vs. analog.
Synchronous methodologies are simply a way to break real-world asynchronous problems down into smaller, easily digestible pieces that can be handed out to many less talented engineers, who require tools to assist them — or, rather, so that the general engineering community can understand and use them easily.
Asynchronous design probably requires a very different mindset, which few people can master. Synchronous design will never be fully optimized; asynchronous design will never be easily mastered by most engineers. That leads me to predict that the future is a mixed, hybrid one, especially for large designs. A fully asynchronous approach would probably be more feasible for small-scale designs, which on the other hand don't really require the benefits of such a design, and which must still work with the synchronous-dominated digital world.