The semiconductor revolution


In part 3 of Jack's series honoring the 40th anniversary of the microprocessor, the minis create a new niche—the embedded system.

We're on track, by 2010, for 30-gigahertz devices, 10 nanometers or less, delivering a tera-instruction of performance. —Pat Gelsinger, Intel, 2002

We all know how in 1947 Shockley, Bardeen, and Brattain invented the transistor, ushering in the age of semiconductors. But that common knowledge is wrong. Julius Lilienfeld patented devices that resembled field-effect transistors (although they were based on metals rather than modern semiconductors) in the 1920s and 30s (he also patented the electrolytic capacitor). Indeed, the United States Patent and Trademark Office rejected early patent applications from the Bell Labs boys, citing Lilienfeld's work as prior art.

Semiconductors predated Shockley et al. by nearly a century. Karl Ferdinand Braun found in 1874 that some crystals conducted current in only one direction. Indian scientist Jagadish Chandra Bose used crystals to detect radio waves as early as 1894, and Greenleaf Whittier Pickard developed the cat's whisker diode. Pickard examined 30,000 different materials in his quest to find the best detector, rusty scissors included. Like thousands of others, as a kid I built an AM radio using a galena cat's whisker and a coil wound on a Quaker Oats box, though by then everyone was using modern diodes.

[Figure: a cat's-whisker detector. For image rights, see Wikipedia's entry on "cat's-whisker detector."]

As I noted last month, RADAR research during World War II made systems that used huge numbers of vacuum tubes both possible and common. But that work also led to practical silicon and germanium diodes. These mass-produced elements had a chunk of the semiconducting material that contacted a tungsten whisker, all encased in a small cylindrical cartridge. At assembly time workers tweaked a screw to adjust the contact between the silicon or germanium and the whisker. With part numbers like 1N21, these were employed in the RADAR sets built by MIT's Rad Lab and other vendors. Volume 15 of MIT's Radiation Laboratory Series, titled “Crystal Rectifiers,” shows that quite a bit was understood about the physics of semiconductors during World War II. The title of volume 27 tells a lot about the state of the art of computers: “Computing Mechanisms and Linkages.”

Early tube computers used crystal diodes. Lots of diodes: the ENIAC had 7,200; Whirlwind, twice that number. I have not been able to find out what types of diodes were used or the nature of the circuits, but imagine something analogous to 1960s-era diode-transistor logic.

Happy Birthday, 4004
Jack Ganssle's series in honor of the 40th anniversary of the 4004 microprocessor.

Part 1: The microprocessor at 40–The birth of electronics
The 4004 spawned the age of ubiquitous and cheap computing.

Part 2: From light bulbs to computers  
From Patent 307,031 to a computer laden with 100,000 vacuum tubes, these milestones in the first 70 years of electronics made the MCU possible.

Part 3: The semiconductor revolution
In part 3 of Jack's series honoring the 40th anniversary of the microprocessor, the minis create a new niche—the embedded system.

While engineers were building tube-based computers, a team led by William Shockley at Bell Labs researched semiconductors. John Bardeen and Walter Brattain created the point-contact transistor in 1947, but did not include Shockley's name on the patent application. Shockley, who was as irascible as he was brilliant, went off in a huff and invented the junction transistor. One wonders what wonder he would have invented had he been really slighted.

Point-contact versions did go into production. Some early parts had a hole in the case; one would insert a tool to adjust the pressure of the wire on the germanium. So it wasn't long before the much more robust junction transistor became the dominant force in electronics. By 1953 over a million were made; four years later production reached 29 million. That's roughly the transistor count of a single Pentium III in 2000.

The first commercial part was probably the CK703, which became available in 1950 for $20 each, or $188 in today's dollars.

Meanwhile tube-based computers were getting bigger and hotter and were sucking ever more juice. The same University of Manchester that built the Baby and the Mark 1 in 1948 and 1949 got a prototype transistorized machine going in 1953, and the full-blown model running two years later. With a 48-bit word (some sources say 44), the prototype used only 92 transistors and 550 diodes! Even the registers were stored on drum memory, but it's still hard to imagine building a machine with so few active elements. The follow-on version used just 200 transistors and 1,300 diodes, still no mean feat. (Both machines did employ tubes in the clock circuit.) Tube machines were actually more reliable: this computer ran only about an hour and a half between failures. Though deadly slow, it demonstrated a market-changing feature: it needed just 150 watts of power. Compare that to the 25 kW consumed by the Mark 1. IBM built an experimental transistorized version of its 604 tube computer in 1954; the semiconductor version ate just 5% of the power needed by its thermionic brother. (The IBM 604 was more calculator than computer.)

The first completely transistorized commercial computer was the . . . uh . . . well, a lot of machines vie for the credit and the history is a bit murky. Certainly by the mid-1950s many were available. Last month I claimed the Whirlwind was important at least because it spawned the SAGE machines. Whirlwind also inspired MIT's first transistorized computer, the 1956 TX-0, which had Whirlwind's 18-bit word. Ken Olsen, one of DEC's founders, was responsible for the TX-0's circuit design. DEC's first computer, the PDP-1, was largely a TX-0 in a prettier box. Throughout the 1960s DEC built a number of different machines with the same 18-bit word.

The TX-0 was a fully parallel machine in an era when serial was common. (A serial computer processed a single bit at a time through the arithmetic logic unit, or ALU.) Its 3,600 transistors, at $200 a pop, cost about a megabuck. All were enclosed in plug-in bottles, just like tubes, because the developers feared a high failure rate. But by 1974, after 49,000 hours of operation, fewer than a dozen had failed.
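The serial-versus-parallel distinction is easy to sketch in code. A bit-serial machine had one full adder and a single carry flip-flop, clocking operands through a bit at a time; a parallel machine like the TX-0 had adder hardware at every bit position. Here's a minimal sketch (my own illustration, not TX-0 circuitry):

```python
def serial_add(a: int, b: int, width: int = 18) -> int:
    """Add two words one bit per 'clock', the way a bit-serial ALU does.

    A lone full adder plus one carry flip-flop processes the operands
    least-significant bit first. A parallel machine computes every bit
    position in a single step instead.
    """
    carry = 0
    result = 0
    for i in range(width):          # one loop iteration = one clock tick
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        s = bit_a ^ bit_b ^ carry   # sum bit from the single full adder
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))
        result |= s << i
    return result & ((1 << width) - 1)

print(serial_add(5, 7))             # 12
```

The 18-bit width matches the TX-0's word size; a serial machine needed eighteen clock ticks for what the TX-0 did in one.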

The official biography of the machine (RLE Technical Report No. 627) contains tantalizing hints that the TX-0 may have had 100 vacuum tubes, and the 150-volt power supplies it describes certainly align with vacuum-tube technology.

IBM's first transistorized computer was the 7070, introduced in 1958. This was the beginning of the company's important 7000 series, which dominated mainframes for a time. A variety of models were sold, with the 7094 for a time holding the "fastest computer in the world" title. The 7094 used over 50,000 transistors. Operators would use another, smaller, computer to load a magnetic tape with many programs from punched cards, and then mount the tape on the 7094. We had one of these machines my first year in college. Operating systems didn't offer much in the way of security, and we learned to read the input tape and search for files with grades.

The largest 7000-series machine was the 7030 “Stretch,” a $100 million (in today's dollars) supercomputer that wasn't super enough. It missed its performance goals by a factor of three, and was soon withdrawn from production. Only nine were built. The machine had a staggering 169,000 transistors on 22,000 individual printed circuit boards. Interestingly, in a paper named “The Engineering Design of the Stretch Computer,” the word “millimicroseconds” is used in place of “nanoseconds.”

While IBM cranked out their computing behemoths, small machines gained in popularity. Librascope's $16k ($118k today) LGP-21 had just 460 transistors and 300 diodes, and came out in 1963, the same year as DEC's $27k PDP-5. Two years later DEC produced the first minicomputer, the PDP-8, which was wildly successful, eventually selling some 300,000 units in many different models. Early units were assembled from hundreds of DEC's “flip chips,” small PCBs that used diode-transistor logic with discrete transistors. A typical flip chip implemented three 2-input NAND gates. Later PDP-8s used integrated circuits; the entire CPU was eventually implemented on a single integrated circuit.

But whoa! Time to go back a little. Just think of the cost and complexity of the Stretch. Can you imagine wiring up 169,000 transistors? Thankfully Jack Kilby and Robert Noyce independently invented the IC in 1958-59. ICs were so superior to individual transistors that they soon formed the basis of most commercial computers.

Actually, that last clause is not correct. ICs were hard to get. The nation was going to the moon, and by 1963 the Apollo Guidance Computer used 60% of all of the ICs produced in the US, with per-unit costs ranging from $12 to $77 ($88 to $570 today) depending on the quantity ordered. One source claims that the Apollo and Minuteman programs together consumed 95% of domestic IC production.

Every source I've found claims that all of the ICs in the Apollo computer were identical: 2,800 dual three-input NOR gates, using three transistors per gate. But the schematics show two kinds of NOR gates, “regular” versions and “expander” gates.

The market for computers remained relatively small until the PDP-8 brought prices to a more reasonable level, but the match of minis and ICs caused costs to plummet. By the late 1960s everyone was building computers. Xerox. Raytheon (their 704 was possibly the ugliest computer ever built). Interdata. Multidata. Computer Automation. General Automation. Varian. SDS. A complete list would fill a page. Minis created a new niche: the embedded system, though that name didn't surface for many years. Labs found that a small machine was perfect for controlling instrumentation, and you'd often find a rack with a built-in mini that was part of an experimenter's equipment.

The PDP-8/E was typical. Introduced in 1970, this 12-bit machine cost $6,500 ($38k today). Instead of hundreds of flip chips, the machine used a few large PCBs with gobs of ICs to cut down on interconnects. Circuit density was just awful compared with today's. The technology of the time was small-scale ICs containing a couple of flip-flops or a few gates, plus medium-scale integration. An example of the latter is the 74181 ALU, which performed simple math and logic on a pair of four-bit operands. Amazingly, TI still sells the military version of this part. It was used in many minicomputers, such as Data General's Nova line and DEC's seminal PDP-11.
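Because the 74181 handled only four bits, a 16-bit mini chained four of them through their carry lines. A minimal sketch of the idea, with addition standing in for the 74181's full set of mode/select functions (the function names are mine, not the part's):

```python
def alu_slice_add(a4: int, b4: int, carry_in: int):
    """One 4-bit adder slice: returns (4-bit sum, carry out).

    Stands in for a single 74181 doing an add; the real part also
    offered subtraction and 16 logic functions via its select pins.
    """
    total = (a4 & 0xF) + (b4 & 0xF) + carry_in
    return total & 0xF, total >> 4

def add16(a: int, b: int) -> int:
    """Chain four slices, the carry rippling from one package to the
    next, as a 16-bit minicomputer's ALU did with four 74181s."""
    result, carry = 0, 0
    for nibble in range(4):
        s, carry = alu_slice_add((a >> 4 * nibble) & 0xF,
                                 (b >> 4 * nibble) & 0xF, carry)
        result |= s << (4 * nibble)
    return result

print(hex(add16(0x1234, 0x0FFF)))   # 0x2233
```

The ripple carry was the speed bottleneck, which is why designers often paired 74181s with the 74182 look-ahead carry generator.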

The PDP-11 debuted in 1970 for about $11k with 4k words of core memory. Those who wanted a hard disk shelled out more: a 256-kiloword disk with controller ran an extra $14k ($82k today). Today's $100 terabyte drive would have cost the best part of $100 million.

Experienced programmers were immediately smitten with the PDP-11's rich set of addressing modes and completely orthogonal instruction set. Most prior, and too many subsequent, instruction set architectures were constrained by the costs and complexity of the hardware, and were awkward and full of special cases. A decade later IBM incensed many by selecting the 8088, whose instruction set was a mess, over the orthogonal 68000 which in many ways imitated the PDP-11. Around 1990 I traded a case of beer for a PDP-11/70, but eventually was unable to even give it away.

Minicomputers were used in embedded systems even into the 1980s. We put a PDP-11 in a steel mill in 1983. It was sealed in an explosion-proof cabinet and interacted with Z80 processors. The installers had, for reasons unknown, left a hole in the top of the cabinet. A window in the steel door let operators see the machine's controls and displays. I got a panicked 3 a.m. call one morning: someone had cut a water line in the ceiling. Not only were the computer's lights showing through the window—so was the water level. All of the electronics were submerged. I immediately told them the warranty was void, but over the course of weeks they dried out the boards and got it working again.

I mentioned Data General: they were probably the second most successful mini vendor. Their Nova was a 16-bit design introduced a year before the PDP-11, and it was a pretty typical machine in that the instruction set was designed to keep the hardware costs down. A bare-bones unit with no memory ran about $4k—lots less than DEC's offerings. In fact, early versions used a single 74181 ALU with data fed through it a nibble at a time. The circuit boards were 15″ × 15″, just enormous, populated with a sea of mostly 14- and 16-pin DIP packages. The boards were typically two layers and often had hand-strung wires where the layout people couldn't get a track across the board. The Nova was peculiar in that it could address only 32 KB: bit 15 of a word, if set, meant the data was an indirect address (in modern parlance, a pointer). It was possible to cause the thing to indirect forever.
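That bit-15 scheme is easy to model. This sketch (my own, with invented memory contents for illustration) chases indirect words until it finds one with bit 15 clear, and shows how a word pointing at itself would indirect forever:

```python
INDIRECT = 0x8000               # bit 15 set: "this word is a pointer"

def resolve(memory, addr, max_hops=100):
    """Follow Nova-style indirect words until bit 15 is clear.

    Each hop fetches a word; if its top bit is set, the low 15 bits
    name another address to chase. A self-referential word (or any
    cycle) would make the real hardware loop forever, hence the cap.
    """
    for _ in range(max_hops):
        word = memory[addr & 0x7FFF]
        if not (word & INDIRECT):
            return word          # a plain data word: we're done
        addr = word & 0x7FFF     # strip bit 15, chase the pointer
    raise RuntimeError("indirection loop (the Nova would hang here)")

mem = {0o100: INDIRECT | 0o200,  # location 100 points at 200
       0o200: 0o4242,            # plain data
       0o300: INDIRECT | 0o300}  # points at itself: indirects forever
print(resolve(mem, 0o100))       # 2210 (0o4242)
```

The hardware, of course, had no hop limit: fetching through location 0o300 here would simply hang the machine.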

Before minis, few computers had a production run of even 100 (IBM's 360 was a notable exception). Some minicomputers, though, were manufactured in the tens of thousands. Those quantities would look laughable when the microprocessor started the modern era of electronics.

Jack Ganssle is a lecturer and consultant specializing in embedded systems development. He has been a columnist with Embedded Systems Design for over 20 years.
