The transistor: sixty years old and still switching

Sixty years ago this month, scientists at Bell Labs demonstrated the most important invention of the 20th century: the first real transistor.

It's hard to say when the electronics age started, but William Sturgeon's 1825 development of the electromagnet planted the seeds that led to Joseph Henry's crude telegraph in 1830, the first electrical system used to communicate over a long distance (all of a mile). Just 14 years later, Samuel Morse sent a message by telegraph over a 40-mile link he had strung between Washington DC and Baltimore.

Considering the primitive nature of telegraphy at the time, it's astonishing just how quickly the demand grew. By 1851 Western Union was in business, and in the same decade Cyrus Field had connected the Old and New Worlds via a fragile cable that failed a mere three weeks after the first message was sent. But later attempts succeeded. Instantaneous transatlantic communication quickly lost its novelty.

Although Alexander Graham Bell's 1876 invention of the telephone is universally lauded today, it was a less-than-practical device till Thomas Edison came up with the carbon microphone two years later. The speaker's voice compressed a pack of carbon granules, changing the circuit's resistance and thus modulating the current sent to the receiver.
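In modern terms the effect is easy to state (this is my simplification, not anything from the period: assume a constant supply voltage V, a quiescent resistance R_0, and a small resistance swing proportional to the sound pressure s(t)):

    \[ i(t) = \frac{V}{R_0 + \Delta R\, s(t)} \;\approx\; \frac{V}{R_0}\Bigl(1 - \frac{\Delta R}{R_0}\, s(t)\Bigr) \]

For small swings the line current is a nearly linear copy of the voice, which is all an amplifier-free telephone needed.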

A number of inventors soon came up with the idea of wireless transmission, codified by Guglielmo Marconi's 1896 patent and subsequent demonstrations. Like the telephone and telegraph, early radios used no CPUs, transistors, or vacuum tubes. Marconi, drawing on the work of others, particularly Nikola Tesla, used a high voltage and spark gap to induce electromagnetic waves into a coil and an antenna. The signals, impossibly noisy by today's standards, radiated all over the spectrum... but they worked. In fact, Titanic's famous SOS was broadcast using a 5 kW spark-gap set manufactured by the Marconi Wireless Telegraph Company.

The circuits were electrical, not electronic.

Telephone signals, though, degraded quickly over distance, while radio remained crude and of limited range. The world desperately needed devices that could control the flow of the newly discovered electron. About this time, Ambrose Fleming realized that the strange flow of electricity in a vacuum that Edison had stumbled on could rectify an alternating current, and rectification has the happy side benefit of detecting radio signals. He invented the first simple vacuum tube diode. But it didn't find much commercial success, due to high costs and the current needed by the filament.
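To see why rectification amounts to detection, here's a minimal numerical sketch (all values are illustrative, mine rather than anything historical): an ideal diode chops off half of an amplitude-modulated carrier, and a simple low-pass filter then recovers the audio envelope.

    import numpy as np

    # An audio tone amplitude-modulates a carrier (illustrative values).
    fs = 10_000_000                                 # sample rate, Hz
    t = np.arange(0, 0.005, 1 / fs)                 # 5 ms of signal
    audio = 0.5 * np.sin(2 * np.pi * 1_000 * t)     # 1 kHz "voice"
    am = (1 + audio) * np.sin(2 * np.pi * 1_000_000 * t)  # 1 MHz carrier

    # An ideal diode passes only one half of the waveform...
    rectified = np.maximum(am, 0.0)

    # ...and a crude low-pass filter (a moving average spanning many
    # carrier cycles but a tiny fraction of the audio period) strips
    # the carrier, leaving a copy of the audio envelope.
    kernel = np.ones(200) / 200                     # 20 us window
    recovered = np.convolve(rectified, kernel, mode="same")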

In the first decade of the new century, Lee de Forest inserted a grid in the tube between the anode and cathode. With this new control element, a circuit could amplify, oscillate, and switch. Those are the basic operations of any bit of electronics. With the tube, engineers learned they could create radios of fantastic sensitivity, send voices over tens of thousands of miles of cable, and switch ones and zeroes in microseconds. During the four years of World War I, Western Electric alone produced a half million tubes for the U.S. Army. By 1918, over a million a year were being made in the U.S., more than fifty times the pre-war numbers.

Electronics was born.

Electronics is defined as “the science dealing with the development and application of devices and systems involving the flow of electrons in a vacuum, in gaseous media, and in semiconductors,” and the word came into being at nearly the same time the tube was created. But that's a lousy definition. I think the difference between electrical and electronic circuits is that the latter use “active” elements, components that rectify, switch, or amplify. The very first active devices may have been cat's-whisker crystals, a bit of springy wire touching a raw hunk of galena to form a primitive diode. I can't find much about their origins, but it seems these crystals first appeared shortly before Fleming did his pioneering vacuum tube research. It's rather ironic that the first active element, which predated the tube, was a semiconductor, yet nearly another half century passed before semiconductors were “discovered.”

Radios originally sported just a few tubes but soon high-end units used dozens. In the late 1960s, I had a military-surplus 1940-era RBC radio receiver that had 19 tubes. Reputedly it cost $2,400 in 1940 (over $33k today). The $600 toilet seat is nothing new.
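That inflation adjustment is just a ratio of consumer price indices; a quick sanity check (the index values below are approximate, my own figures rather than the article's):

    # Rough CPI-based inflation check; index values are approximate.
    cpi_1940, cpi_2007 = 14.0, 207.0          # U.S. CPI-U, roughly
    price_1940 = 2400.0
    print(price_1940 * cpi_2007 / cpi_1940)   # ~$35,000, so "over $33k" holds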

Increasing capability led then, as it still does today, to ever-escalating cries for more features, speed, and functionality. The development of radar in World War II created heavier demands for active electronics; some sets used hundreds of tubes. Perhaps the crowning achievement of vacuum-tube technology was the ENIAC in 1946, which employed some 18,000 of them. The machine failed every two days. Clearly, the advent of digital technology had pushed tubes to their very limits. A new sort of active element was needed, something that produced less waste heat, used vastly fewer watts, and was reliable.

Serendipitously, the very next year Walter Brattain and John Bardeen (who, with William Shockley, won the 1956 Nobel Prize for this and related semiconductor work) invented the transistor. Though some claim this was the first “practical” semiconductor device, the Bell Labs scientists had actually constructed a point-contact transistor, a difficult-to-manufacture design that never saw widespread use and has long since been abandoned.

(A photo of the first transistor is at www.porticus.org/bell/belllabs_transistor.html.)

Around 1950 (sources vary), Raytheon produced their CK703, the first commercially available transistor. At $18 each ($147 in today's inflated dollars), these simply weren't competitive with vacuum tubes, which typically cost around $0.75 apiece at the time. Though point-contact transistors were tantalizingly close to an ideal active element, something better was needed.

Shockley had continued his semiconductor work, and in 1948 patented the modern junction transistor. Three years later, Bell Labs demonstrated part number M1752 (photos at http://semiconductormuseum.com/PhotoGallery/PhotoGallery_M1752.htm), though it was apparently produced only in prototype quantities.

The modern transistor was born. But it didn't immediately revolutionize the electronics industry, which continued its love affair with tubes. It wasn't till 1956 that Japan's ETL Mark 3, probably the first transistorized computer, appeared, but it used 130 point-contact transistors and wasn't a practical, saleable unit. The following year IBM started selling their 608 machine, which used 3,000 germanium transistors; it was the first commercial transistorized computer. The 608 used 90% less power than a comparable machine built with tubes. With a 100 kHz clock, 9 instructions, and an 11 ms average multiplication time for two 9-digit BCD numbers, it had 40 words of core memory and weighed 2,400 pounds.
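For the curious, here's a toy sketch (my own illustration, certainly not IBM's circuitry) of what multiplying two 9-digit BCD numbers involves: plain schoolbook long multiplication, one decimal digit at a time, with base-10 carries. It's easy to see why the 608 needed milliseconds.

    def bcd_digits(n, width=9):
        # Decimal number -> list of digits, least significant first.
        return [(n // 10**i) % 10 for i in range(width)]

    def bcd_multiply(a, b, width=9):
        # Long multiplication on decimal digits, the way a decimal
        # machine works: digit-by-digit products with base-10 carries.
        x, y = bcd_digits(a, width), bcd_digits(b, width)
        out = [0] * (2 * width)
        for i, xd in enumerate(x):
            carry = 0
            for j, yd in enumerate(y):
                s = out[i + j] + xd * yd + carry
                out[i + j] = s % 10
                carry = s // 10
            out[i + width] += carry
        return sum(d * 10**k for k, d in enumerate(out))

    assert bcd_multiply(123456789, 987654321) == 123456789 * 987654321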

The telephone industry's demand for amplifiers had accelerated the development of vacuum tubes, and the industry unsurprisingly snapped up semiconductor technology as well. As early as 1952, Bell Telephone installed the first transistorized central office equipment in New Jersey, again using point-contact transistors.

Ma Bell was founded by Alexander Graham Bell, of course, who began as a teacher of the deaf and spent much of his career in service to the hearing impaired. So, not surprisingly, the Bell Corporation waived all patent royalties for the very first transistorized consumer product: a hearing aid, around 1953.

Old-timers probably remember Raytheon's CK-722, one of the first commercial junction transistors. It was available in 1953 for about $7 each, a lot of money in those days. I remember buying bags of random transistors from Radio Shack in the '60s that often had CK-722s, probably factory seconds. I have no memory of the cost, but as this was all allowance money it couldn't have been more than a buck or two for a bag of parts.

By late 1955 the same part cost $0.99. Moore's Law didn't yet exist, but the inexorable collapse in the price of electronic components had begun, enabled entirely by the new semiconductor technology.

Regency Electronics did produce the first commercial transistor radio, the TR-1, as early as 1954. (To see videos of this four-transistor radio being assembled, check out http://people.msoe.edu/~reyer/regency/index5.html.) TI, looking for a market for their new transistors, had approached a number of domestic radio manufacturers but was turned down by all but Regency. A contemporary TI press release about the TR-1 calls the components “npn grown junction, germanium triodes.” A triode was, and is, a three-element vacuum tube.

By the early 1960s, consumers were infatuated with miniature radios (half of the 10 million units sold in 1959 were transistorized). Marketers, then as now anxious to differentiate their products, started using transistor counts to sell product. At least one vendor managed to build a radio with just two transistors (schematic here: www.transistor.org/FAQ/two-transistor.html), and rarely were more than 8 actually used, yet often as many as 16 were soldered onto the board, most of them, of course, unconnected. That may be analogous to today's GB wars. How many iPod owners come close to filling their 40 GB drives?

Today, discrete transistors seem almost like anachronisms, although they're still widely used in many demanding applications. Costs range from nearly nothing to tens of dollars or more for certain specialized parts. An IC the size of that venerable CK-722 might have hundreds of millions of transistors, each of which costs the buyer a few microcents.
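The arithmetic behind "a few microcents" is straightforward (the chip price and transistor count below are illustrative, not any specific part):

    # Illustrative figures only: a $5 chip carrying 200 million transistors.
    chip_price_dollars = 5.00
    transistors = 200_000_000
    microcents_each = chip_price_dollars * 100 * 1_000_000 / transistors
    print(microcents_each)   # 2.5 microcents per transistor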

Ironically, some of the problems that plagued vacuum tubes and led to their near-demise now haunt transistorized products. In 1946, all of the computer capability in the world consumed a few hundred kilowatts. Today a single server farm sucks down many megawatts. According to http://blogs.business2.com/greenwombat/2007/02/photo_originall.html, in 2005 server farms worldwide needed the equivalent of 14 one-gigawatt power plants. Google's data center in The Dalles, Oregon reputedly has cooling towers four stories tall.

Transistors come in many varieties, the field-effect transistor (FET) being the most important. Invented around 1960 by John Atalla and Dawon Kahng (drawing on Shockley's earlier work), it was at first a novelty. RCA introduced a series of logic chips using FETs, but they were relegated to specialty, low-power applications because of their low speed. Everyone knew the technology would never replace the much more useful junction transistor.

Now, of course, FETs are the basis of the digital revolution. The speed problems were solved, and their extremely low power requirements made it possible to pack millions onto a single IC.

A three-tube radio didn't generate all that much heat, but group 18,000 tubes into a computer and the air conditioning becomes a significant problem. The same holds true for all kinds of transistors: a single IC packed with hundreds of millions of low-power FETs will thermally self-destruct without careful power management. So, ironically once again, vendors are grappling with technologies like multicore to get better MIPS-per-milliwatt ratios.
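The reasoning behind the multicore gambit falls out of the standard first-order model: dynamic power grows with C*V^2*f, while throughput scales only with f. A sketch with made-up numbers (none of these figures describe a real chip):

    # First-order CMOS model: dynamic power ~ C * V^2 * f, throughput ~ f.
    # Every number below is invented purely for illustration.
    def dynamic_power(c, v, f_mhz):
        return c * v**2 * f_mhz          # arbitrary but consistent units

    one_fast_core = dynamic_power(c=1.0, v=1.2, f_mhz=2000)
    four_slow_cores = 4 * dynamic_power(c=1.0, v=0.9, f_mhz=500)

    # Same nominal throughput (assuming perfectly parallel work), but
    # the four slower, lower-voltage cores dissipate (0.9/1.2)^2, about
    # 56%, of the single fast core's power.
    print(one_fast_core, four_slow_cores)   # 2880.0 vs. 1620.0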

At about the same time Morse was perfecting the telegraph, the first real electrical system, Rudolf Clausius codified the basic idea of the second law of thermodynamics, which has haunted the entire history of electronics. Multicore may or may not be a solution to MIPS/mW today, but put huge numbers of low-power CPUs on a single die and Clausius's law will surface yet again. I suspect that long before the transistor's 100th birthday entirely novel, low-entropy technologies will be invented. And those, too, will fall to inexorable thermal scaling problems.

Jack Ganssle is a lecturer and consultant specializing in embedded systems development issues.
