Microprocessors change the world
Before the microprocessor, it was absurd to consider adding a computer to a product; now, in general, only the quirky build anything electronic without embedded intelligence.
I have always wished that my computer would be as easy to use as my telephone. My wish has come true. I no longer know how to use my telephone.
Everyone knows how Intel invented the computer on a chip in 1971, introducing the 4004 in an ad in a November issue of Electronic News. But everyone might be wrong.
TI filed a patent application for a "computing systems CPU" on August 31 of that same year. It was awarded in 1973, and eventually Intel had to pay licensing fees. It's not clear when TI had a functioning version of the TMS1000, but at the time TI engineers thought little of the 4004, dismissing it as "just a calculator chip" since it had been targeted to Busicom's calculators. Ironically the HP-35 calculator later used a version of the TMS1000.
But the history is even murkier. The existence of the Colossus machine was secret for almost three decades after the war, so ENIAC was incorrectly credited with being the first useful electronic digital computer. A similar parallel haunts the first microprocessor.
Grumman had contracted with Garrett AiResearch to build a chipset for the F-14A's Central Air Data Computer. Parts were delivered in 1970, and not a few historians credit the six chips comprising the MP944 as the first microprocessor. But the chips were secret until they were declassified in 1998. Others argue that the multichip MP944 shouldn't get priority over the 4004, as the latter's entire CPU did fit on a single piece of silicon.
In 1969 Four-Phase Systems built the 24-bit AL1, which used multiple chips segmented into 8-bit hunks, not unlike a bit-slice processor. In a patent dispute a quarter century later proof was presented that one could implement a complete 8-bit microprocessor using just one of these chips. The battle was settled out of court, which did not settle the issue of the first micro.
Then there's Pico Electronics in Glenrothes, Scotland, which partnered with General Instrument (whose processor products were later spun off into Microchip) to build a calculator chip called the PICO1. That part reputedly debuted in 1970, and had the CPU as well as ROM and RAM on a single chip.
Clearly the microprocessor was an idea whose time had come.
Japanese company Busicom wanted Intel to produce a dozen chips that would power a new printing calculator, but Intel was a memory company, and Ted Hoff realized that a design built around a general-purpose processor would consume gobs of the RAM and ROM Intel wanted to sell. Thus the 4004 was born.
It was a four-bit machine packing 2,300 transistors into a 16-pin package. Why 16 pins? Because that was the only package Intel could produce at the time. Today fabrication folk are wrestling with the 22-nanometer process node. The 4004 used 10,000-nm geometry. The chip itself cost about $1,100 in today's dollars, or about half a buck per transistor. CompUSA currently lists some netbooks for about $200, or around 10 microcents per transistor. And that's ignoring the keyboard, display, 250-GB hard disk, and all the other components and software that go with the netbook.
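The cost comparison above is easy to check. Here's a minimal sketch of the arithmetic, using the article's figures; the netbook's transistor count is an assumed round number (roughly two billion) chosen only to illustrate how the "10 microcents" figure falls out:

```python
# Cost per transistor, then and now, using the article's figures.
cost_4004_usd = 1100          # 4004 price in today's (2011) dollars
transistors_4004 = 2300

per_transistor_4004 = cost_4004_usd / transistors_4004
print(f"4004: ${per_transistor_4004:.2f} per transistor")      # about half a buck

cost_netbook_usd = 200
transistors_netbook = 2_000_000_000   # assumption: ~2 billion across the system

per_transistor_netbook = cost_netbook_usd / transistors_netbook
microcents = per_transistor_netbook * 100 * 1_000_000          # dollars -> microcents
print(f"netbook: {microcents:.0f} microcents per transistor")  # about 10
```

A seven-order-of-magnitude drop in cost per transistor in four decades, and that still ignores everything else the $200 buys.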
Though Busicom did sell some 100,000 4004-powered calculators, the part's real legacy was the birth of the age of embedded systems and the dawn of a new era of electronic design.
Happy Birthday, 4004
Jack Ganssle's series in honor of the 40th anniversary of the 4004 microprocessor.
Part 1: The microprocessor at 40--The birth of electronics
The 4004 spawned the age of ubiquitous and cheap computing.
Part 2: From light bulbs to computers
From Patent 307,031 to a computer laden with 100,000 vacuum tubes, these milestones in the first 70 years of electronics made the MCU possible.
Part 3: The semiconductor revolution
In part 3 of Jack's series honoring the 40th anniversary of the microprocessor, the minis create a new niche—the embedded system.
Part 4: Microprocessors change the world
In part 4 of Jack's series honoring the 40th anniversary of the microprocessor, now embedded systems are everywhere.
At first even Intel didn't understand the new age they had created. In 1952 Howard Aiken figured a half-dozen mainframes would be all the country needed, and in 1971 Intel's marketing people estimated total demand for embedded micros at 2,000 chips per year. Federico Faggin used one in the 4004's production tester, which was perhaps the first commercial embedded system. About the same time the company built the first EPROM, and it wasn't long before they slapped a microprocessor into the EPROM burners. It quickly became clear that these chips might have some use after all. Indeed, Ted Hoff had one of his engineers build a video game—Space War—using the four-bitter, though management felt it was a goofy application with no market.
In parallel with the 4004's development, Intel was working with Datapoint on a computer, and in early 1970, Ted Hoff and Stanley Mazor started work on what would become the 8008 processor.
1970 was not a good year for technology; as the Apollo program wound down, many engineers lost their jobs, some pumping gas to keep their families fed. (Before microprocessors automated the pumps, gas stations had legions of attendants who filled the tank and checked the oil. They even washed windows.) Datapoint was struggling, and eventually dropped Intel's design.
In April, 1972, just months after releasing the 4004, Intel announced the 8008. It had 3,500 transistors and cost $650 in 2011 dollars. This 18-pin part was also constrained by the packages the company knew how to build, so it multiplexed data and addresses over the same connections.
A typical development platform was an Intellec 8 (a general-purpose 8008-based computer) connected to a TTY. One would laboriously put a tiny bootloader into memory by toggling front-panel switches. That would suck in a better loader from the TTY's 10-character-per-second paper tape reader. Then read in the editor and start typing code. Punch a source tape, read in the assembler. That read the source code in three passes before it spit out an object tape. Load the linker, again through the tape reader. Load the object tapes, and finally the linker punched a binary. It took us three days to assemble and link a program that netted 4KB of binary. Needless to say, debugging meant patching in binary instructions with only a very occasional rebuild.
The world had changed. Where I worked we had been building a huge instrument that had an embedded minicomputer. The 8008 version was a tenth the price, a tenth the size, and had a market hundreds of times bigger.