Cores, Cards, and Tubes
Jack G. Ganssle
Our author traces the predecessors of today's embedded systems back to his long-haired, havoc-wreaking, hippie days.
Last month I reminisced about the early days of the microprocessor, about building embedded systems when edit/assemble/link times were measured in days rather than seconds.
But embedded systems are just the latest incarnation of electronic digital computing, a field that arguably goes back more than half a century. Long before 32-bit CPUs crammed millions of transistors into a chip the size of a fingernail, a huge computer industry thrived on much less friendly technology.
The Univac 1108 I grew up on was a mainframe of gigantic proportions: the computer room covered a good quarter acre, holding the CPU itself plus disks, drum memory, tape drives, and printers.
Drum memory? A minivan-sized box contained two counter-rotating six-foot-long drums, each coated with ferromagnetic material. Hundreds of recording heads darted across the surface, allowing this beast to store a few tens of megabytes of data. Head crashes were so common that the school used two redundant drum subsystems, one operating, and one usually under repair. In fact, two Univac servicemen worked full-time in this computer room, in a reasonably successful effort to keep the machine mostly running.
Tape drives? Sixties-era sci-fi movies always used a bank of whirring tape drives to indicate the ultimate in high-tech. A dozen refrigerator-sized tape drives fed information into our 1108. When drums and disks were measured in tens of megs, tape was essentially the only form of personal data storage. Each reel held 50Mbits — a vast amount of data in those days — and was stored in a tape library just off the computer room. When you needed your data, you'd send a message to the operator, who located and loaded the proper spool. Needless to say, access times were measured in minutes to hours.
Our 1108 was a dual-processor model, with one unit dedicated to managing all of these disks, drums, tapes, and printers. No peripheral had built-in computers in those days. The other CPU ran users' programs as well as the OS. Each processor was surprisingly small considering the vast enterprise surrounding it, being about the size of two fridges.
Open the back door of a CPU and you'd be confronted by an ocean of blue wires. Univacs had hundreds of circuit boards plugged into a backplane; all of the boards were interconnected by wire-wrap wires. Need to change an interrupt level on a peripheral? Get out the wire-wrap gun and start making mods. These were 36-bit machines. Their peculiar word size came from Sperry's adoption of a pre-ASCII code called Fielddata, which used six bits to store each letter, number, and punctuation mark, so six characters fit evenly into each 36-bit word. Printers produced only uppercase output, so the six-bit (64 symbols) limit wasn't much of a hindrance.
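That packing arithmetic is easy to sketch in C. The following is a minimal illustration, not real Univac code; the names `fd_pack` and `fd_unpack` and the character codes are my own:

```c
#include <assert.h>
#include <stdint.h>

/* Pack six 6-bit character codes into one 36-bit word, held in the
   low 36 bits of a uint64_t. Hypothetical helper, for illustration. */
uint64_t fd_pack(const uint8_t chars[6])
{
    uint64_t word = 0;
    for (int i = 0; i < 6; i++)
        word = (word << 6) | (chars[i] & 0x3F); /* 6 bits per character */
    return word; /* only the low 36 bits are ever set */
}

/* Recover character i (0 = leftmost) from a packed word. */
uint8_t fd_unpack(uint64_t word, int i)
{
    return (word >> (6 * (5 - i))) & 0x3F;
}
```

With only 64 possible codes there was no room for lowercase, which is why everything from printouts to punched cards shouted in capitals.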
Input devices were limited to a bank of teletypes and punched cards — both also restricted to uppercase. I talked about teletypes in the context of microprocessors last month. In the mainframe world the overwhelming aspect of these beasts was their noise. A jet engine couldn't compete for attention in a roomful of chattering teletypes. Unless, of course, you were foolish enough to hack into the accounting system and take over the entire computer, in which case all of the other machines fell silent while just your teletype continued to chatter, with 50 pairs of eyes staring accusingly at you. But that's another story.
Most mainframe computing used punched cards to store code and data. The programmer would tediously — oh so tediously! — enter the program on a card punch, storing one line of Fortran or Algol per card. Two thousand cards filled a card box. Big programs might take a half dozen or more boxes of cards, quite a tower of disaster when the poor student tripped, spilling his precious intellectual efforts all over the wet parking lot.
After preparing the cards, the programmer carried the stack to the Computer Operator, a position of such power and glory it cannot be imagined today. These were the chosen few who had hands-on access to the machine. When the poor supplicant submitted a card deck, the Computer Operator grandly gave an estimated turnaround time, typically 24 to 48 hours. After a day or more our humble programmer returned for his printout and the often-mangled card deck. Imagine his frustration when stupid mistakes mandated another run, burning up another day or two as the project deadline drew ever nearer.
The 1108 was completely transistorized. Here, in the first month of 2000, it's awfully hard to imagine that people once designed with individual transistors instead of clumps measured in millions. Old-timers will remember the first transistorized radios from Japan, with marketing campaigns loudly proclaiming “six transistor receiver!” Six transistors! Can you imagine building anything with so few active elements?
But transistors were an astonishing advance over the technology of the '40s to the '60s, that of vacuum tubes (“valves” to our European friends). Though tube-based computers predate all of my experiences, as a teenaged ham radio enthusiast I often built radios, transmitters, and “hi-fi” equipment using tubes. All computers from the Eniac until the late '50s used these devices exclusively.
The vacuum tube was invented accidentally by Edison, though he didn't understand the importance of his experiments and discarded the technology. Fleming and others realized that a heated filament emits a stream of electrons whose flow can be regulated by inserting a grid of wires between the electron source and a receiving plate. The quantum complexities that govern the behavior of today's devices were refreshingly absent at the macroscopic scale engineers worked with during the tube era. They would fire a stream of electrons from the filament toward the plate through a mesh “grid.” A small negative charge on the grid would repel the negatively charged stream, effectively gating (or “valving”) the flow, and thus creating an amplifier, logic element, or oscillator.
Compared to today's submicron geometries, the scale of these devices was vast. By the '60s, twin triodes were the big thing, units containing (count 'em) two active elements, equivalent to two transistors. The 12AX7, a twin triode of the era, was almost an inch in diameter and about two inches long, with eight leads sticking out the bottom. We could easily pack a hundred billion transistors into this form factor today.
Though we may feel smugly superior in that we now deal with logic blocks instead of individual transistors, in fact blocks were always a part of the electronics landscape. DEC created the “Flip-Chip,” standard circuit boards that might each contain a flip-flop. They built the early minicomputers out of hundreds of these boards.
Before DEC, though, vacuum tube engineers, too, created logic blocks. I once had a large box of surplus logic elements. A single flip-flop was a tower with a tube on top, electronics below, and under that a standard tube-type connector. It is possible, after all, to build a simple flip-flop from only two active elements. Entire computers were built of thousands of these large (an inch and a quarter square and about five inches high), hot (because of the filaments), high-voltage (most tube circuits ran at about 300V to propel the electrons across a centimeter or so of vacuum) devices.
Perhaps one of the most profound differences between the digital and analog designs of the tube era and those of today is in the use of active elements. Tubes were expensive, power hungry, large beasts with relatively short lifetimes. All electronic design was an exercise in optimization. “Hi-fi” amplifiers typically had a few tubes. TV sets might use eight. Can you imagine building anything with fewer than a few thousand transistors today? The cafeteria-sized Eniac had 18,000, which, though a vast number, compares pitifully with today's transistor counts. How many hundreds of millions of transistors are in your 3-lb. laptop?
The first number in a tube designator (like the 12 in 12AX7) specified the filament voltage. The most common were 12.6V and 6.3V, which became the butt of a joke in the '70s when Signetics published an April 1 data sheet for their highly integrated Write Only Memory (on-line at www.ganssle.com/misc/wom.html). A 6.3V power supply was required, as a footnote said, “for the filaments,” a joke lost on the post-tube generation.
Before RAM, before EPROM, OTP, flash, EEPROM, or any of the other zero-cost, high-density memory arrays we now take for granted, computers stored data and programs in core.
As the microprocessor came of age, my engineering career was split between working on micro-based embedded systems and similar products that “embedded” minicomputers, either PDP-11s from DEC or Novas from Data General.
Throughout this period, core was the only random access read/write memory in common use. It wasn't till the very late '60s that even the smallest MOS memory chips became available.
Each core is a ferrite bead, perhaps the size of a small “o” on this page. Four wires run through the center of each core, four wires tediously strung, by hand, by poor workers who, no doubt, worked for a pittance.
Cores are tiny magnets, each remembering just one bit of information. The trick is to flip the magnetic field of the cores — one direction is a “one;” the opposite field indicates a “zero.”
As we know from basic electromagnetics, a current flowing through a wire creates a magnetic field, just as a changing magnetic field induces a voltage in a nearby wire. The wires running inside the ferrite beads create the fields that flip the direction of magnetization, writing a zero or a one in the process. They also sense the magnetic field so the computer can read the stored data.
Two of the wires organize the core into an X-Y matrix. The core plane is an array of vertical and horizontal wires with a bead at each intersecting node. Run half the current needed to flip a bit down each wire; only at the intersection is the combined field strong enough to flip just that one bit. What a simple addressing scheme! As the bit changes state, it induces a positive or negative pulse in a third wire that runs through all of the cores in a plane. Sensitive amplifiers convert the positive or negative signal to a corresponding zero or one. (The fourth wire, the inhibit line, carries an opposing half-current during writes to keep a core at zero.) Since the amplifiers detect nothing unless the core changes state, reads are destructive. You've got to toggle the bit, and then write the data back in on each and every read cycle. It sounds terribly primitive until you think about the awful things we do to keep modern DRAM alive.
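That read-then-restore cycle is easy to model in a few lines of C. This is a toy sketch of my own, not a real controller; the plane size and names are purely illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define ROWS 8  /* toy plane; real planes ran to 64 x 64 and beyond */
#define COLS 8

typedef struct { bool bit[ROWS][COLS]; } core_plane;

/* Destructive read: drive the addressed core toward zero. The sense
   line pulses only if the core actually flips, i.e. if it held a one.
   The controller then writes the bit back so the data survives. */
bool core_read(core_plane *p, int x, int y)
{
    bool sensed = p->bit[x][y]; /* sense pulse iff the core flips   */
    p->bit[x][y] = false;       /* the read itself clears the core...*/
    if (sensed)
        p->bit[x][y] = true;    /* ...so restore what was read out  */
    return sensed;
}
```

Every read ends with a hidden write, which is why core cycle times were quoted as full read-restore cycles rather than raw access times.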
Before microprocessors quite caught on, the instrumentation company where I worked embedded Data General Nova minicomputers into products. The Nova used core arranged in a 32K x 16 array. The memory was nonvolatile, remembering its contents even with no power applied.
We regularly left the Nova's boot loader in a small section of core. My fingers are still callused from flipping those toggle switches tens of thousands of times, jamming the binary boot loader into core each time a program crashed so badly it overwrote these instructions.
For some reason these Nova memories suffered a variety of ills. Core was expensive — around $2,000 for 32,000 words, a lot of money in 1974 dollars. A local shop repaired damaged memory, somehow restringing cores as needed, and tuning the sense amplifiers and drive electronics.
As we worked through these reliability issues, my boss — who was the best digital designer I've ever met — told how some military and space projects actually employed core as logic devices. In a former job he designed systems composed of strands of core strung together in odd patterns to create computational elements.
One of my holy relics is a 3-lb., 13,000-bit core array acquired in 1971. A few days after my high school graduation I hitchhiked with a pal to Boston (those were kinder, gentler days) to find treasures in the disorganized depths of a surplus shop.
Was it our long hair? Maybe the fact that we were warned three times to get off the New Jersey Turnpike had something to do with it. Somehow Gary and I found ourselves in a New Jersey jail cell, busted for hitching. The police, expecting to find a stash of drugs in our backpacks, were surprised to discover instead my 13,000 bits of core.
“What's this?” the chief growled. I timidly tried to convince him it was computer memory. These were the days when computers cost millions and were tended by an army of white-robed technicians, not hitchhiking long-hairs. All of the cops looked dubious, but could find nothing to dispute my story. They eventually let us go, me still clutching the memory which today sits on my desk.
A few years later I experienced an eerie echo of this incident when I lived in a VW microbus. Coming back from Canada into a remote Maine town, the local constabulary, sure I was running contraband, stripped the van. Must have been the long hair. They found a 6501 — the first low-cost microprocessor chip. MOS Technology amazed the electronics world when they released this part — the predecessor of the 6502 of Apple II fame — for only $20. I just had to have one, though I'd simply tossed the part in the glove compartment. The officers were unaware of the pending microprocessor revolution, and were equally disbelieving of my story about “a computer on a chip.” They saw me as the vanguard of an invasion of commies bearing death-ray components.
The computer industry is still every bit as exciting as it was in those earlier days, though at least now the excitement never includes a view from within a jail cell. Well, at least not recently.
Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. He founded two companies specializing in embedded systems. Contact him at .