
An embedded life

Jack takes a long, nostalgic, and very personal look back to the early days of embedded systems development.

If there's a dweeb gene, I got a double dose. Computers fascinated me from early childhood. Yet in the '60s, none of us had access to these horrendously expensive machines. Dweebs built ham radios and vacuum tube “hi-fi” gear instead. I had a tiny lab in the basement filled with surplus electronic equipment that I constantly reconstituted into new devices.

One “personal computer” existed: Heathkit's EC-1 analog computer was a $200 monster comprising just nine vacuum tube op amps (picture at www.heathkit-museum.com/computers/ec-1.shtml). Designed for simulating differential equations, users programmed it by wiring the amplifiers together using patchcords, resistors, and capacitors. My lust for it remained unsatiated due to the impossible cost.

The early years
At age 13, I was expected to spend Saturdays doing free janitorial duties for my dad's underfunded startup. The usual grumbling ceased when I discovered the other half of the deal: components swept from the engineering lab's floor were mine. Even better, those engineers were slobs who dribbled parts faster than infants lose their rattles. Resistors, transistors, and even digital integrated circuits started filling the parts bins at home, and I spent far too many hours wiring all sorts of circuits, which taught me the nature of each device.

A lucky break at 16 landed me a $1.60-an-hour job as an electronics technician. While Neil Armstrong frolicked across the Sea of Tranquility, we built ground support equipment for Apollo and other programs. Exciting? You bet. Feeling flush with cash, I paid $4 an hour for dial-up access to a Honeywell mainframe, my first exposure to any sort of computer. An ASR-33 Teletype at a friend's school gave us time-shared Fortran at 110 baud.

But surely there was a way to get my own computer. All attempts to build one failed, doomed by little money and less knowledge. So little that most of these designs accepted bastardized Fortran as machine language since I'd never heard of assembly language, let alone machine code. And any sort of memory was simply not available to a high school kid.

In my senior year, a friend and I hitchhiked from Washington D.C. to Boston to visit an electronics surplus store. There I bought a 13,000-bit core memory box. No drivers, no electronics, just 26 planes of 512 bits arrayed in X-Y matrices.

Was it our long hair? Maybe the three unheeded warnings to get off the New Jersey Turnpike contributed. Gary and I found ourselves in a New Jersey jail cell, nailed for hitching. The police were surprised to discover the core instead of drugs in our backpacks. “What's this?” the chief growled. I timidly tried to convince him it was computer memory, the last thing he expected to find on a pair of hippie-freaks. They eventually let us go, me still clutching the core box. Thirty-two years later, it sits on my desk, a reminder of both long lost youth and the breathless pace of technology.

I became a reluctant college student, reluctant only until I discovered the university's Univac 1108, a $10 million mainframe packing about as much power as one of today's ubiquitous calculators. Class attendance became a last option before exams, which were pushed to the bottom of the priority list after work and all-night computing. As I learned the secrets of assembly language and operating-system vulnerabilities, the limits of the usual $50 per semester mainframe account were easily transcended.

The 1108 ran at 1.3MHz, had 768K (not megs) of core memory, and used 2-ton 6-foot-long spinning drums, each storing 45Mbits. Punched cards were the user interface of that age. The machine had a ridiculous instruction set lacking even a stack, an awkward 36-bit word, and the annoying habit of crashing more often than Windows 2.0, especially when final projects were due. But I loved that computer, and wasted absurd amounts of time developing tools and applications for it.

Computer on a chip
To stay abreast of the latest in electronics, I wrangled a free subscription to EDN, probably offered because I'd exaggerated my job title. Pathetically, it was more fascinating to me than the girlie magazines my friends hoarded. In 1971, the magazine announced the creation of a computer on a chip. Chips never had more than a few hundred transistors—was this a hoax?

The “computer on a chip” was more marketing hype than reality, as the 4004 required a tremendous amount of external support circuitry. But it was a huge advance in computing. Since no dreamer wanted a 4-bitter as a home PC, the only market it targeted was the as-yet unnamed and almost nonexistent embedded systems market. Intel invented not only the microprocessor, but practically the entire notion of cheap embedded systems. Yet according to The Microprocessor: A Biography (Michael S. Malone, 1995, Springer-Verlag, NY), the company didn't at first understand the implications of its innovation—it was afraid the yearly microprocessor market would remain at a mere 2,000 chips.

My first computer
Meanwhile I'd finally figured out the secrets of computers and built a 12-bit machine that actually worked. It used hundreds of transistor-transistor logic (TTL) ICs—no microprocessor—wired on vectorboard, using brightly colored telephone cable soldered directly to each chip's pins. I couldn't afford sockets or wire-wrap. My Heathkit scope helped troubleshoot the logic, but since the computer was fully static it could even run at a fraction of a hertz. A simple voltmeter could follow bits flipping. Once the logic worked, I cranked up the clock to 100kHz or even 1MHz, depending on how adventurous I felt. Memory was 768 words of semiconductor RAM (36 of Intel's 1101 256-bit static RAM chips) and 256 words of 1702A EPROM.

Back then, $50 procured a World War II vintage ASR-15 Teletype capable of 50 baud communications (pictures of the ASR-15 are at www.railroad-signaling.com/tty/tty.html). Powered by a half-horse motor, it must have weighed 200 pounds. The noise that beast made was indescribable. The neighbors in the apartment below must have hated it. It spoke BAUDOT, sort of a 5-bit version of ASCII. Eventually I managed to get a bootloader working in EPROM (using a bit-banging UART), and then wrote a monitor program that was loaded from paper tape.
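
For the curious, the transmit side of that bit-banging UART boiled down to something like the sketch below, rendered in modern C rather than the original hand-rolled code. The line-driving and timing calls are invented stand-ins for whatever the hardware actually required.

```c
/*
 * A minimal, hypothetical sketch of a bit-banged Baudot (ITA2) transmitter.
 * The real bootloader was hand-coded on a homebrew 12-bit machine; here
 * set_tx_line() just prints the line state, standing in for output-port
 * twiddling. Baudot frames a 5-bit character with one start bit (space)
 * and a long stop interval (nominally 1.5 bit times).
 */
#include <stdio.h>
#include <unistd.h>

#define BIT_US 20000UL            /* 50 baud => 20 ms per bit */

static void set_tx_line(int mark) /* 1 = mark (idle), 0 = space */
{
    printf("%d", mark);           /* real hardware: write one output port bit */
    fflush(stdout);
}

static void send_baudot(unsigned char code)
{
    set_tx_line(0);               /* start bit */
    usleep(BIT_US);
    for (int i = 0; i < 5; i++) { /* five data bits, LSB first */
        set_tx_line((code >> i) & 1);
        usleep(BIT_US);
    }
    set_tx_line(1);               /* stop: back to mark, hold 1.5 bit times */
    usleep(BIT_US + BIT_US / 2);
}

int main(void)
{
    send_baudot(0x01);            /* an arbitrary 5-bit code, for illustration */
    putchar('\n');
    return 0;
}
```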

1972: the first 8-bitter
By the time my machine worked, it was utterly obsolete. In 1972, Intel released the first 8-bit microprocessor, the 8008. Like its predecessor, this part was at sea unless surrounded by lots of support circuitry. But the device, in an 18-pin package, offered what seemed like a vast 16KB address space. It required three power supplies (+12, +5, and -9) and two 12V clocks.

Panic set in at the company where I still worked as a technician. The 8008 made decent amounts of computing available for a reasonable price. It meant we could build a new kind of device, a machine that analyzed oil coatings on nylon, which needed far more intelligence than possible via hardwired logic. The problem? None of the engineers knew how to program.

A consultant saved the day, writing thousands of lines of Programming Language/Microcomputers (PL/M) and creating one of the first production embedded systems. But the consultant was expensive, and the company was broke. Work had started on another product, one that used infrared light to measure the amount of protein in wheat. Someone figured out that I knew assembly language and suddenly promoted me to engineer. Classes, on those rare occasions I showed up, seemed utterly irrelevant compared to the thrill of messing with the machines.

The first of these grain analyzers used an 8008 with 4,096 words of 1702A EPROMs. Four kilobytes doesn't sound like much, but it required sixteen—sixteen!—256-byte chips occupying an entire printed circuit board (PCB). The CPU ran at a blistering 800kHz, the device's max rate, and just about the same speed as the by-now-forgotten Univac 1108, which was noisily reminding me about upcoming finals.

Intel provided a rudimentary development system called the Intellec 8 (online.sfsu.edu/~hl/c.Intellec8.html), which had a single hardware breakpoint set by front-panel switches. The Intellec 8's only I/O device was an ASR-33 Teletype (www.columbia.edu/acis/history/teletype.html), a 10-character-per-second unit incorporating a paper tape reader and punch. No disks, no mass storage, no nuthin'.

To boot the Intellec 8, we'd manually load a jump instruction into memory via front panel switches and press “run.” A bootloader then read the editor in from paper tape. A crude, decidedly GUI-less editor let us enter our source code and correct mistyping (lots—the ASR-33 was hardly finger-friendly). The final edited source was punched to paper tape. These tools were all buggy; once one of them output my hours of typing completely reversed, printing the last character first and the first last. I started keeping a bottle of Mylanta at hand.

The cruddy tools gave us plenty of incentive to modularize, so even a small program comprised many separate source tapes.

The 10 cps tape reader needed a couple of hours to load the assembler, which then read each source tape three times, once for each of its passes. Often—oh how often—the mechanical reader missed a zero or one (the hanging chad isn't a new phenomenon) so one of the three reads wouldn't be identical to the others. “Phase error”—start all over! If fate smiled and all the stars of the Zodiac aligned, the machine punched a binary relocatable tape.

Next we'd load the linker (another hour or two), feed the binary tapes twice each, and if everything worked perfectly, the ASR-33 spat out a final absolute binary image of our program.

Our 4KB program probably consisted of just 5,000 source lines, but a complete build took three days.

Needless to say, we rarely reassembled. Each morning we'd load the current binary image and start debugging. Errors were fixed by changing the machine code of the offending instructions. Sometimes the fix was shorter than the code in memory so we'd drop new code in place of the old, padding unused bytes with NOPs. Longer patches went into unused RAM, accessed via a JMP plopped on top of the offending code. Careful notes on the listing logged each change so a later edit could make the source match the code. A day of debugging might result in quite a few patches, preserved by punching a tape of the current binary image. The next day we'd load that tape and continue debugging.
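
For readers who never enjoyed this ritual, the mechanics of a patch look roughly like the following. This is a modern C sketch over a RAM image, with invented names; in practice we keyed the bytes in by hand, but the 8080 opcodes (NOP is 0x00, JMP is 0xC3 followed by the address, low byte first) are the real ones.

```c
/*
 * Illustrative sketch of patching a binary image in memory, not a tool we
 * actually had. Names and signatures are invented for the example.
 */
#include <stdint.h>
#include <string.h>

#define OP_NOP 0x00
#define OP_JMP 0xC3

/* A shorter fix drops in over the old instructions; leftover bytes become NOPs. */
void patch_in_place(uint8_t *image, uint16_t at,
                    const uint8_t *fix, size_t fix_len, size_t old_len)
{
    memcpy(&image[at], fix, fix_len);
    memset(&image[at] + fix_len, OP_NOP, old_len - fix_len);
}

/* A longer fix goes into unused RAM; a three-byte JMP plopped over the old code
 * sends execution there (the patch itself must jump back when it's done). */
void patch_via_jmp(uint8_t *image, uint16_t at, uint16_t patch_addr)
{
    image[at]     = OP_JMP;
    image[at + 1] = (uint8_t)(patch_addr & 0xFF);   /* low byte  */
    image[at + 2] = (uint8_t)(patch_addr >> 8);     /* high byte */
}
```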

That 4KB of code did very sophisticated floating point math, including noise reduction algorithms, least squares curve fits, and much more. It was a huge success that led to many more embedded products.

1974 and the 8080
Intel's 1974 introduction of the 8080 wasn't much of a surprise. The shocker was an article in Popular Electronics about a home computer called the MITS Altair 8800 (www.vintage-computer.com/altair8800.shtml), an 8080 machine that sold for $400 in kit form. Not much memory came with it for that price, but since the processor alone sold for $400, engineers couldn't imagine how MITS (Micro Instrumentation and Telemetry Systems) pulled off this miracle. At the time we imagined they used reject parts, but learned later the company bought CPUs in high volume for $75 each.

The next generation of our product used an 8080, so we bought a pair of Altairs as development platforms. With two machines we could cannibalize boards to keep one working most of the time. The Altair was designed to meet a price, evidenced by poor PCB layout and unreliable DRAM circuitry. Constant crashes and data loss were the norm. Tagamet became available about this time and prescriptions for the drug littered the lab benches.

After much complaining and more loss of time, the boss relented and spent $20,000 (about $68,000 in 2003 dollars) on Intel's new MDS-800. It was a general purpose 8080 computer that included an editor, assembler, linker, various other software tools, and the first in-circuit emulator (ICE). A huge device, it became familiarly known in the industry as the “Blue Box.” Mass storage consisted of two 8-inch floppy disks holding about 80KB each.

Though our productivity soared, development times grew longer. The 8080's 64KB address space removed program size constraints, so customers and marketing started demanding ever-more features. The programs swelled to 16KB, 32KB, and beyond. EPROM sizes grew in step so there were no barriers to code bloat except engineering time; even then managers chanted the mantra “it's only a software change.”

Language barriers
Everything was still written in assembly language. We had experimented with Basic from a tiny outfit named Microsoft and also with their later Fortran. Neither was adequate for real-time embedded work. No C compilers existed for micros at the time. I created an Algol-like pseudo code implemented in assembly macros, but the MDS-800 took over an hour to translate a lousy 100-line program.

But assembly language was fun when we could find programmers; there were never enough. Now the company was in a growth spurt and every product had an embedded processor. As is normal in times of turmoil, the Peter Principle kicked in and I found myself in charge of all digital design and firmware. Our growing staff barely kept pace with the demand for new products. We had our own offices and lab; those and our youth, the music, and the odd hours divided us from the older analog folks. At least they seemed old; in retrospect none were past 30.

The beat goes on
Processors were still expensive. CPU chips cost hundreds of dollars. In 1975, MOS Technology introduced the 6502 at the astonishing price of $25. That sparked the Jobs/Wozniak whiz kids to develop the Apple computer. I bought one of these chips thinking to make a home machine. But for over a year home was a VW Microbus, usually parked outside the office or on a rest stop on Route 95. After a Canadian vacation I reentered the U.S. at a remote Maine town, hoping to take in the sights of the Great North Woods. In an incident eerily echoing my New Jersey Turnpike bust, customs agents, seeing the long hair and the Microbus, stripped the van. They found the 6502 in the glove compartment. Once again the officers were unaware of the pending microprocessor revolution and disbelieved my story about a computer on a chip. They didn't know what that 40-pin DIP was, but it sure looked like contraband.

We followed the evolution of technology, moving to the 8085 when it came out, and then, looking for much more horsepower to run the graphical displays our customers demanded, designed a system using AMD's 2901 bit-slice components. These were 4-bit elements strung together to create processors of arbitrary word length with custom instruction sets. A Signetics 8X300, a wacky DSP-like processor using 16-bit instructions but 8-bit data words, sequenced interactions between the bit-slice device and a Nova minicomputer.

Nova
We still flirted with minicomputers, turning to them on products that needed more horsepower than a micro could deliver. And some of these devices had to run legacy Nova (www.simulogics.com/museum/N1200_1.JPG) code. This Data General 16-bitter offered decent performance for much less than the more popular and wonderfully orthogonal PDP-11.

The original Nova 1200 moved data through a single 4-bit arithmetic logic unit, using four cycles to do any arithmetic operation. It's hard to imagine in this day of free transistors that there was a time when hardware was expensive. Later models used full 16-bit data paths.

All had nonvolatile core memory. Data General was slow to provide ROMed boot code, so users were expected to enter the loader via the front panel switches. We regularly left the Nova's bootloader in a small section of core. My fingers are still callused from flipping those toggle switches tens of thousands of times, jamming the same 30 instructions into core whenever a program crashed so badly it overwrote the loader. At first all of the engineers got a kick out of starting up the Nova, flipping switches like the captain of some exotic space ship. Later we learned to hate that front panel. It's time to enter those instructions AGAIN! Acid indigestion gave way to a full ulcer.

Sea change
Back then, we put in insane hours. Hundred-hour weeks for 40 hours' pay weren't unusual, but there was little complaining; the technology was so fascinating. But by 1976, I was living on an old wooden sailboat and was rich—or so it seemed. With the amazing sum of $1,500 in the bank, I quit to sail around the world.

And sank a year later. Back to the same job, but now I'd felt freedom and was itching for something more exciting. I resigned and started a consulting outfit with a good friend. For two years, we built custom embedded systems for a variety of customers. A few stick out in memory—like the security system for the White House, which used over a hundred tightly-coupled 8-bit CPUs. When the contract ended, I lost my White House pass the same day as Ollie North, though with much less fanfare.

We built a variety of deep ocean probes that measured oxygen, temperature, salinity, currents, and other parameters. These had to run for months to years on small batteries, so we used RCA's 1802, at the time the only CMOS processor. It was a terrible chip lacking even a call instruction, but sipped so sparingly from Vcc that it hardly needed a power switch. Later we built a system that also used an 1802 to measure how fruit ripens while being shipped across oceans. That project was doubly rewarding when stevedores in Rotterdam dropped a shipping container on the device. The replacement job paid the bills for another month or two.

A 12-ton gauge that moved on railroad tracks as it measured the thickness of white-hot steel used a PDP-11 minicomputer interfaced to various 8-bit microprocessors. The plant's house-sized main motor reversed direction every few seconds to run the steel back and forth under rollers, tossing staggering amounts of radio frequency interference into the air. Poorly designed cabling could quite literally explode from coupled electric and magnetic fields. We learned all about shielding, differential transmission, and building smart software to ignore transients.

In those two years, my partner and I starved, never having learned the art of properly estimating the cost of a job. Both of us agreed the friendship was more important than the company, so we sold the assets and each started our own outfits. That proved wise as we're still very close, and today our kids are best friends.

I'd had it with consulting. With every project the consultant more or less starts from scratch. A product, though—that seemed the ticket. Design it once and sell the same thing forever. But cash was scarce so I consulted during the day and wrote proprietary software at night.

The result was MTBASIC, a Basic compiler for the Z80 that supported multitasking. For a development platform I built a Z80 machine that ran CP/M using a 40-character-wide TV monitor and a single floppy disk whose controller was a half-PCB of discrete logic rather than the fancy but expensive floppy disk controller chips of the time.

The compiler, targeted at embedded apps, was interactive like an interpreter yet produced native compiled code that could be ROMed. Compile times were nearly instantaneous and the generated code even faster. Built-in windowing and a host of other features drove the source to over 30,000 lines of assembly. But this compiler was the cutest code I ever wrote. Working out of the house I managed to sell some 10,000 copies for $30 each over the next few years.

1981, the PC revolution begins
In 1981, IBM introduced the PC. Using a 4.77MHz 8088 and limited to 640KB of RAM, this machine caused the entire world to take notice of the microprocessor industry. Though plenty of “personal computers” already existed, these were mostly CP/M-based Z80 models that required enormous techie competence and patience. The one exception was the Apple, but that only slowly made its way into the business world.

Though a very healthy and dynamic embedded systems industry existed, then as now it was mostly invisible to the average Joe. Few smart consumer products existed, so Joe's perception of computers was still the whirling tape drives in sci-fi programs or the ominous evil of HAL in the movie 2001: A Space Odyssey. But the IBM PC brought computers into the mainstream. Normal people could own and master these machines. Or so they're still telling us.

I bought an early PC. Unbelievably, floppies were optional. Most customers used cassette tapes. My two floppy model with 256KB RAM cost $7,000. I ported MTBASIC to the PC, recoding it in 8088 assembly, and found a willing market.

The top floor of the house was devoted entirely to offices now, the main level for storage. We moved into the basement. Neighbors complained about daily delivery trucks. Ceiling-high stacks of manuals and pallets of shipping boxes in the living room made entertaining challenging, or at least quirky.

Despite brisk sales, advertising ate all the profits. I was still consulting, and one government customer needed a battery-operated data-collection system. National's NSC800 was ideal, but since no tools existed for the CPU, it seemed natural to make a simple little ICE, which worked surprisingly well.

Eventually that Eureka moment hit—why not sell these emulators? Since the NSC800 was so similar to the Z80 and 8085, it was a snap to expand the product line.

Some customers built astonishing control systems with MTBASIC. But the C language slowly gained acceptance in the embedded systems space and Microsoft's various flavors of Basic ate away at our nonembedded market. The PC killed off CP/M, and made that version of the compiler obsolete. As the product's sales slipped, though, ICE revenues more than compensated.

The ICEman
The ICE hardware design was simple, using only 17 integrated circuits. The emulation processor was also the ICE control CPU. A bit-banging UART made the software more complex but saved a chip or two.

The firmware was a nightmare and also more fun than you can imagine. On a breakpoint, the CPU had to store the entire context of the executing program. Since I'd made the ridiculous decision to use no target-system resources, when transitioning through a breakpoint the hardware had to swap in local ICE RAM and turn off user memory. State-saving PUSHes stashed information wherever the user's stack pointer had been—anywhere in the ICE's address space. Hardware stripped off some of the address bits to ensure the data went into the emulator's RAM, but the need to minimize chip count meant it was usual for the writes to wipe out local variables used by the ICE. Reconstructing the data while correctly preserving the target context was quite a challenge. It was a cool design, though I probably should have used more hardware to simplify the code.
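
For the curious, the heart of that problem can be sketched in a few lines of modern C, with invented names and sizes, and greatly simplified compared to the real thing, which was of course all assembly.

```c
/*
 * Conceptual sketch only. Hardware strips the upper address bits during the
 * state-saving PUSHes, so no matter where the user's stack pointer points,
 * the pushed registers land somewhere in the ICE's small local RAM window,
 * quite possibly on top of the monitor's own variables. The first job after
 * a breakpoint is to rescue those bytes before anything else touches that RAM.
 */
#include <stdint.h>

#define ICE_RAM_SIZE  1024u       /* hypothetical: a 1KB local RAM window   */
#define CONTEXT_BYTES 10          /* e.g., AF, BC, DE, HL, PC from the stub */

uint8_t ice_ram[ICE_RAM_SIZE];

/* Where a write through a user address actually landed after address masking. */
static uint16_t alias(uint16_t user_addr)
{
    return (uint16_t)(user_addr % ICE_RAM_SIZE);
}

/* Copy the saved context out of its aliased location into a safe buffer
 * before the monitor's own code reuses that RAM. */
void rescue_context(uint16_t user_sp_after_pushes, uint8_t out[CONTEXT_BYTES])
{
    for (uint16_t i = 0; i < CONTEXT_BYTES; i++)
        out[i] = ice_ram[alias((uint16_t)(user_sp_after_pushes + i))];
}
```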

We sold the units for $595 each. Though parts and labor only ran about $100, advertising and overhead burned cash at a scary rate. Even so, the business grew. Forced out of the house by space needs, we rented a facility, the first of many as growth demanded ever-more square footage.

Over time I learned the basic law of the embedded systems tool market: keep prices high. Every application is truly unique, so customer support is hugely expensive. Support costs are about the same for a $600 or $6,000 tool. That's why today a simple BDM debugger, which might use parts worth only a few dollars, can cost thousands. And why Linux is free but plenty of outfits will happily drain your fortunes to help get it going. So we developed much more powerful units for prices up to $10,000, keeping hardware and production costs around $1,000.

Those were the glory days of emulators, when chip companies funded new ICE designs and customer demand was high. Our product line grew to include many 8- and 16-bit processors. The emulators themselves became hugely complicated, stuffed with boards crammed with very high-speed logic, memory, FPGAs, and PLDs. The 4MHz processors accelerated to 8, then 12, and to 40MHz or more. Even a few nanoseconds of delay imposed by a single chip ate an unacceptable 20% of the bus cycle; thus more exotic technology placed closer to the customer's target CPU socket became the norm. Logic design morphed into high-speed RF work. Maxwell's equations, at first only vaguely remembered from those oft-skipped college electromagnetics classes, were now our divine guidance. Firmware content skyrocketed. We used plenty of C, yet the emulators needed vastly more assembly than most products, as plenty of low-level bit twiddling was required.

Worn down by 70-hour weeks and the toll on my personal life, I sold that company in 1996. Yet in many ways the tool business is the best of the embedded world. I met so many fascinating developers and poked deeply into their intriguing projects. Some used 8-bit processors to control fleets of aircraft while others had 32-bitters loafing along handling very slow inputs. A few ran for years off two AAs while others sucked from a 5V firehose. Applications varied from the absurd to the most noble imaginable.

Future imperfect
After 30 years in this industry, I despair at times for its future. How can mere humans cope with million-line-plus programs? Is firmware quality an oxymoron? Will engineering jobs migrate at light speed around the world in pursuit of the lowest possible costs?

The embedded revolution is one of the greatest outcomes of a troubled 20th century. No industry is untouched by our work. A flake of silicon reduces power-plant emissions by orders of magnitude, smart pumps irrigate subsistence farms in Nepal, and electronics in an automatic external defibrillator turn a Good Samaritan into a veritable cardiac surgeon.

In those Dilbert moments when the Mylanta isn't strong enough, if therapy or kicking the dog seem the only hope of getting through another day, take pride in your profession. We have profoundly changed the world, mostly for the better.

And that's a pretty darn good legacy.

Jack G. Ganssle is a lecturer and consultant on embedded development issues. He is conducting a seminar on building better firmware faster on December 5.


I enjoyed your “An Embedded Life” Breakpoints column. My guess is that it will inspire a lot of reminiscence around the real or virtual water cooler.

You left out my first computer. In fact, I've never seen it mentioned in anyone's histories. It was a mechanical programmable digital computer that a kid could afford, called the Digi-Comp. I think I found it through an advertisement in Popular Mechanics. The year might have been 1962.

It was a four-bit device. Each bit was a red plastic horizontal plate with two defined positions. The four plates were stacked vertically and slid left and right for Set and Clear. The program was a pattern of white plastic tubes inserted selectively over tabs protruding from the edge of the plates. Hinged vertical rods interacted with the tubes in non-obvious ways to perform simple logic operations when the mechanism was activated by a lever.

Put the white tubes on as shown, and darned if the thing didn't count from 0 to 15 in binary, over and over. The device's weak point was the manual. All the sample programs worked, but there were no clues given as to how a person would go about making a program to do some other task. The assumption was that no one would find interest in such dry, pointless stuff as AND, OR, NOT, etc.

When my mother, performing the universal coming-of-age ritual of the day, gave away the Lionel trains and tossed my complete set of 1959 baseball cards, she also did something a little more unusual – she threw out my Digi-Comp.

– Tom Lawson

