Embedded development, then and now

From my picture at the bottom of this page, you might guess that I'm not exactly new to the process of developing embedded systems. This month, I'll be talking about the way the technology and techniques have changed over the years. Don't expect this to be one of those "Gee, I miss the Good Old Days" pieces. It's more like a celebration of how far we've come, and how happy I am not to be in those days anymore.
Old-timers who've been there should get a kick out of the look backwards, and you new-timers can thank your lucky stars you don't still have to do things that way.
My first exposure to real-time systems actually predated the advent of microprocessors. NASA and their contractors were developing the hardware and control algorithms for spacecraft attitude control using control moment gyros (CMGs). We were developing a real-time, hardware-in-the-loop computer simulation that could test and simulate the behavior of the CMGs during typical simulated flights. I was developing the digital software; we had other computer programmers who spoke patch-cords, programming the analog side.
The problem with testing a CMG is that you need to see what it's doing in real time. That's even harder to do when the CMG is a virtual one. A CMG generates torque through changes in the angular momentum of a spinning wheel. As the torque is generated, the gyro precesses, and its angular momentum vector (the H-vector) changes. What we most wanted to see was the way the H-vectors changed with time.
The NASA guys had worked out a scheme to visualize the state of the system that was, at the same time, extremely crude and extremely clever. They hooked three output lines from the analog computer to an ordinary x-y oscilloscope. The computer generated signals to display the three H-vectors on the scope, as a 3D image. In 1972, this must surely have been one of the earliest 3D graphics displays ever built.
The system worked very nicely, but it did prompt me to formulate a rule of thumb I've never forgotten: if the extent of your development and testing system involves three or four PhDs sitting around on a concrete floor studying an oscilloscope, you have probably not yet optimized your test facility.
Intel comes to town
In 1974, microprocessors had arrived, and the Intel 8080 was the new kid on the block. I wanted in on the action in the worst way. I joined an existing but tiny company that already had contracts in hand. At the time, our entire development system consisted of an Intel single-board computer (SBC) based on the granddaddy of all microprocessors, the 4004. It boasted a PROM burner, and a line editor and one-line assembler in ROM. Access was via a Teletype ASR33 printing terminal. The only bulk storage was paper tape, accessed via the ASR33's 110-baud reader/punch. Storage capacity depended on the length of your paper tape. Our configuration management system consisted of a bunch of tapes, tossed into a box.
By the time I arrived on the scene, the ROMs had been updated to support a 4040 assembler.
We had one in-circuit testing tool, which you had to see to believe. We called it the Blue Box. It was not exactly an in-circuit emulator (ICE), but it did give us a view, however darkly, into the computer and its CPU chip. There was no debugger proper, and you couldn't set breakpoints to stop the CPU. What you could do was to set a watchpoint, watching the data whistling through the data bus at the breathtaking rate of 93 kHz. All the addresses were set, and data displayed, in binary using toggle switches and LEDs. Hex was for sissies.
Here's how it worked. Using the toggle switches, you set a memory address into the Blue Box, and launched the computer program. The first time the selected memory location was accessed, the Blue Box grabbed the data on the data bus. After that, the CPU continued on its way, but you had that single byte of data to peruse at your leisure. Think of it as a memory buffer with a one-byte depth.
The main problem with this scheme was that the box could only see data that went out onto the data bus. If you needed to watch the contents of a CPU register, forget it. That data never got put onto the data bus, so you couldn't see it.
Fortunately, I didn't have to deal with this gadget for long. The SBC got updated to support the 4040, which was our target CPU. Extra hardware included an umbilical that clipped onto the CPU. I wouldn't exactly call this an ICE, because you couldn't actually emulate the 4040, only observe and control one. The assembler was still a one-line assembler (no symbolic addresses). And there were no upload/download facilities at all. Instead, we used the original SneakerNet, based on EPROM chips instead of floppies. You burned the assembled program into PROM chips, then plugged those chips into the target machine. Once the PROMs and the CPU umbilical were in place, you could do all the things we usually associate with hex debuggers, including setting breakpoints, single-stepping, and displaying or modifying any data in memory or registers. It may have been a small step for Intel, but it was a giant leap for us.
For PROMs, we used the UV-erasable 1702 EPROMs, each capable of holding an astonishing 256 bytes of data. The 1702 was a big step forward from earlier burn-once-erase-never fusible-link PROMs.
The 1702s had a little quartz window. You could erase the chip by shining UV light through the window.
That's if you had an EPROM eraser. Which we didn't. But we got the same effect the natural way. We'd simply set the EPROMs out on the hood of a car, in the bright Alabama sun. Timing was a matter of cooking until done, which took maybe 30 minutes. Success depended on the weather. No sun, no erasing.
We had a second project that involved an 8080. This one was actually my "baby." It also was for an embedded application, but I didn't have much interaction with the hardware, since it was our customer's hardware, 180 miles away in Atlanta. My job was to develop the software, which included a two-state Kalman filter, the floating-point library needed to run it, and the math library built on top of that.
"Downloading" was still a matter of PROM-based SneakerNet, only this time it was more like PickupTruckerNet. Or FedExNet.
To support this effort, we bought an Intel Intellec 8 computer. It had no hard drive at all--bulk storage was still via paper tape. But it had lots of RAM and ROM, and supported true symbolic assemblers for both the 8080 and its 4040 brother. It also had a decent line editor, not that far removed from the ed-style line editors of RSTS, Multics, Unix, and CP/M. Not exactly emacs, but serviceable. The hex debugger was good for debugging 8080 code. For the 4040, we still needed to SneakerNet the PROMs to the 4004 SBC.
My 8080 code was a good bit bigger than the 4040 code, so the big bottleneck was reading and punching all those paper tapes at about 8 bytes per second. We couldn't do much about the punch side--you can only make holes in paper so fast. But we could improve the reading side. We found an ultra-cheap optical tape reader (for the record, the ASR33's reader used little mechanical fingers to read the holes). To me, the new reader was a marvel of Yankee ingenuity. It was asynchronous.
You see, the paper-tape format didn't depend on tape transport speed. The data format was self-clocking, thanks to a row of little synch holes. Theoretically, you could read the tape as fast as you could get it through the reader, limited only by the I/O speeds of the Intellec parallel port.
So to read our paper tapes, we simply pulled them through the reader, much like a sailor hauling in his anchor rope. There was no takeup reel; the tape simply spilled out onto the floor, as it always had.
I went out and bought a used film editor, which had a crank-turned feed. We cobbled up a wider reel to hold the paper tape. From then on, reading a file into memory became, literally, a turn-the-crank process.