How I got embedded: a special connection

May 02, 2012

Jack Crenshaw

Take a trip through the life's work of one engineer who was involved in embedded systems from Day 1 (courtesy of Rambling Jack).

At this moment, you're looking at the last print issue of Embedded Systems Design magazine. The occasion is especially poignant for me, because so much--20+ years--of my career has been tangled up with the magazine in general, and the Programmer's Toolbox column in particular.

Some folks have been blessed (or cursed) by careers that are "linear." They start one job, stay with it, move up the ladder, and retire happy. Mine hasn't been that way. It's taken some sometimes-unexpected twists and turns--some more pleasant than others. Not all of those directions have had anything whatever to do with embedded systems. I thought, however, that you might enjoy hearing about the ones that did. But first, I need to set the stage with a little background.

Giant brains
I've been involved with computers for a long time. How long? Here's a hint: The textbook for my first computer science class, in 1956, was entitled Giant Brains, or Machines That Think. Back then, the notion of "micro brains" wasn't even a blip on anyone's radar. Computers were--and, we assumed, always would be--monster, power-hungry machines that filled large rooms with glass walls, raised floors, and over-engineered cooling systems.

The computer room of 1960 felt more like a cathedral than a place of science, and it had its share of mysterious icons, rituals, a small army of acolytes, and a hierarchy of priesthood, from floor supervisors to managers to that highest of all high priests, the systems administrator.

This was my world for the next decade or so. Not that I actually got to enter the computer room, of course. That privilege was reserved for the anointed. We mere engineers and scientists were not welcome. Heck, it was two years before I even saw the computer, and that was from the outside of those glass walls, looking in. My only contacts with it were the "keypunch girls" who punched my card decks, and the clerk behind the counter who accepted my jobs and returned their results. If, on rare occasions, I interacted with the priesthood, it was in hushed and reverent tones, and a proper air of respect. I resisted the urge to genuflect.

Now, when you consider that the purpose of the computer was, after all, to help us scientists and engineers solve our problems, it may seem hard to understand why we customers were treated so shabbily. The explanation has to do with money and bureaucracy.

In those days, computer time was expensive: $600 per hour. That's in 1960 dollars, when a Coke cost a nickel, gasoline 25 cents per gallon, and that $600 would pay my salary for six weeks. So the priesthood tended to guard the computer jealously. To them, we were not so much valued customers, as necessary evils to be tolerated, however grudgingly.

The systems administrator was not judged on how many problems were solved, but on his ability to keep the computer backlog down. Backlog, as in the number of jobs waiting in the queue. The easiest way to keep the backlog down was simply to deny access to the job queue, or to abort jobs on the flimsiest of excuses. In one shop, I had jobs rejected because the card deck had too many rubber bands around it. Other times, too few. One computer group actually issued written guidelines for how many rubber bands should be used, per inch. The only problem was that the computer operators didn't follow their own guidelines. So a deck they returned to me was likely to be rejected on the next turnaround.

Despite the oppressive, Big Brother environment, we got exciting things done. We did, after all, help Neil and Buzz walk on the Moon. What's more, it was in this environment that I learned my craft and developed techniques that I still use today.

I did, however, take one thing away from the experience: A deep and abiding hatred of systems administrators.

First personal computers
During those oppressive years, I found a glimmer of hope and a glimpse into the future. I discovered that not all computers had to be large, nor limited in access. Around 1961, I gained access to what we'd now call a personal computer.

The Royal McBee LGP-30 was about the size of a desk. A vacuum-tube machine, it had a grand total of 15 flip-flops. Its only memory was a 4k magnetic drum. All the data--even the machine registers--resided there. Bits marched to/from the drum in serial fashion. "Bulk storage" was rolls of paper tape.

As primitive as the LGP-30 was, it offered important advantages over the Giant Brains. First, I didn't have to beg for permission to use it. No accountant or systems administrator stood behind me, tapping his foot. And though the computer was shared among our team, I had virtually unlimited access to it. Saving computer clock cycles was no longer an issue.

Most importantly, I could use the LGP-30 interactively. I'd sit down at its console, a modified electric typewriter, and type. Answers came back in seconds, on the same page. I didn't have to learn and use machine language; the LGP-30 sported a primitive interpreter. If you think of it as a 1960 version of an Apple, with paper tape instead of tape cassettes, you won't be far from wrong.

Using that computer, I formulated a philosophy that I've held ever since: Having virtually unlimited and interactive access to a small computer is infinitely preferable to needing an act of Congress, and wading through a hierarchy of bureaucrats, to get batch-mode access to a big one. Limited though a small computer might be, a "turnaround" time in seconds trumps one in hours or days.

I got a lot of problems solved with that old computer, but my main take-away was a dream. Someday, I vowed, I'd have a computer of my very own. I wouldn't have to justify my use of it to anyone. Its only job would be to sit on my desk, waiting for me to give it something to do. And if I chose to use it frivolously, inefficiently, or not at all--well, that would be entirely up to me.

Changes in the winds
Fast forward to 1970. I was still programming an unseen mainframe in FORTRAN. That particular mainframe wasn't even in our building; it belonged to NASA. Our only contact with it was a courier, who made twice-daily runs to pick up our card decks and return printouts. Turnaround time was 24 hours. To keep the pump primed, each time we got a run back, we'd pore through the printout with a red pen in hand, marking it up for the next cycle.

But big changes were on the horizon: I'd been reading about these newfangled gadgets called minicomputers. Though far more capable than my old "desk" computer, a typical minicomputer was about the same size and price, and had a similar, interactive user interface: A Teletype console with paper tape I/O.

Most exciting, people began connecting minicomputers to the real world.

In truth, we could have done the same thing with a big mainframe. All computers have I/O ports and support interrupts. With enough specialized (and very expensive) interface devices like analog-to-digital (A/D) and digital-to-analog (D/A) converters, a computer could interact with its surroundings. But unless you had a budget equal to NASA's, you weren't likely to use a mainframe that way.

With minicomputers, cost wasn't such an issue. What's more, minicomputers tended to have more uncommitted I/O ports and interrupts, and the cost of A/D and D/A converters was plummeting. Suddenly, all over the world, people were hooking their minicomputers up to all manner of external machinery, including factory assembly lines and research lab equipment. The era of real-time and embedded systems had arrived.

One day, I was walking down the halls at my job, passing room after room of guys poring over fan-fold printouts, marking them up with red pens. I had the thought, "These guys--indeed, ALL us guys--will soon be as obsolete as dodo birds." I resolved not to let that happen to me. I resolved to get involved with minicomputers and real-time systems.

Chasing the dream
In a nice, orderly, and linear world, I would have followed up that resolution. But my world has been anything but linear. To explain, I have to rewind to the mid-1960s. I was back at college, chasing another degree. For both my teaching duties and NASA research, I found myself back in the world of FORTRAN and mainframes. But at home, in my "spare time," my thoughts turned back to my dream: A computer of my very own. During those years, I was hardly alone. Some enterprising souls actually managed to realize their dream, assembling their own minis from surplus parts. Others settled for slices of a time-shared mini, like the BASIC system developed at Dartmouth. My own thoughts, however, took a more primitive turn: I wanted to build a homebrew computer from scratch.

Two unrelated events had steered me in that direction. First, in a GE semiconductor manual, I found a very nice tutorial on Boolean logic. Fascinated, I learned all about 1's and 0's, ANDs and ORs, exclusive ORs, and De Morgan's theorem. I learned about flip-flops. I learned about circuit minimization and Karnaugh maps. Just for fun, I would pick some logic problem (Example: build me a circuit to drive a seven-segment display) and work out a gate-level solution.
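
If you've never tried that particular exercise, here's the truth table expressed in modern C--a lookup table mapping each BCD digit to its segment pattern. (The names and encoding here are mine, for illustration; the real exercise, of course, was to minimize each output bit with Karnaugh maps and wire it up from discrete gates.)

    /* Seven-segment decoder as a truth table. Bit 0 = segment a, ...,
       bit 6 = segment g. The gate-level exercise: minimize each output
       bit as a Boolean function of the four BCD input bits. */
    static const unsigned char seg7[10] = {
        0x3F,  /* 0: a b c d e f   */
        0x06,  /* 1:   b c         */
        0x5B,  /* 2: a b   d e   g */
        0x4F,  /* 3: a b c d     g */
        0x66,  /* 4:   b c     f g */
        0x6D,  /* 5: a   c d   f g */
        0x7D,  /* 6: a   c d e f g */
        0x07,  /* 7: a b c         */
        0x7F,  /* 8: a b c d e f g */
        0x6F   /* 9: a b c d   f g */
    };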

Second, Fairchild introduced a line of low-cost integrated circuit (IC) logic devices. Even a grad student could afford a dual NOR gate for 80 cents or a J-K flip-flop for $1.50. I bought a bunch of them, and spent many glorious hours making lights blink, sensing pushbutton inputs, and mechanizing some of those logic solutions. I built a couple of useful gadgets: a lap timer for a racetrack and a counter-timer-frequency meter for myself. For the first time, my dream of a homebrew computer seemed within reach.

As my design evolved, I took a page from that old LGP-30, and used serial logic, with shift-register ICs replacing its magnetic drum memory. My next problem was I/O. For output, I wanted to display decimal digits. After much trial and error, I settled on the idea of displaying seven-segment digits, drawn cursively on an x-y-z oscilloscope. I worked out the waveforms I'd need, and etched and soldered circuit boards to generate them. I had gotten as far as displaying a single digit, when my plan was severely rerouted by advancing technology.

The microprocessor
When Intel's 4004 burst on the scene, it not only changed my plans, it changed the world forever. I asked myself: Why should I bother building a homebrew computer from discrete gates, when I could get the whole CPU on a single chip? Interestingly enough, this vision wasn't shared by many respected computer gurus and manufacturers. Even Intel themselves, when they introduced the 8080, wrote an article showing how it could be used to control a traffic light. That's one traffic light. Three sets of light bulbs, and four pressure sensors. That kind of use seemed to be the limit of their imaginations.

Even years later, respected computer gurus were saying things like:

  • "A microprocessor will never be used as a general-purpose computer."
  • "A high-order language compiler will never run in a microprocessor."
  • "Why would anyone want more than 20k of RAM?"

But for those pioneers who had been building homebrew computers out of surplus core memories, discrete logic, and duct tape, the intellectual leap from controller chip to general-purpose computer was obvious. One thing's for sure: entrepreneurs were soon not just building computers, but marketing kits; three or four of them used the Intel 8008. I started writing software for the 8008, including a complete floating-point package.

Personal computers
If the first microprocessors changed the world, the next event shook it to its core, and created an industry of unprecedented scope. Only a month or two after Intel released the 8080, Ed Roberts, owner of the electronics firm MITS, announced the 8080-based Altair kit. This was no bag of parts or a set of etched circuit boards; the Altair was a real computer, with a rugged power supply, bus structure, and a beautiful case.

What's more, it only cost $395--just $45 more than the CPU chip alone. The day I saw the ad in Popular Electronics, I bought one.

Finally, real time
The next event didn't change the world at all, but it sure changed mine. Not wanting to become a dodo bird, I'd been looking for a chance to get into a micro-based business. At a Huntsville electronics trade show, I met Paul Bloom, president of Comp-Sultants. At his booth, Paul was displaying the components for his own 4040-based computer kit, the Micro 440. That was enough for me. Hands were shaken, some money changed hands, and I ended up as Comp-Sultants' software guy and its manager. Which translates as: among other duties, I got to handle phone calls from irate customers, deal with door-to-door salesmen and beggars, keep the toilet working, and sweep the chad from the floor. In my "spare" time, I had to direct our four technicians and develop software.

Paul and I had our differences, mostly about money and "vision," but you have to give him this: He was a true visionary. Where I was still stuck in homebrew computer mode, Paul saw the commercial value of the microprocessor in real-time process controllers. Before I arrived on the scene, he already had two products under development: A controller for a cold-forge machine, and another for a plastic injection-molding machine.

To say that our "laboratory" was primitive would be far too kind. Paul's designs used the Intel 4040. His "development system" consisted of a 4004-based single-board computer, a primitive ROM-based assembler, and a Teletype ASR-33. Intel had upgraded the assembler to support the 4040. Our test equipment consisted of an equally primitive bus monitor, a multimeter, and an oscilloscope.

To hold our software, we used UV-erasable EPROMs. But we had no EPROM eraser. Instead, we just put the EPROMs outside, on the hood of someone's car, and let the Sun do the job. Sometimes, the software acted strangely. Do you think maybe a cloud passed over the Sun?

When I arrived on the scene, we bought the much more capable Intel Intellec-8, which improved our capabilities big time. The Intellec-8 included both a better assembler and a PROM reader-burner. That's the way we burned PROMs for the cold-forge machine. But more importantly, we could now develop software for the 8080.

Paul sold a contract for software to control a satellite-tracking antenna. It was to include a two-state Kalman filter (KF)--surely one of the first KFs in a microprocessor. I wrote the software for it.

A KF is best implemented in floating point. So I ported my 8008 floating-point package to the 8080. It was on this project that I learned the value of RAM bytes and clock cycles. For each assembly-language subroutine, I counted the clock cycles and bytes used. The end result was a package that was smaller and more efficient than both Intel's own package and Microsoft's (yes, Bill, I disassembled your code). I also had to program the fundamental functions: square root, sine, cosine, and arctangent. My algorithms eventually found their way, first into the pages of ESD, and then into my book (Math Toolkit for Real-Time Programming).
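
The 8080 originals are long gone, but I can at least show the flavor of the square root. Here's the classic Newton-Raphson iteration in minimal C; take it as a sketch of the algorithm, not my code--the real version worked in my own floating-point format and counted every byte and cycle.

    /* Newton-Raphson square root: x' = (x + a/x) / 2.
       Starting from a guess at or above sqrt(a), the iterates decrease
       monotonically until they converge. */
    double nr_sqrt(double a) {
        if (a <= 0.0)
            return 0.0;                     /* domain guard */
        double x = (a > 1.0) ? a : 1.0;     /* guess >= sqrt(a) */
        for (int i = 0; i < 60; i++) {
            double next = 0.5 * (x + a / x);
            if (next >= x)                  /* no further improvement */
                return x;
            x = next;
        }
        return x;
    }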

Now, you have to ask: If we were developing microprocessor-based, real-time systems in 1975, how come we didn't become rich and famous billionaires? Answer: We tended to snatch defeat from the jaws of victory. My Kalman filter worked like a champ, but I can't say as much for our other products.

As it turned out, the Micro 440 kit that Paul had at that trade show was a myth. He had some of the real parts, but most were just random circuit boards, put there for show. We did eventually complete the Micro 440, and even sold a few, mostly to universities. But we had made a serious marketing error. We thought that the public wanted lower cost, and the Micro 440 was $100 cheaper than the Altair. In reality, our customers wanted a horsepower race: The more RAM, longer words, and faster clock speed, the better. The hobbyists saw our ads, yawned and moved on.

Paul made another egregious error. Before I came along, he needed a programmer for his cold forge machine. Thinking to get one on the cheap, he went to the computer science department at the University of Alabama, Huntsville, and asked them for their smartest senior. Tim may indeed have known computer science, but he turned out to be the most inept programmer I've ever known (and that's saying a lot). He wrote impossibly obtuse and lengthy code, filled with flags and branches, for even the simplest algorithms. We had to scrap his code for a TTY interface because it filled the entire memory.

Worse yet, his only way of testing the software was to go to Nashville, 100+ miles away, and plug the EPROMs into the machine. It's called the Big Bang theory of testing. And when you're talking of a big machine with 5000-psi hydraulics, you're talking about a BIG bang! Week after week, the software misbehaved, destroying the machine and sending technicians diving behind crates. In the end, I fired Tim, rewrote the software myself, and delivered a working product. But not before the due date on the contract had expired. The customer took our system, said, "Thank you very much," and walked away.

The future of the company now depended on my antenna controller. We delivered that one on time, and it exceeded its performance spec. In fact, it may have been a little too good. We had been hoping that, if we did a good job, we'd get a follow-on contract to refine and extend the code. Turns out, the customer was happy with version 1.0, so that was that.

Gyros, ships, missiles, torpedoes
After Comp-Sultants, I wandered into other jobs, including teaching computer science at a local university, writing software requirements for NASA projects, and even a stint as a chief engineer for Heathkit. My next job with real-time systems involved developing software for the 16-bit Zilog Z8000. The company made navigation systems using ring-laser gyros. One of our top scientists had developed an algorithm that used a computer to predict and compensate for gyro errors. It promised to improve performance by an order of magnitude. My job was to turn the algorithm into code.

On this job, my biggest problem was clock cycles. To implement the algorithm, I had to make the software generate those same fundamental functions--square root, sine, cosine, and arctangent--in a millisecond. That's 1000 clock cycles. Making this happen was without a doubt the biggest challenge I've faced. I didn't even have time to store and fetch data to/from RAM. As much as possible, I had to keep intermediate results in CPU registers. To optimize the register usage, I used graph-coloring algorithms, as an optimizing compiler does.
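
To show what a budget like that forces on you: the standard trick is a short polynomial over a reduced range, evaluated in Horner form so each term costs one multiply and one add, with every intermediate result living in a register. Here's a C sketch of the idea; my real code was fixed-point Z8000 assembler, with minimax coefficients in place of these textbook Taylor ones.

    /* sin(x) for |x| <= pi/2, as a short odd polynomial in Horner form:
       five multiplies, three adds, no loops, no table lookups. These
       truncated-Taylor coefficients are good to roughly four decimal
       digits; minimax coefficients of the same degree buy another digit
       or so for free. */
    double fast_sin(double x) {
        double z = x * x;
        return x * (1.0 + z * (-1.0 / 6.0
                   + z * ( 1.0 / 120.0
                   + z * (-1.0 / 5040.0))));
    }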

I had another idea that I'm kinda proud of. The issue was: how do you test an algorithm to see if it's working right? You can single-step through the code, but once the gyro is connected, you can't stop the CPU. To solve the problem, I hooked up a second Z8000, and let it share memory with the unit under test. The unit under test couldn't spare the time to peek and poke all of its CPU registers, but it could afford to do about four of them. So I added software that would peek at the registers, grab four of them, and then--if asked--poke new values. It worked like a charm.
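
In modern C, the mailbox idea looks something like this. Every name and address below is invented for illustration--the real thing was Z8000 assembler poking actual CPU registers--but the shape is the same: once per pass, the target publishes a few watched values and applies any pokes the monitor has requested.

    /* Hypothetical shared-memory mailbox between the unit under test
       and the monitor CPU. */
    #define N_SLOTS 4

    typedef struct {
        volatile unsigned short peek[N_SLOTS]; /* target publishes here */
        volatile unsigned short poke[N_SLOTS]; /* monitor's new values  */
        volatile unsigned char  poke_req;      /* bit n: apply poke[n]  */
    } mailbox_t;

    static mailbox_t *const mbox = (mailbox_t *)0x8000; /* shared RAM */

    /* Called once per pass of the target's main loop--cheap enough not
       to disturb the real-time timing. */
    void debug_service(unsigned short *watched[N_SLOTS]) {
        for (int i = 0; i < N_SLOTS; i++) {
            mbox->peek[i] = *watched[i];          /* peek */
            if (mbox->poke_req & (1u << i)) {     /* poke, if asked */
                *watched[i] = mbox->poke[i];
                mbox->poke_req &= (unsigned char)~(1u << i);
            }
        }
    }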

I had one more job involving the Z8000. We had an existing ship navigation system. Our company had sold the Navy the idea of using it in an old torpedo, basically turning the torpedo into a self-locating mine. The catch was, we had only 2 weeks to deliver a prototype.

Testing a navigation system is not usually a simple thing. You can spend two weeks just calibrating the sensors. We couldn't do that, but we could deliver a working, if not perfect, system. Instead of a calibrated, computer-controlled test rig, we simply put the box on a desk, and verified that it could find "down." After we'd done that in a few orientations, we rotated the box back and forth by hand, and verified that it knew "North."

The next test was way cool. The torpedo had a tachometer on the propeller, which it used to "trim" the position calculations. We couldn't give the nav system a propeller, but we did the next best thing; we hooked up a square-wave generator to the prop input, so the system would think it was moving at constant speed. Then we put the system, a battery, and a terminal on an equipment cart. One of our techs had been practicing, pushing the cart around the parking lot at a constant pace. When we had him push our system around a big loop and back to the starting point, we closed the loop within 15 feet. Not bad, for a calibrated technician.

I did two more embedded systems for that company: A large, ground-to-ground missile and an experimental ship navigator. The ship navigator required me to implement an 18-state Kalman filter. Both systems were successful. The missile is currently deployed, and the ship navigator achieved the highest accuracy recorded up to that time.

My next real-time job was among the most fun, mainly because I got to recommend all the parts of our development system and tools. It was for yet another satellite tracking antenna, only this one was in an airplane, and therefore bopping around instead of bolted to the ground. We chose the Motorola 68332 chip. We used the Intermetrics C compiler, which included a very nice source-level debugger. For once, I didn't need floating-point software, but I still needed--guess what--functions for square root, sine, cosine, and arctangent. We also gave the customer an added feature. Instead of a readout showing hex numbers in green, uppercase characters, we gave him a Windows-based interface, complete with multicolored graphs, simulated compass needles, spin-dials, and point-and-click inputs. Nice.

I'm particularly proud of the operating system I wrote for this job. First, it used all the real-time features of the 68332--watchdog timer, counter-timer, etc.--to the fullest. More importantly, it was unusual in that the interrupt handler was itself reentrant. That is, the system could tolerate one or more new interrupts coming in while it was still processing the last one. The only requirement was that the average time required to service an interrupt had to be less than the average time between interrupts.
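
In outline, the trick looks like this--a minimal C sketch, assuming the hardware masks interrupts on entry and that enable/disable primitives exist. (The real version was tailored to the 68332's interrupt model.)

    /* Reentrant tick handler: new interrupts may arrive while the body
       is still working. A counter tracks ticks accepted but not yet
       serviced; only the outermost invocation runs the service loop. */
    extern void enable_interrupts(void);   /* assumed platform hooks */
    extern void disable_interrupts(void);
    extern void service_tick(void);        /* the real work */

    static volatile int pending;
    static volatile int active;

    void tick_isr(void) {       /* entered with interrupts masked */
        pending++;
        if (active)
            return;             /* the outer instance will service it */
        active = 1;
        do {
            enable_interrupts();    /* allow nesting while we work */
            service_tick();
            disable_interrupts();   /* atomic check for stragglers */
        } while (--pending > 0);
        active = 0;             /* interrupt return restores the mask */
    }

You can see the stability condition right there in the loop: pending drains only if service_tick runs, on average, faster than new interrupts arrive.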

We delivered this system on time and on budget, and it met its performance specs. We also did several other jobs for this company, all successful. As a "reward," the customer, thinking to save some money, cut me out of the next contract and hired my ex-partner. My only satisfaction out of the deal came from learning that, because he didn't understand the OS, his job went two years over schedule, way over cost, and fell short on performance. The company is no longer in business. Payback is sweet, but I'd much have preferred that both our companies had prospered.

The medical business
I have one more job to tell you about. A medical electronics firm wanted to replace their existing patient monitor, which used something like eight Z80s, with a new one using a single Intel 80286. Our job was to port the code from Z80 assembler to C. Now, as you might guess, medical electronics is special, because if it fails, it has the potential to kill people. The FDA has very strict rules for certifying a given system as safe. To create the new patient monitor, we had to assure the FDA that it would perform exactly the same functions, using the same algorithms, as the old one. That's not easy when you're changing both the CPU(s) and the programming language. To make sure that happened, I first had to understand, in precise detail, the Z80 code in the old one. And that was the rub, because while the original programmer was superb, he wasn't big on comments. As in, there were none. So before I could write the first line of C, I had to psychoanalyze the Z80 code, commenting and studying it until I knew it as well as I know my own memories.

To get that job done, I used every programming trick I'd ever learned, and then invented some more. It was a hard job, but also exciting and instructive. I extended the concept of the dataflow diagram to include real-time interrupts and synchronous, asynchronous, and background tasks.

My design evolved into a set of nested, hierarchical state machines. It was very slick, if I do say so myself. In any medical application (and most other real-time applications), it's important to quickly respond to, and recover from, errors. In my design, each state machine returned an enumerated state identifier, some of which were error conditions. Looking back, I see that the design amounted to a sort of do-it-yourself exception mechanism. It worked just fine, and the software turned out to be both robust and error-free.
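
In skeleton form, with everything stripped away but the pattern (and all names invented), the idea looks like this:

    /* Nested state machines returning status enums: error values bubble
       up the hierarchy until some level knows how to recover. */
    typedef enum { ST_OK, ST_BUSY, ST_SENSOR_ERR, ST_TIMEOUT_ERR } status_t;

    static status_t sensor_fsm(void) {
        /* innermost machine; returns ST_SENSOR_ERR on a bad reading */
        return ST_OK;
    }

    static status_t channel_fsm(void) {
        status_t s = sensor_fsm();      /* run the nested machine */
        if (s == ST_SENSOR_ERR) {
            /* local recovery: re-initialize the sensor, try again */
            return ST_BUSY;
        }
        return s;                       /* pass anything else upward */
    }

    static status_t monitor_fsm(void) {
        status_t s = channel_fsm();
        if (s == ST_TIMEOUT_ERR) {
            /* top-level recovery: raise the alarm, restart the channel */
        }
        return s;
    }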

See Jack write
I've always enjoyed writing. My first creation, as I recall, was a comic book when I was around 10. During my aerospace days, I wrote many technical reports, white papers, and some journal articles. The biggie was my Ph.D. dissertation, which ran to 140 pages. Later I was writing software requirements specs, test plans, etc., for NASA and DOD customers.

When I got involved with microcomputers, I wrote a few articles for magazines like Byte, Kilobaud, and ProFiles, the house organ for the Kaypro computer. In 1988, I started writing a tutorial on compiler construction. It's still on the web here at www.freetechbooks.com/let-s-build-a-compiler-t56.html.

JD Hildebrand, then editor of Computer Language magazine, saw the tutorial and asked me to write an article on the topic. I did, and presented a paper at the next Computer Language Conference. That article and paper started my long relationship with Computer Language and its sister publication, Embedded Systems Programming. Computer Language also maintained a forum (CLMFOR) on CompuServe, and I spent many hours chatting about computers, and almost any other topic, with its denizens.

In 1992, I was laid off from my day job. It was probably the shortest unemployment period on record. That night, I got on CLMFOR and said, "Guess what? I've been laid off. Anyone want me to write an article or two?" JD responded immediately, with "I can take an article every two months." Tyler Sperry, then editor of Embedded Systems Programming, said, "I'll take one a month."

And that was that. I didn't stay "unemployed" forever. I worked at other companies, and ran my own company for a time. But my writing for Embedded Systems Programming started then, and has never stopped.

And, as Paul Harvey used to say, "Now you know the rest of the story."

Jack Crenshaw is a systems engineer and the author of Math Toolkit for Real-Time Programming. He holds a PhD in physics from Auburn University. E-mail him at jcrens@earthlink.net.

This content is provided courtesy of Embedded.com and Embedded Systems Design magazine.
See more content from Embedded Systems Design and Embedded Systems Programming magazines in the magazine archive.
This material was first printed in May 2012 Embedded Systems Design magazine.
Copyright © 2012 UBM--All rights reserved.
