Tips on building & debugging embedded hardware & software designs: Part 2

Jack Ganssle, November 01, 2010

Shhhh! Listen to the hum. That’s the sound of the incessant information processing that subtly surrounds us, that keeps us warm, washes our clothes, cycles water to the lawn, and generally makes life a little more tolerable. It’s so quiet and keeps such a low profile that even embedded designers forget how much our lives are dominated by data processing.

Sure, we rail at the banks’ mainframes for messing up a credit report while the fridge kicks into auto-defrost and the microwave spits out another meal. The average house has some 40 to 50 microprocessors embedded in appliances. There’s neither central control nor networking: each quietly goes about its business, ably taking care of just one little function. This is distributed processing at its best.

Billions and billions of 4- to 16-bit micros find their way into our lives every year, yet mostly we hear of the few tens of millions that reside on our desktops.

Now, I’d never give up that zillion-MIP little beauty I’m hunched over at the moment. We all crave more horsepower to deal with Microsoft’s latest cycle-consuming application. I’m just getting tired of 32-bit hype for embedded applications. Perhaps that 747 display controller or laser printer needs the power. Surely, though, the vast majority of applications do not.

A 4-bit controller that formed the basis for a calculator started this industry, and in many ways we still use tiny processors in these minimal applications. That is as it should be: use appropriate technology for the job at hand.

Derivatives of some of the earliest embedded CPUs still dominate the market. Motorola’s 6805 is a scaled-down version of the 6800, a chip that competed with the 8080 back in the embedded Dark Ages.

The 8051 and its variants are based on the almost 20-year-old 8048. 8051s, in particular, have been the glue of this industry, corresponding to the analog world’s old 741 op amp or the 555 timer. You find them everywhere. Their price, availability, and on-board EPROM made them the natural choice for applications requiring anywhere from just a hint of computing power to fairly substantial controllers with limited user interfaces.

Now various vendors have migrated this architecture to the 16-bit world.

I can’t help but wonder if this makes sense, as scaling a CPU, while maintaining backward compatibility, drags lots of unpleasant baggage along. Applications written in assembly may benefit from the increased horsepower; those coded in C may find that changing processor families buys the most bang for the buck.

Microchip, Atmel, and others understand that the volume part of the embedded industry comes from tiny little CPUs scattered with reckless abandon into every corner of the world. These are cool parts! The smaller members offer a minimum amount of compute capability that is ideal for simple, cost-sensitive systems. Higher-end versions are well suited for more complicated control applications.

Designers seem to view these CPUs as something other than computers. “Oh, yeah, we tossed in a couple of PIC16s to handle the microswitches,” the engineer relates, as if the part were nothing more than a PAL. This is a bit different from the bloodied, battered look you’ll get from the haggard designer trying to ship a 68030-based controller. The micro-controller is easy to use simply because it is stuffed into easy applications.

L.A. Gear sells sneakers that blink an LED when you walk. A PIC16C5x powers these for months or years without any need to replace the battery. Scientists tag animals in the wild with expendable subcutaneous tracking devices powered by these parts. Beyond their usefulness in partitioning a design’s code across several cheap processors, there are other compelling reasons to reach for them.

A friend developing instruments based on a 32-bit CPU discovered that his PLDs don’t always properly recover from brown-out conditions. He stuffed a $2 controller on the board to properly sequence the PLD’s reset signals, ensuring recovery from low-voltage spikes. The part cost virtually nothing, required no more than a handful of lines of code, and occupied the board space of a small DIP. Though it may seem weird to use a full computer for this trivial function, it’s cheaper than a PAL.
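
To make the idea concrete, here is a minimal sketch of what such a supervisor might look like, assuming a hypothetical small micro with an ADC channel watching the supply rail and a port pin driving the PLD’s reset line; the helper routines, thresholds, and names are illustrative stand-ins, not details from the actual design.

/* Hypothetical brown-out supervisor: hold the PLD in reset until the
   supply has been stable for a full settling period, and reassert reset
   the moment the rail sags. read_supply_mv(), pld_reset_assert(), and
   pld_reset_release() stand in for whatever ADC and port access the
   chosen controller provides. */

#define SUPPLY_OK_MV  4750           /* minimum acceptable rail, millivolts */
#define SETTLE_LOOPS  1000           /* consecutive good readings required  */

unsigned int read_supply_mv(void);   /* assumed ADC helper   */
void pld_reset_assert(void);         /* assumed port helpers */
void pld_reset_release(void);

void main(void)
{
    unsigned int good = 0;

    pld_reset_assert();              /* start with the PLD held in reset */

    for (;;) {
        if (read_supply_mv() >= SUPPLY_OK_MV) {
            if (good < SETTLE_LOOPS)
                good++;
            else
                pld_reset_release(); /* rail stable: let the PLD run */
        } else {
            good = 0;                /* any dip restarts the count   */
            pld_reset_assert();      /* and re-sequences the reset   */
        }
    }
}

The whole job fits in a polling loop: hold reset until the rail has been good for a while, and yank it low again the instant the rail droops.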

Not that there’s anything wrong with PALs. Nothing is faster or better at dealing with complex combinatorial logic. Modern super-fast versions are cheap (we pay $12 in singles for a 7-nanosecond 22V10) and easy to use, and their reprogrammability is a great savior of designs that aren’t quite right. PALs, though, are terrible at handling anything other than simple sequential logic.

The limited number of registers and clocking options means you can’t use them for complicated decision making. PLDs are better, but when speed is not critical a computer chip might be the simplest way to go.
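
As a rough illustration of that trade-off, the sort of registered decision making that strains a 22V10 collapses into a few lines of firmware when timing is relaxed. The port-access routines and bit assignments below are hypothetical placeholders, not any particular part’s registers.

/* Sketch of glue logic done in firmware: a small state machine polled
   from the main loop. read_inputs() and write_outputs() are assumed
   wrappers around the controller's I/O ports. */

#define START_BIT    0x01            /* assumed input bit assignments */
#define TRIGGER_BIT  0x02
#define FIRE_OUT     0x01            /* assumed output bit assignment */

unsigned char read_inputs(void);     /* assumed port read  */
void write_outputs(unsigned char v); /* assumed port write */

enum state { IDLE, ARMED, FIRING };

void poll_logic(void)
{
    static enum state s = IDLE;
    unsigned char in = read_inputs();

    switch (s) {
    case IDLE:
        if (in & START_BIT)
            s = ARMED;
        break;
    case ARMED:
        if (in & TRIGGER_BIT)
            s = FIRING;
        else if (!(in & START_BIT))
            s = IDLE;
        break;
    case FIRING:
        if (!(in & TRIGGER_BIT))
            s = IDLE;
        break;
    }

    write_outputs((s == FIRING) ? FIRE_OUT : 0);
}

Call it from the main loop at whatever rate the application needs; nanoseconds of propagation delay become milliseconds, which is fine for switches and relays but hopeless for bus decoding.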

As the industry matures, lots of parts we depend on become obsolete. One acquaintance found that the UART his company depended on was no longer available, so he built a pin-compatible replacement around a PIC16C74, saving the company an expensive redesign.
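
The guts of such an emulation are straightforward. The sketch below shows just the transmit half of a bit-banged 8-N-1 serial output; the pin-write and bit-time delay routines are assumptions, and a real drop-in replacement would also need the receive side and the original UART’s register interface, none of which is shown here.

/* Minimal software-UART transmit sketch: 8 data bits, no parity, one
   stop bit, LSB first. tx_pin_write() and bit_delay() are assumed
   stand-ins for a port write and a one-bit-time delay (about 104
   microseconds at 9600 baud). */

void tx_pin_write(int level);        /* assumed: drive the TX pin  */
void bit_delay(void);                /* assumed: wait one bit time */

void uart_putc(unsigned char c)
{
    int i;

    tx_pin_write(0);                 /* start bit */
    bit_delay();

    for (i = 0; i < 8; i++) {        /* data bits, least significant first */
        tx_pin_write(c & 1);
        c >>= 1;
        bit_delay();
    }

    tx_pin_write(1);                 /* stop bit */
    bit_delay();
}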

In the good old days of microcomputing, hardware engineers also wrote and debugged all of the system’s code. Most systems were small enough that a single, knowledgeable designer could take the project from conception to final product. In the realm of small, tractable problems like those just described, this is still the case.

Nothing measures up to the pride of being solely responsible for a successful product; I can imagine how the designer’s eyes must light up when he sees legions of kids skipping down the sidewalk flashing their L.A. Gears at the crowds.

Part of the recent success of these parts comes from the aggressive use of Flash and One-Time Programmable (OTP) program memory. OTP memory is simply good old-fashioned EPROM, though the parts come without an erasure window. That small quartz opening typical of EPROMs and many PLDs is very expensive to manufacture.

You can program the memory on any conventional device programmer, but, since there’s no window, you can never erase it. When it’s time to change the code, you’ll toss the part out.

Intel sold OTP versions of their EPROMs many years ago, but they never caught on. A system that uses discrete memory devices—RAM, ROM, and the like—has intrinsically higher costs than one based on a microcontroller. In a system with $100 of parts, the extra dollar or two needed to use erasable EPROMs (which are very forgiving of mistakes) is small.

The dynamics are a bit different with a minimal system. If the entire computer is contained in a $2 part, adding a buck for a window is a huge cost hit. OTP starts to make quite a bit of sense, assuming your code will be stable. This is not to diminish Flash memory, which has all of the benefits of OTP, though sometimes with a bit more cost.

Using either technology, the code can be cast in concrete in small applications, since the entire program might require only tens to hundreds of statements. Though I have to plead guilty to one or two disasters where it seemed there were more bugs than lines of code, a program this small, once debugged and thoroughly tested, holds little chance of an obscure bug. The risk of going with OTP is pretty small.
