16-Bit: The Good, The Bad, Your Options (Special Report) - Embedded.com



Pressured from below and above, it would seem that the 16-bit market has little new to offer. A creative redefinition of “16-bit,” however, opens new avenues.


by Rick Grehan

16-Bit MCU Processor Table

These days, the 16-bit processor is caught in a squeeze play. Pressure is being applied from below and above.

Consider first the pressure from below. At one time, the 16-bit processor had a clear performance and memory advantage over eight-bit CPUs. The wider data path meant that, cycle for cycle, a 16-bit machine could do more work than an eight-bit machine. But now, with processors like the 100MHz eight-bit CPU from Scenix, the speed advantage is largely gone. (Sure, you need a faster clock, but who cares?)

Then there was the address space advantage. Many 16-bit systems offered address ranges beyond the 64K of the eight-bit world—again, no longer. Many 8051 derivatives now offer bank-switching that allows megabytes of external memory. Compilers from companies such as Keil handle the paging scheme, relieving the programmer of the low-level details of bank-switching.
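The paging idea is easy to sketch in C. The bank-select latch, window size, and four-bank layout below are all hypothetical illustrations, not any real board's memory map; the point is simply that code (usually generated by the compiler) writes a bank number first, then addresses within a fixed window.

```c
#include <stdint.h>

/* Minimal bank-switching sketch. BANK_SIZE and the bank_select
   latch are assumptions for illustration; a real 8051 board defines
   its own latch register and window size. */
#define BANK_SIZE 0x4000u                 /* hypothetical 16K window */

static uint8_t bank_select;               /* stands in for the latch */
static uint8_t ext_mem[4][BANK_SIZE];     /* 4 banks = 64K of "external" RAM */

/* Reach a large linear address through the small fixed window:
   select the bank, then access the offset within it. */
uint8_t read_paged(uint32_t linear_addr)
{
    bank_select = (uint8_t)(linear_addr / BANK_SIZE);   /* pick the bank  */
    uint16_t offset = (uint16_t)(linear_addr % BANK_SIZE);
    return ext_mem[bank_select][offset];                /* window access  */
}
```

A banking-aware compiler such as Keil's emits the equivalent of this select-then-access sequence around every far access, which is exactly the low-level detail the programmer is spared.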

From above, the horde of 32-bit processors is too numerous to count: MIPS, PowerPC, 68K, Pentium, M-Core, and so on. New variants of 32-bit processors appear almost daily, and they have made serious inroads into embedded devices ranging from game machines to industrial control. The 32-bit processors have speed, large address spaces, and—most importantly—a healthy infrastructure of development software support. Perhaps the only thing saving the 16-bit market from being overwhelmed by the 32-bit processors is price, and even that is a shifting frontier.

Still, the 16-bit market is alive, well, and growing. And if we’re willing to stretch the definition of “16-bit processor” a little, several new choices become available in a market that otherwise seems empty of new architectures.

What is 16-bit?

Anyone writing an article on processors is forced to address the disputable matter of definitions. What exactly is a 16-bit microprocessor?

No single answer will suffice. Microprocessor manufacturers will not permit one. Many eight-bit processors allow two eight-bit registers to be ganged together to create a single 16-bit register (this was true as far back as the 8080), and device manufacturers will argue that such a capability gives that processor the right to claim 16-bitness. Similarly, some 32-bit processors attach to memory via a 16-bit data path (the M-Core, for example); their manufacturers will point to the processor’s resulting ability to use less-expensive 16-bit memory. Therefore, the processor offers a 32-bit solution in what would otherwise be a 16-bit space.

Nonetheless, pursuing the original question does lead to two trends worthy of attention. The first has been around since 1997 (on paper, at least), and involves what has been described as “reduced” instruction set microprocessors. Two specific examples come instantly to mind: the MIPS16 architecture (called TinyRISC by LSI) and the ARM Thumb.

RISC instructions are typically a fixed bit-width (32 bits for both ARM and MIPS). The MIPS16 and Thumb instruction sets are, more precisely, 16-bit instruction subsets: each maps onto a functioning subset of the processor’s full 32-bit-wide instruction set. Inside each processor (the Thumb is a core) is a full 32-bit RISC engine, and each 16-bit instruction that arrives is expanded to a full 32-bit instruction.

Understandably, this required some intelligent corner-cutting. For example, the standard MIPS architecture has 32 general purpose registers; MIPS16 can only address eight. Also, the MIPS16 instruction set includes no provisions for accessing the floating-point coprocessor. Both Thumb and MIPS16 have provisions for switching between 16-bit and 32-bit mode so that computationally intense subroutines can be executed at full speed.

Nevertheless, these “reduced RISC” processors offer yet another solution for embedded developers. If you need the computing capacity of a 32-bit processor but your memory constraints seem to beg for a 16-bit machine, perhaps a MIPS16 or a Thumb processor is in your future. (As mentioned earlier, M-Core should be considered in this group as well).

The second trend is analogous to a mirror. Just as many MCUs have been made more “DSP-like” with the addition of signal-processing-friendly instructions (for example, the single-cycle multiply-and-accumulate instruction, found on processors such as Hitachi’s H8 and Motorola’s HC16), DSPs are becoming more MCU-like. Sixteen-bit DSPs such as Motorola’s DSP568xx series are actually touted as having MCU-type instruction set architectures. (Motorola’s documentation for its DSP568xx series describes its instruction set as being “highly efficient for C compilers.”) We’ll look at this trend in more detail later.

Why 16 bits?

With such an abundance of competing eight- and 32-bit processors, what would compel a developer to select a 16-bit MCU rather than an eight-bit or 32-bit processor?

In the past, the 16-bit processors’ higher clock speeds and wider data paths gave a performance edge over eight-bit devices. That edge was dulled by the appearance of processors such as the Scenix 100MHz eight-bit CPU. Announced in 1998, the SC18/28AC100 boasts a four-stage pipeline that allows it to run at 100 MIPS. The Scenix processor is particularly attractive for real-time processing because its interrupt scheme is deterministic (the interrupt response time doesn’t depend on which instruction is executing when the interrupt arrives).

Another past advantage of 16-bit systems over eight-bit processors was the availability of development tools. Notice that I said “past.” Powerful cross-development environments are available for virtually every well-known eight-bit processor. You can even get C compilers for the “memory-challenged” Microchip PIC series of eight-bit processors.

It’s likely that the choice of a 16-bit part is dictated primarily by the most prevalent data types in the target application. As Tom Cantrell, West Coast editor for Circuit Cellar Ink, puts it, “If the main data type predominating your system is greater than eight bits, that’s an argument for a 16-bit machine.” Suppose, for example, your application is collecting data from a 12-bit ADC, or perhaps sending digital data to a DAC of similar width. With an eight-bit processor, your application would spend some portion of its time assembling bytes into words, or disassembling words into bytes. Use a 16-bit processor instead, and the better fit of prevalent data type to the processor’s natural word length makes for more efficient code (and, if you’re programming in assembly language, code that’s easier to read).
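The byte-assembly overhead is easy to see in C. The “register” variables below are hypothetical stand-ins, not any real part’s register map; the point is the shift-and-OR glue code that an eight-bit data path forces on a 12-bit sample, versus the single native-width read a 16-bit CPU gets for free.

```c
#include <stdint.h>

/* Hypothetical ADC result registers for illustration only. */
static uint8_t  adc_hi   = 0x0A;     /* upper 4 bits of a 12-bit sample */
static uint8_t  adc_lo   = 0xBC;     /* lower 8 bits                    */
static uint16_t adc_word = 0x0ABC;   /* same sample as one 16-bit read  */

/* Eight-bit path: two bus reads plus shift-and-OR glue code. */
uint16_t read_adc_8bit(void)
{
    uint8_t hi = adc_hi;                       /* first bus cycle  */
    uint8_t lo = adc_lo;                       /* second bus cycle */
    return (uint16_t)(((hi & 0x0F) << 8) | lo);
}

/* Sixteen-bit path: the sample fits the natural word, one read. */
uint16_t read_adc_16bit(void)
{
    return adc_word & 0x0FFF;                  /* mask to 12 bits */
}
```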

This approach is sometimes called “right-sizing,” which combines application needs and system cost. For example, whereas a 32-bit processor can cost anywhere between $10 and $300, a 16-bit part can be as low as $1 to $5.

Application complexity is another factor in processor selection. “Simple” applications, suitable for eight-bit processors, are typically content to execute “close to the metal,” without the run-time assistance of either a kernel or OS. Those applications that require an OS are often better suited to 16-bit processors, where the larger register sets provide a better environment for an operating system.

Another reason for selecting a 16-bit processor is a term you don’t often hear in the microcomputer world: “legacy code.” The software grist for many an embedded project is code inherited from previous projects. In addition, some projects—particularly those in the defense industry—can take years to move from the analysis phase to the implementation phase. In the former case, no one wants to jettison code that works; in the latter case, no one wants to change platforms while a project is still on the launchpad.

For example, VAutomation Inc. sells synthesizable CPU cores. Two of their more popular cores are the V8086 and V186. The V8086 is compatible with the 8086 processor, while the V186 is a “pumped up” equivalent of the Intel 80186 (the V186 can address up to 16MB of memory, 16 times the address space of the original 80186). According to Eric Ryherd, president of VAutomation, “Legacy applications and familiarity with the x86 architecture are the primary selling points of our 16-bit processors.” Simply put, many developers don’t choose a 16-bit processor—they cling to it.

Finally, developers often select a 16-bit device because an application currently running on an eight-bit part has outgrown the capabilities of the byte-wide CPU. Many manufacturers offer upgrade paths from eight-bit devices to 16-bit bigger brothers. Hitachi’s H8 series has both eight- and 16-bit members; the 16-bit versions look for all the world like the eight-bit devices, except with eight additional “pure” 16-bit registers and a program counter extended from 16 bits to 23 bits. Motorola’s HC11 processor (popular in amateur robotics) has two 16-bit larger siblings, the HC12 and HC16. And 8051 enthusiasts have the 16-bit XA processor from Philips to look forward to.

The DSP effect

There has long been a noticeable trend in the microcontroller industry toward acquiring more and more DSP-like features, and the trend continues. Seemingly every other 16-bit-or-better processor released in the past half decade boasts a single-cycle multiply-and-accumulate instruction, and a moderate amount of Web searching will turn up application notes describing filter algorithms for most popular MCUs. Even eight-bit devices are getting into the act: the Scenix Web site describes signal processing algorithms for the SC18/28AC100 processor, and Microchip’s application note AN616, “Digital Signal Processing with the PIC16C74,” does the same for the PIC.
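The multiply-and-accumulate step those application notes revolve around can be sketched in a few lines of C. The coefficients in the test below are arbitrary illustrative values, not a designed filter; the point is that each loop iteration is one MAC, which a DSP or MAC-equipped MCU retires in a single cycle while a plain eight-bit CPU pays for a multi-cycle software multiply plus a wide add.

```c
#include <stdint.h>

/* Inner loop of an FIR filter: a running sum of products.
   On a MAC-equipped part the body of the loop compiles to a
   single multiply-and-accumulate instruction. */
int32_t fir(const int16_t *x, const int16_t *h, int n)
{
    int32_t acc = 0;                      /* wide accumulator        */
    for (int i = 0; i < n; i++)
        acc += (int32_t)x[i] * h[i];      /* the MAC step            */
    return acc;
}
```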

Recently, however, that trend has been met by a parallel force: DSPs taking on more and more MCU-like features. A good example would be the Motorola DSP568xx digital signal processor’s C compiler-friendly instruction set. You’ll find a more blatant example if you look back at TI’s announcement of its TMS320C27x DSP in the spring of ’98, an “architecture that renders MCUs obsolete.”

Though MCUs may not yet be obsolete, if you’re in search of a 16-bit processor for your next application, the news of DSPs encroaching on MCU territory opens a new avenue of investigation (the TMS320C27x is a 16-bit processor). Perhaps you should consider a DSP, rather than a “conventional microcontroller,” as the processor for your application. Don’t turn away yet—hear me out first.

Many engineers see DSPs as esoteric devices suited only to limited applications, and feel that programming them requires assembly-language arcana best left to a chosen few. Though it is still the case that the kernel of a signal processing function enjoys its best performance when written in assembly language (where the programmer can take direct advantage of the chip architecture), much of the user interface code (as well as the “glue code” that holds the time-critical pieces of an application together) can be written in C.

I admit that DSPs tend to have processor and memory architectures that are tailored for signal-processing tasks, and befuddle someone more comfortable with a conventional CPU. And while it was true in the past that C development packages for DSPs have been few and weak, that’s no longer the case—even for 16-bit DSPs. Good development tools are available, and working with a DSP in C can do a lot to shield you from the unfamiliar architecture of the hardware.

From the hardware perspective, a plentiful selection of 16-bit DSPs is available, and they’ve been around long enough for a healthy collection of library routines and programmer support to emerge, all of which suits them well to embedded systems.

Furthermore, not all DSPs are designed solely to support signal processing applications. TI’s TMS320C/F24x (a member of the venerable 320 family) and Motorola’s DSP568xx, for example, are built specifically to support motor control applications, as is Analog Devices’ ADMC331 16-bit DSP. Most such DSPs carry a wealth of on-chip peripherals, including timers, ADCs, and parallel and serial I/O.

In short, there’s no reason to discard 16-bit DSPs from your selection list simply because you’ve never worked with a DSP before. They’re fast, they’re getting cheaper all the time, and the tool support has reached a level of sophistication on par with that of other MCU development environments.

The band plays on

Despite the pressure from all the eight- and 32-bit processors, there’s no worry that your embedded toolbelt will one day be empty of 16-bit devices. Eight-bit applications that “grow up” into 16-bit applications will always be around. Engineers wise enough to choose eight-bit CPUs that have 16-bit bigger brothers will have a reasonably smooth climb into the 16-bit world. And a tour of the Web will show enough 16-bit 80x86-based SBCs to prove that the world still has plenty of MS-DOS-savvy developers.

I’ve also pointed out the new avenue offered by 16-bit DSPs. Not only are there many to choose from, but they are well suited to embedded applications, and their development software is as good as that for conventional processors.

Rick Grehan was senior editor of the embedded systems section of Computer Design, and senior editor at BYTE. He is the principal author of Real-Time Programming: A Guide to 32-bit Embedded Development, from Addison-Wesley. He currently works in the Discover Products group at Metrowerks, Inc. Contact Rick at rgrehan@topmonad.net.

TABLE 1: 16-bit microcontroller checklist

**Application fit**
Is the application’s dominant data type a good fit for a 16-bit CPU? If all you’re doing is processing bytes, it’s likely an eight-bit device will suffice. On the other hand, if you’re doing a great deal of floating-point arithmetic, a 32-bit processor with an integrated FPU may be necessary.

**Tool support**
As always, make sure that the software development tools are mature and available. (Some of the large x86 compiler companies provide 16-bit “retro” tools that accompany their 32-bit toolsuites.) In addition, check for hardware development support. Most chip manufacturers offer development/prototyping boards and “reference platforms” that solve many up-front hardware design headaches. Finally, verify that device driver libraries are available for the peripherals you’ll be using.

**DSP potential**
In the analysis phase of your application development, look for clues that suggest elements of signal processing, motor control, or other functions that would work well on a DSP. When it comes time to select the processor, if no 16-bit MCU fits the bill, rather than compromise on an eight- or 32-bit part, investigate the 16-bit DSP.

