Overlaps Between Microcontrollers and DSPs



Microcontrollers are primarily used in interrupt-driven applications, sensing and controlling external events. You can usually find DSPs in systems that require precision processing of analog signals. This article describes how traditional DSP and MCU applications are crossing over into each other's territory.

Intel developed the 4004 microprocessor in 1971 for the Busicom desktop calculator (remember when a calculator took up a desktop?) and sold it for $200 a chip. It ran at 92.5 kHz internally. Intel's initial strategy was to use this device to sell more memory chips. It was an unexpected success, and was quickly followed by a flurry of similarly enhanced devices from Intel, Motorola, Zilog, and Texas Instruments. Key to the success of the later devices were features like on-chip memory, I/O ports, and hardware peripherals, which let these chips economize on PC board space in control-oriented applications.

In the past 15 years, digital signal processing has been seen as a specialized segment of an embedded development marketplace dominated by microcontrollers. Digital signal processors (DSPs) were initially used in highly specialized segments where precision processing of analog signals could not be accomplished effectively using conventional analog circuit components. In 1982, the first DSP, the Texas Instruments TMS32010, proved that this segment existed by combining specialized hardware for accelerating multiplication with a Harvard (dual bus) memory architecture, introducing the architectural enhancements that would be found on later digital signal processors.

Years after the introduction of the two architectures, speculation continues on whether a convergence can ever take place. Techniques are debated, architectures compared, and positions promoted. Many articles and papers have been written on this topic — and you're reading one of them now. The difference is that here I will discuss hard-core, real-world selection criteria, including time to market, track records of semiconductor companies, and quality of development tools.

The players

Deeply embedded microcontrollers are primarily used in control-oriented applications that are interrupt driven, sensing and controlling external events. The external environment is detected through digital I/O, interrupt pins, or analog (A/D) inputs. The signals at these pins come from switches, analog and digital sensors, and status signals from other systems. Each input represents a piece of information on the status of some outside event. Outputs are sent to actuators, relays, motors, or other drivers that control events. In between is the trusty microcontroller, analyzing the inputs and the present state of the system, determining what to switch on and what to turn off. The software that makes these decisions does so in a mostly conditional fashion; that is, conditional jumps, bit manipulation, and shifts are the staples of embedded control (“interrupts” is counted as a condition here — program flow is altered on the occurrence of an external event).
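To make that concrete, here is a minimal C sketch of the control style just described: an interrupt handler that tests a status bit and switches an output accordingly. The register names, addresses, and ISR name are hypothetical placeholders; on a real part they come from the vendor's header files and the compiler's interrupt syntax.

    #include <stdint.h>

    /* Hypothetical memory-mapped I/O registers; real names and addresses
     * come from the specific microcontroller's documentation. */
    #define SENSOR_STATUS  (*(volatile uint8_t *)0x0040u)
    #define ACTUATOR_PORT  (*(volatile uint8_t *)0x0041u)
    #define OVER_TEMP_BIT  0x04u
    #define FAN_BIT        0x01u

    /* Called when the external interrupt pin fires; the vector name and
     * ISR keyword vary by compiler and device. Bit test, conditional
     * branch, bit set/clear: the staples of embedded control. */
    void external_event_isr(void)
    {
        if (SENSOR_STATUS & OVER_TEMP_BIT)
            ACTUATOR_PORT |= FAN_BIT;            /* switch the fan on   */
        else
            ACTUATOR_PORT &= (uint8_t)~FAN_BIT;  /* or turn it back off */
    }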


DSPs, meanwhile, are traditionally found in systems that require the precision processing of digitized analog signals. The performance goal of a DSP architecture is to perform as many arithmetic operations as possible in the smallest number of cycles. Traditional DSP applications are as subtle as a freight train — they are brute-force mathematical applications, pure and simple. Traditional DSPs use complex, compound instructions that allow the programmer to perform multiple operations in a single instruction cycle and so increase the amount of useful processing done. For example, most are able to compute one tap of an FIR filter in a single cycle. DSP cores are crafted to be number crunchers, and to that end they must do two things well: first, a DSP core must perform multiple math functions, including multiplication, extremely quickly; second, a DSP must continuously feed data to the number-crunching computational units so that they can keep crunching. It is the pursuit of this functionality that makes the programming model of a DSP look so different from that of a microcontroller. See Table 1 for more details.
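As an illustration of what that single-cycle tap means, the loop below computes one output sample of an FIR filter in plain, processor-neutral C (the Q15 fixed-point format and the function name are illustrative choices, not tied to any particular part). A DSP's multiply/accumulate unit, dual data buses, and zero-overhead loop hardware collapse each pass through this loop into roughly one instruction cycle; a conventional microcontroller spends many cycles per tap on the same work.

    #include <stdint.h>

    /* One output sample of an n-tap FIR filter in Q15 fixed point.
     * coeffs[] holds the filter coefficients; delay[] holds the most
     * recent n input samples, newest first. */
    int16_t fir_sample(const int16_t *coeffs, const int16_t *delay, int n_taps)
    {
        int32_t acc = 0;
        for (int i = 0; i < n_taps; i++)
            acc += (int32_t)coeffs[i] * delay[i];  /* one "tap": multiply-accumulate */
        return (int16_t)(acc >> 15);               /* scale the Q30 sum back to Q15  */
    }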

Table 1: Summary of embedded processor architectural comparison

Microcontroller: efficiently resolve complex conditional control situations

System Requirement | Feature | Benefit
I/O control | I/O ports with bit-level control | Efficient (quick, small code) control; direct interface to actuators, switches, and digital status signals
Peripheral communications | Serial ports: SPI, I2C, MicroWire, UART, CAN | Hardware support for expansion and external device communications
Precision control of actuators and motors | Sophisticated timers and PWM modules | Low software overhead control
Quickly resolve complex software program control flow | Conditional jumps; bit test instructions; interrupt priority control | Efficient (quick, small code) program flow
Fast response to external events | External interrupts; multiple interrupt levels | Program control immediately redirected on event occurrence; minimal overhead
Conversion of sensor data | Analog-to-digital converters | Hardware support for external sensors

Digital signal processor: deterministic software behavior

System Requirement | Feature | Benefit
Software filters | Multiply/accumulate unit; zero-overhead loops | Digital filtering in few cycles
Interface to codecs | High-speed serial port(s) | Hardware support for translation of analog signals
High data throughput from serial ports | Peripheral DMA | Fewer wasted cycles fetching data from serial ports
Fast data access | Harvard architectures and variants | Fast execution of signal processing algorithms

Size matters

Note that in Table 1, the benefits of a microcontroller's hardware features translate into reduced code size and reduced board space, two issues critical to cost efficiency in embedded applications. Reduced code size means a smaller on-chip memory area; the denser the code, the smaller the chip die. After all, the semiconductor business is, at its base, a real estate business where land runs about $300 million per acre. Faster execution is also a decision point, and one that matters more now than it used to. By contrast, the benefits of DSP hardware features show up as faster execution and improved data throughput; code size has traditionally not been as significant as execution speed, but this, too, is changing. (I'll describe this in more detail later.)

Control-oriented systems have traditionally utilized only a microcontroller, but some embedded applications add a DSP accessible to the microcontroller's external memory space to speed processing of math-oriented tasks. Examples include digital motor control, robotics, hard disk drives, feature phones, and some electrical meters. These systems are primarily control oriented, with the DSP acting as a math coprocessor to the microcontroller.

In other applications, the MCU and the DSP share the load equally in managing different parts of the system. This has typically been seen in segments such as some communications and telephony applications.

Choosing embedded cores for standard applications

Microcontrollers in deeply embedded applications (that is, with no external memory) have typically been chosen for their cost-effectiveness. Besides the cost of the part itself, the factors that contribute to cost-effectiveness are code efficiency and the on-chip integration of hardware peripherals. DSP architectures, by contrast, have typically been chosen for their data throughput and raw computational power. Microcontroller-style on-chip peripherals would have gone unused on a DSP, and a priority-based interrupt structure and other real-time control features weren't included because real-time control could interfere with the critical processing of the data stream.

Microcontroller applications didn't require a DSP's expensive performance enhancements. Each system was different, and the microcontroller and DSP were seen as two different animals. But the times, they are a-changin'.

In the past 10 years, an uncountable number of microcontroller and DSP architectures have been introduced, abandoned, enhanced, expanded, copied, and redesigned. Engineers can now select from a wide range of complexity, from low-end four-bitters (not too far removed from the 4004) to 64-bit systems-on-a-chip. In each and every case, the customer's time to market has become a critical factor, as competition for finished goods has become fierce. This has caused embedded engineers' definition of what constitutes a microcontroller or DSP “product” to expand, based on new selection criteria for choosing products that will provide the shortest development cycle. Besides the actual silicon, the embedded product definition has grown to include data sheets, application notes, availability of technical support, breadth of product roadmap, and what has become the most significant factor: the availability and quality of development tools.

Development tools have become one of the most important factors in choosing an embedded core. The behavior and electrical properties of the core are merely a gating factor; once a core clears that hurdle, the deciding criteria are the availability, quality, and interoperability of its development tools. A 2,000-MIPS, 1mW SuperCore for 50 cents is completely useless if the software engineer is unable to program it using the available compilers and emulators. To the software engineer, the microcontroller or DSP isn't just a square piece of plastic — it exists in engineering reality in the user interface of the computer screen and keyboard that host the hardware and software development tools. Focus group studies have shown that the quality and interoperability of development tools are now the second most significant criterion in core selection, after price/performance (Beacon Technology Partners, Concord, MA, 1997).

This newer, more complex definition of what constitutes a microcontroller or DSP product is not a surprise for established embedded companies like Motorola and Texas Instruments, but for many other companies it is a new paradigm that is frustrating their push for new business.

The rest of the story

All of these points may seem to veer off the original theme, but what's at issue here is ensuring that the DSP or microcontroller being considered comes from a vendor committed to supporting the entire product surrounding your core. If you judge only by the technical merits of the silicon, important usability issues can be overlooked, with devastating consequences. Market windows have been missed, resulting in reduced sales volumes, cancelled projects, and customers who have actually gone bankrupt because they judged the architecture purely on technical merits. Choosing a core isn't a sterile classroom engineering exercise; the part behind that impressive datasheet you hold in your hands might come with a C compiler that doesn't work well with its in-circuit emulator, and both might apply only to a previous version of the silicon.

The funny things that happen at semiconductor companies

Semiconductor companies are developing microcontrollers with hardware multipliers, barrel shifters, and Harvard architectures. DSPs are being developed with external interrupts, integrated peripherals, and register-based architectures. In reality, semiconductor companies are developing these devices for one of two reasons. First, there may be a new market focus on systems that require signal processing and real-time control. Second, it may just seem like a good idea (“if we build it, they will come”).

To understand today's market situation, we must understand how we got here. Semiconductor companies deal with two types of markets: distribution (large number of customers, high effort to manage, low-to-medium fluctuating volumes) and direct customers (small group of customers, manageable in scope, very high predictable volumes). During the growth of embedded cores in the 1980s, the microcontroller explosion was fueled greatly by the automotive marketplace, as devices developed for automotive applications migrated into distribution. This environment made Motorola SPS the biggest supplier of eight-bit microcontrollers today. Most serious players in the microcontroller market either developed products for the automotive market, went after niche markets, or gambled with emerging markets. (Microchip's PIC is a notable exception, with its successful broad market appeal.) DSPs were developed for the telecommunications and military markets and, for the most part, have remained there.

The late '90s have seen unprecedented opportunities for growth in the semiconductor industry. The explosive growth in personal computing, telecommunications, Internet technologies, telephony, and portable applications has sent many semiconductor companies scrambling to introduce products that can be used in these new, highly profitable segments. These segments are controlled by a growing group of established (and a few new) direct customers.

With the opportunities presenting themselves in these new markets, almost every semiconductor company has undergone multiple reorganizations while trying to keep up with the changes. Today, every semiconductor company, with the exception of those focused on very narrow niches, has completely remade itself to go after these markets. As a result, two types of semiconductor companies have emerged: those with traceable, long-term strategies, strong customer relationships, and established track records, and those without. A telltale symptom marks the second, unfocused group: the rapid introduction, with fanfare, of new products that are then quietly withdrawn within two years when their expected share of the market isn't realized. It is from the first, focused group that the majority of processor innovations are coming, because those companies have the track record and the experience to serve customers in these emerging markets.

Real-time control and signal processing in one

A commonality found in many of the emerging embedded markets is the need to process some form of analog data, whether a communications stream or multimedia information, while at the same time maintaining real-time control of external events. The mixture of the two varies as widely as the systems themselves. At one extreme, a simple data acquisition system built around an eight-bit microcontroller may need to perform DTMF encoding/decoding and simulate a 1200-baud modem. At the other, a DSP in a voice compression application may need to change program flow based on external switches and status signals.

Even as these systems relentlessly push for lower cost, smaller board space, and lower power dissipation, they must also pack as much useful silicon as possible onto the die. The target for semiconductor companies, then, is to increase the amount of useful work accomplished in every instruction cycle.

In cases where a mixture of both DSP and microcontroller functionality is needed, four choices are available:

  • Microcontroller
  • DSP
  • Microcontroller signal processor (MSP; Infineon TriCore or Hitachi SH-DSP, for example)
  • Both a microcontroller and a DSP (ARM Piccolo, for example)

The first two choices are the traditional approaches. Using a pure DSP such as a TI TMS320C54x for a control-oriented application, or a conventional microcontroller such as an 8051 for signal processing, is obviously unacceptable. However, some products lend themselves well to light duty in the other segment, as shown in Table 2. Note that this table looks only at conventional architectures.


[Table 2 appears as an image in the original article.]

Using one processor for both real-time control and signal processing tasks offers a number of potential system advantages:

  • Reduced board space
  • Lower system power consumption
  • Lower system cost
  • More functionality in the system
  • Simplified system development

Light DSP processing in a microcontroller

To many microcontroller engineers, a DSP is an unfamiliar entity. Microcontrollers have regular instruction sets, either accumulator based or register based, with a friendly programming model. “Friendly” doesn't mean that the MCU will buy you a Coke; rather, it translates into either lots of general-purpose registers or a small number of specialized ones. DSPs have lots of specialized registers and multiple buses, which makes them more difficult to program than a microcontroller's friendlier architecture.

A software engineer would much rather stick with a known entity; that is, an engineer familiar with microcontrollers will use a microcontroller to do light-duty DSP tasks. If a microcontroller is available that has all the features and memory needed for the control tasks while also providing enough performance to satisfy the signal processing requirements, the engineer will favor that solution. If the two critical issues of price/performance and development tool quality are met, the microcontroller will be the first choice for the application, especially if the engineer is already familiar with the development tools and/or the microcontroller. And if the engineer has been battered by previous experiences, the full processor product definition discussed earlier, as well as the company's reputation in the embedded market, will already have been weighed.

Signal processing performance in a microcontroller can be enhanced in a number of different ways:

  • A fast multiply instruction or, better yet, a hardware multiplier allows certain filters to be implemented more efficiently (see the sketch following this list)
  • Regular cycle execution, in which all instructions execute in the same number of internal clock cycles, enhances deterministic behavior
  • Programmable interrupt controllers are useful in control applications; when signal processing is also being performed, effective use of the interrupt control unit ensures that the signal processing tasks get the right interrupt priority and permission levels
  • A DMA controller helps keep data flowing efficiently through the data paths and can enhance I/O throughput
  • A high clock speed facilitates brute-forcing control and signal tasks while still maintaining real-time performance
  • A register-based architecture makes it easier to keep data moving through the processing stream
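As an example of the kind of light-duty signal processing these features enable, here is a fixed-point Goertzel single-tone detector of the sort often used for DTMF detection on small microcontrollers. The article doesn't prescribe a particular algorithm; Goertzel is simply a common choice that needs only one multiply per sample, so a hardware multiplier pays off directly. The function name, Q14 coefficient format, and 8-bit sample assumption are illustrative.

    #include <stdint.h>

    /* Goertzel detector for one tone, in Q14 fixed point.
     * coeff_q14 = round(2 * cos(2*pi*f_tone/f_sample) * 16384), precomputed
     * per tone. x[] holds small (8-bit ADC) samples so the 32-bit
     * intermediates stay in range for typical DTMF block sizes
     * (roughly 100 samples at 8kHz). Arithmetic right shift is assumed,
     * as on most embedded compilers. Returns a relative squared magnitude. */
    uint32_t goertzel_q14(const int8_t *x, int n, int16_t coeff_q14)
    {
        int32_t s_prev = 0, s_prev2 = 0;

        for (int i = 0; i < n; i++) {
            /* s[i] = x[i] + coeff*s[i-1] - s[i-2], one multiply per sample */
            int32_t s = (int32_t)x[i] + ((coeff_q14 * s_prev) >> 14) - s_prev2;
            s_prev2 = s_prev;
            s_prev  = s;
        }

        /* |y|^2 = s1^2 + s2^2 - coeff*s1*s2 */
        int32_t mag2 = s_prev * s_prev + s_prev2 * s_prev2
                     - (((coeff_q14 * s_prev) >> 14) * s_prev2);
        return (uint32_t)(mag2 < 0 ? 0 : mag2);
    }

Running one detector per DTMF row and column frequency over each block of samples and thresholding the eight results yields the pressed digit, all within the reach of a microcontroller that has a reasonable multiplier.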

A register-based architecture is certainly preferable to an accumulator-based architecture. Back when silicon was much more expensive and process geometries were specified in whole numbers of microns, an accumulator was an expensive piece of real estate. About the time microcontrollers started being fabricated in 0.8-micron technology, the first general-market register-based microcontrollers were introduced. This alleviated the bottleneck of the accumulator, through which every operand had previously had to pass.
