Multiprocessing in your future

Several processors in one system are nothing new to embedded designers.

Like most ideas discovered by marketing, multiprocessing had been around a long time before someone came up with the word multicore and declared it the Next Big Thing. Let’s set aside the long history of multiprocessor computing. In the embedded world, multiprocessing has been not so much a response to a temporary slowdown in the rate of improvement in uniprocessor machines as it has been a consistent architectural strategy. Certainly since the appearance of microcontrollers it has often been easier and more sensible to just give a task its own MCU rather than try to fit it into an already-loaded central processor. Multicore? Been doing it for decades.

Today there are more hardware choices. We can isolate a task on a separate MCU. Or, if we are already using a multicore SoC, we can put the task on another core: system and application tasks, for example, often end up on separate CPUs. For some tasks, such as baseband signal processing or encryption calculations, the SoC may already provide dedicated hardware accelerators. And these days, with the improving cost/performance of FPGAs, designers should give serious consideration to developing a custom accelerator for a really compute-intensive or otherwise difficult task.

One of the lessons we have learned from this long history is the importance of partitioning. How we define and allocate tasks can make a huge difference in design effort, the difficulty—or even the feasibility—of debug, and the performance of the completed system.

This is not just a matter of putting a task on the hardware best suited to its inner loops. It is also a question of interfaces: the points at which tasks touch each other. As David Kalinsky points out this month in his article on lock-free programming, partitioning can influence how tasks share resources. And the manner of sharing influences the devices you must use to implement the interfaces.

Ideally, you could partition your system into tasks that run entirely independently of each other, so there would be no interface design involved beyond the start-up process. But this is only rarely possible. More often, there are interdependencies: situations in which one task can alter data another task must consume. In these cases, as Kalinsky discusses, just how the tasks will share data becomes an important decision.
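As an illustration (not drawn from Kalinsky's article), one classic lock-free sharing pattern is the single-producer, single-consumer ring buffer: each index is written by exactly one task, so neither side ever needs a lock. Here is a minimal Python sketch; a real embedded version would be written in C with explicit memory barriers, and the class and method names here are hypothetical:

```python
class SpscRing:
    """Single-producer, single-consumer ring buffer.

    `tail` is written only by the producer and `head` only by the
    consumer, so neither index is ever contended -- the classic
    lock-free sharing arrangement.
    """
    def __init__(self, capacity):
        self.buf = [None] * (capacity + 1)  # one slot kept empty to tell full from empty
        self.head = 0  # next slot to read  (owned by the consumer)
        self.tail = 0  # next slot to write (owned by the producer)

    def push(self, item):
        """Called only from the producer task."""
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:          # buffer full; caller decides what to do
            return False
        self.buf[self.tail] = item
        self.tail = nxt               # publish only after the data is in place
        return True

    def pop(self):
        """Called only from the consumer task; returns None when empty."""
        if self.head == self.tail:    # buffer empty
            return None
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item
```

The ordering of the two writes in push is the whole trick: the data slot is filled before the tail index is advanced, so the consumer can never see an index pointing at stale data. In C, that "publish" step would be a release store to make the ordering visible across cores.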

Perhaps the easiest structures to understand are self-timed data-flow machines, in which each task waits for its data to appear, does its job, and goes back to sleep. In more complex situations, a single data structure may be continuously available for read and write to many different concurrent tasks.
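The self-timed pattern can be sketched with a blocking queue standing in for whatever mailbox or message-passing primitive the RTOS actually provides; the doubling step is a placeholder for the task's real work:

```python
import queue
import threading

def dataflow_task(inbox, outbox):
    """Self-timed task: sleep until data appears, do the job, sleep again."""
    while True:
        item = inbox.get()      # task blocks here until its data arrives
        if item is None:        # conventional shutdown sentinel
            break
        outbox.put(item * 2)    # "does its job" -- a stand-in computation

inbox, outbox = queue.Queue(), queue.Queue()
worker = threading.Thread(target=dataflow_task, args=(inbox, outbox))
worker.start()

for x in (1, 2, 3):
    inbox.put(x)                # each put wakes the task exactly once
inbox.put(None)                 # tell the task to exit
worker.join()
```

Because the task's only synchronization point is the blocking get, there is nothing to reason about between wake-ups: it cannot observe a half-written shared structure, which is exactly why these machines are easy to understand.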

Multiprocessing doesn’t make these issues go away. It makes them more complex, in that you cannot assume that just because one task is running, the others are suspended. While multiprocessing is not a new idea in our world, it is not a solved problem, either.

Ron Wilson is the editorial director of design publications at UBM Electronics, including EDN, ESD magazine, the Embedded Systems Conferences, and EE Times' DesignLines. You may reach him at
