Are you leveraging multicore processors effectively?

Today's latest microprocessors, and the standard for the future, contain multiple processor cores. The number of available cores is expected to grow steadily, despite the pessimism of Amdahl's Law, which holds that the speedup from parallelizing a process is limited by its serial portion, and the euphoria associated with Gustafson's Law, which suggests that the overall amount of work that can be done continues to increase as the number of processor cores increases.
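For reference, both laws reduce to one-line formulas. The sketch below (plain Python, with illustrative numbers of my choosing) computes Amdahl's speedup for a fixed workload and Gustafson's scaled speedup, where p is the parallelizable fraction of the work and n is the number of cores.

```python
def amdahl_speedup(p, n):
    """Amdahl's Law: speedup of a fixed workload when a
    fraction p of it can be parallelized across n cores."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p, n):
    """Gustafson's Law: scaled speedup when the parallel part
    of the workload grows with the number of cores n."""
    return (1.0 - p) + p * n

# With 90% parallelizable work on 8 cores:
print(amdahl_speedup(0.9, 8))     # ~4.71x: the serial fraction caps the gain
print(gustafson_speedup(0.9, 8))  # 7.3x: the workload grows with core count
```

The contrast is the point: for a fixed workload the leftover serial 10% dominates, while for a workload that scales with the machine the gain stays nearly linear.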

With this in mind, the transition to multicore processors is shifting the burden of innovation to developers, who are compelled to produce software that can exploit the parallel multicore architecture. Industrial OEMs are left with a quandary: should they throw out their old product designs, around which they and their customers have made significant investments, and start fresh, or should they compromise on features and price by continuing to support legacy hardware platforms?

Parallelization is only part of the problem. When it comes to embedded systems, a bigger issue is segmentation of the system. The laws put forth by Amdahl, Gustafson, and others address only raw performance metrics for a homogeneous mix of tasks. Embedded systems are typically composed of elements that perform very different, hardware-related tasks. Consequently, in addition to performance, real-time embedded systems developers need to consider more mundane metrics such as hardware availability, system cost, footprint, power consumption, determinism, and development effort.

Operating-system and tools providers are rising to the challenge of supporting multicore chips, but most view multicore processors as a pool of resources that can be deployed interchangeably to perform general-purpose tasks. By setting aside Amdahl and Gustafson and taking a different look at how multiple cores can be used, embedded system designers don't need to compromise or start over. An alternative approach for many embedded systems is to dedicate processor cores to specific system functions. OEMs can run their legacy code, with little or no modification, on a dedicated processor core that operates independently of the cores that support new application features.
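The closest analogue to core dedication that a stock general-purpose OS offers is processor affinity. The Linux-only sketch below uses Python's standard os.sched_setaffinity to pin the calling process to core 0; it only approximates the dedicated-core model described here, since a full independent-OS-per-core design requires virtualization software beneath the OSes, not just scheduler hints.

```python
import os

# Linux-only sketch: pin the calling process (pid 0 means "self")
# to core 0, leaving the remaining cores free for other work.
os.sched_setaffinity(0, {0})

# The kernel will now schedule this process only on core 0.
print(os.sched_getaffinity(0))  # -> {0}
```

A legacy control loop pinned this way no longer competes for cycles with general-purpose tasks on the other cores, which is the effect the dedicated-core approach formalizes.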

The segmentation of tasks in many embedded systems is typically associated with multiple operating environments. General-purpose operating systems (OSes) excel at supporting database structures such as one would find in recipe management or process monitoring, whereas real-time OSes excel at responding to machine-directed inputs with minimal delay. The traditional way to serve these diverse computing needs has been to incorporate multiple separate computing environments in a system: for example, a discrete imaging subsystem, a stand-alone motion-control subsystem, and a human-machine interface.

With the availability of multicore processors, system hardware costs can be substantially reduced by hosting different OS environments on different cores and eliminating redundant power supplies, RAM, and other hardware that's present in multiplatform computing systems. Tasks dedicated to a processor core can respond to real-time events with virtually zero delay because they don't have to share CPU cycles with human-directed tasks. Time-critical functions supported by legacy code can be preserved by hosting that code on a dedicated core.

But the software needed to enable multiple independent OSes to run reliably on different cores is not trivial, leading some to say that there is a real software crisis in embedded computing, driven by the need to get the maximum performance per dollar from multicore processors. A small group of software companies is focused on enabling this.

The key to making these OSes work together is to exploit the virtualization hardware built into the processor. Intel Architecture processors, for example, provide Intel Virtualization Technology, special hardware that facilitates sharing I/O among OSes in a controlled manner.

Moving to a multicore PC-architecture environment gives OEMs, who may have used a mix of processor architectures in the past, access to more cost-effective hardware platforms and up-to-date communication and I/O interfaces such as USB and PCI Express. It also makes it easier to incorporate new communication protocols for interaction with external systems and to adopt more sophisticated data-reporting methods. Embedding the PC architecture in a control or instrumentation product provides the additional benefit of allowing the application-development environment to run directly on the target hardware, simplifying development and saving time.

The availability of multicore processors promises to yield big benefits for those OEMs who choose to adopt the model of multiple OSes on multicore chips. Such an approach will enable embedded system designers to add new features to their applications, gain access to powerful software development environments, and preserve past investments in intellectual property.

Paul Fischer is a senior technical engineer at TenAsys. You can reach him at
