Dealing with the design challenges of multicore embedded systems

January 23, 2006

The introduction of mainstream dual-core processors signals a major shift in the 'shape' of all computing platforms. Previously, almost all embedded software could be written with the assumption that a single processor was the execution vehicle; and where multiple processors were involved, they were either relatively loosely-coupled and could be considered separately, or were collaborating in easily parallelized computations.

While dual-core machines will change this model somewhat, we can expect the number of cores to grow exponentially, roughly doubling with each processor generation. Furthermore, chips of the future can be expected to exhibit increasingly higher degrees of heterogeneity in terms of cores, interconnect, hardware acceleration, and memory hierarchies.

While multiple cores provide the potential to process applications in parallel, the software picture becomes much more complex, and the industry's challenge will be figuring out how to harness this processing capability efficiently.

Moving from proprietary specs to common standards
Currently in the embedded industry, most, if not all, of the multicore hardware and software implementations are based on proprietary solutions. The necessity to move beyond parallel computing and SMP paradigms and towards heterogeneous embedded distributed systems will likely drive changes in how embedded software will be created.

Thus, it will drive changes in development tools, run-time software, and languages. Programming such systems effectively will require new approaches. Given that software is a large investment for many companies, it is natural to desire software portability across a range of multicore systems. A number of barriers must be addressed to enable a better software paradigm.

To cope with this impending change, it will be helpful for the industry to agree on common, simple, and efficient abstractions for such concurrent systems to allow us to describe key aspects of concurrency in ways which can be simply and directly represented as a set of APIs.

In other words, the multicore ecosystem (comprised of chip vendors, semiconductor IP providers, RTOS, middleware, compiler, and development tool vendors, as well as application developers) must agree on the appropriate interfaces to support interoperability and therefore, enable quicker time to market.

Dealing with mixed operating systems
Specific areas that must be addressed in programming multicore systems are the task and resource management, communication, and synchronization services required for embedded distributed systems. This need stems from the reality that such systems cannot rely on a single operating system -- or even an SMP operating system -- for such services.

It can be expected that such heterogeneous multicore systems will employ a range of operating systems, from general-purpose application operating systems to real-time OSes, across multiple cores, and therefore will have resources that cannot be managed by any single operating system. This situation is exacerbated further by the presence of hardware accelerators, which do not run any form of operating system but must interact with processes that are potentially running on multiple operating systems on different cores.

The Multicore Association has been formed to serve as an umbrella organization for multicore-related discussions, standards, and support for participants.

To help overcome the challenges described in the preceding text, the Multicore Association is working on four separate but related standards: the Resource Management API (RAPI); the Communication API (CAPI); the Transparent Inter-Process Communication (TIPC) protocol, designed specifically for intra-cluster communication; and multicore debug mechanisms that support interoperability between tools.

The Rap on RAPI
The primary goal of the RAPI is to provide a standardized API for the management, scheduling, and synchronization of processing resources. The Multicore Association generically refers to these processing resources as 'work entities' because they can include many different types of functional units (e.g., processors, hardware accelerators, DMA engines) as well as memory resources.

In a sense, the RAPI is similar to pre-existing standards, notably POSIX pThreads. However, the RAPI differs in key areas, most notably in its support for heterogeneous multicore and memory architectures (Table 1).

Table 1. To engender rapid understanding and adoption, the Multicore Association RAPI will use a highly simplified subset of calls found in real and de facto standards, such as pThreads, with extensions where necessary to support heterogeneous multicore architectures.

The RAPI embodiment should support features for state management, scheduling (including pre-emption where allowed by task and processing-resource types), context management (stack creation/allocation, destruction/de-allocation, save, and restore), and basic synchronization. A further challenge for the RAPI is that it should be complementary to the CAPI and to existing operating systems (either as a virtualization layer or as part of the existing kernel).

CAPI: Messaging and Synchronization
The CAPI specifies an API, not an implementation, for the purposes of messaging and synchronization in concurrent embedded software systems. As such, the CAPI must support many of the well-known qualities described for distributed systems.

However, due to certain assumptions about embedded systems, the CAPI is only required to support a subset of the distributed-systems qualities defined by noted authors such as Tanenbaum and by standards such as CORBA. This subset reflects the specific needs of embedded systems, such as tighter memory constraints, tighter task execution-time constraints, and high system throughput.

The target systems for CAPI will span multiple dimensions of heterogeneity (e.g., core heterogeneity, interconnect heterogeneity, memory heterogeneity, operating system heterogeneity, software tool chain heterogeneity, and programming language heterogeneity).

While many industry standards already exist for distributed-systems programming, they have primarily been focused on the needs of (1) distributed systems in the large, (2) SMP systems, or (3) specific application domains (for example, scientific computing). Thus, the CAPI has goals similar to, but more highly constrained than, those of existing standards with respect to scalability and fault tolerance, yet has more generality with respect to application domains.

Figure 1. A logical view of CAPI and RAPI in a typical multiprocessor design.

While the primary focus of the CAPI is embedded systems, the intent is, where possible, to provide enough flexibility that more fully featured functionality can be built on top of the CAPI, allowing CAPI-supported systems to be incorporated into more widely distributed environments.

The CAPI embodiment should support both control and data transport, have a small footprint, and require only minimal resource-management functionality (such as that provided by the RAPI embodiment), while providing enough flexibility and/or modularity to support the increasing heterogeneity that multicore systems will impose.

Besides the interface challenges described in this article, this organization is also working on improving hardware debug for multicore platforms. But this is just the beginning. Multicore designers will be faced with challenges of code partitioning and system-level benchmarks that go beyond the standard SMP benchmarks that are available today.

Markus Levy is president of the Multicore Association.

The Multicore Association provides a neutral forum for vendors who are interested in, working with, and/or proliferating multicore-related products, including processors, infrastructure, devices, software, and applications. Companies in all of these fields, as well as OEMs that rely on multicore implementations for their products, are invited to participate.
