Dealing with the design challenges of multicore embedded systems

The introduction of mainstream dual-core processors signals a major shift in the 'shape' of all computing platforms. Previously, almost all embedded software could be written with the assumption that a single processor was the execution vehicle; and where multiple processors were involved, they were either relatively loosely coupled and could be considered separately, or were collaborating in easily parallelized computations.

While dual-core machines will change this model somewhat, we can expect the number of cores to grow exponentially, roughly doubling with each processor generation. Furthermore, chips of the future can be expected to exhibit increasingly high degrees of heterogeneity in terms of cores, interconnect, hardware acceleration, and memory hierarchies.

While multiple cores provide the potential to process applications in parallel, the software picture becomes much more complex, and the industry's challenge will be figuring out how to efficiently harness this processing capability.

Moving from proprietary specs to common standards
Currently in the embedded industry, most, if not all, of the multicore hardware and software implementations are based on proprietary solutions. The necessity to move beyond parallel computing and SMP paradigms and towards heterogeneous embedded distributed systems will likely drive changes in how embedded software will be created.

Thus, it will drive changes in development tools, run-time software, and languages. Programming such systems effectively will require new approaches. Given that software is a large investment for many companies, it is natural to want software portability across a range of multicore systems. A number of barriers must be addressed to enable a better software paradigm.

To cope with this impending change, it will be helpful for the industry to agree on common, simple, and efficient abstractions for such concurrent systems, allowing us to describe key aspects of concurrency in ways that can be simply and directly represented as a set of APIs.

In other words, the multicore ecosystem (comprised of chip vendors, semiconductor IP providers, RTOS, middleware, compiler, and development tool vendors, as well as application developers) must agree on the appropriate interfaces to support interoperability and therefore enable quicker time to market.

Dealing with mixed operating systems
Specific areas of programming multicore systems that must be addressed are the task and resource management and the communication and synchronization required for embedded distributed systems. This need stems from the reality that such systems cannot rely on a single operating system (or even an SMP operating system) for such services.

It can be expected that such heterogeneous multicore systems will employ a range of operating systems across multiple cores, from general-purpose application OSes to real-time OSes, and will therefore have resources that cannot be managed by any single operating system. The situation is exacerbated further by the presence of hardware accelerators, which do not run any form of operating system but which must interact with processes potentially running on multiple operating systems on different cores.

The Multicore Association has been formed to serve as an umbrella organization for multicore-related discussions, standards, and support for participants.

To help overcome the challenges described in the preceding text, the Multicore Association is working on four separate but somewhat related standards: the Resource Management API (RAPI); the Communication API (CAPI); the Transparent Inter Process Communication (TIPC) protocol, specially designed for intra-cluster communication; and multicore debug mechanisms that support interoperability between tools.

The Rap on RAPI
The primary goal of the RAPI is to provide a standardized API for the management, scheduling, and synchronization of processing resources. The Multicore Association generically refers to these processing resources as 'work entities' because they can include many different types of functions (e.g., processors, hardware accelerators, DMA engines) and memory resources.

In a sense, the RAPI is similar to pre-existing standards, notably POSIX pThreads. However, the RAPI differs in key areas, most notably in its support for heterogeneous multicore and memory architectures (Table 1).

Table 1. To engender rapid understanding and adoption, the Multicore Association RAPI will use a highly simplified subset of calls found in real and de facto standards, such as pThreads, with extensions where necessary to support heterogeneous multicore architectures.
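
For readers unfamiliar with that baseline, the minimal sketch below shows the kind of pThreads calls (thread creation, joining, and mutex-based synchronization) from which such a simplified subset would likely be drawn; the worker function and shared counter are purely illustrative and not taken from any Multicore Association specification.

#include <pthread.h>
#include <stdio.h>

/* Shared state protected by a mutex: the kind of basic synchronization
   a simplified RAPI-style subset would also need to cover. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    counter++;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t threads[4];

    /* Task creation and completion: the calls a RAPI-style API would
       simplify and extend for heterogeneous targets. */
    for (int i = 0; i < 4; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);

    printf("counter = %d\n", counter);
    return 0;
}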

The RAPI embodiment should support features for state management, scheduling (including pre-emption where allowed by the task and processing resource types), context management (stack creation/allocation, destruction/de-allocation, save, and restore), and basic synchronization. A further challenge for the RAPI is that it should be complementary to the CAPI and to existing operating systems (either as a virtualization layer or as part of the existing kernel).
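
Since the RAPI itself was still being defined at the time of writing, the following header-style sketch is purely hypothetical: every name in it (rapi_task_create, rapi_sem_wait, and so on) is an assumption used to illustrate the task-lifecycle, scheduling, and synchronization features listed above, not part of any published specification.

#include <stddef.h>

/* Hypothetical illustration only; these types and functions are not part
   of any Multicore Association specification. */

typedef struct rapi_task rapi_task_t;   /* a schedulable "work entity" */
typedef struct rapi_sem  rapi_sem_t;    /* a basic synchronization object */

/* State and context management: create a task with its own stack. */
rapi_task_t *rapi_task_create(void (*entry)(void *), void *arg,
                              size_t stack_size, int priority);

/* Scheduling, including a cooperative yield for processing resources
   that do not allow pre-emption. */
void rapi_task_yield(void);

/* Context destruction and stack de-allocation. */
void rapi_task_destroy(rapi_task_t *task);

/* Basic synchronization. */
void rapi_sem_wait(rapi_sem_t *sem);
void rapi_sem_post(rapi_sem_t *sem);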

CAPI: Messaging and Synchronization
The CAPI specifies an API, not an implementation, for the purposes of messaging and synchronization in concurrent embedded software systems. As such, the CAPI must support many of the well-known qualities described for distributed systems.

However, due to certain assumptions about embedded systems, the CAPI is only required to support a subset of the distributed-systems qualities defined by noted authors such as Tanenbaum, and also by standards such as CORBA. This subset of qualities is necessary because of the specific needs of embedded systems, such as tighter memory constraints, tighter task execution time constraints, and high system throughput.

The target systems for CAPI will span multiple dimensions of heterogeneity (e.g., core heterogeneity, interconnect heterogeneity, memory heterogeneity, operating system heterogeneity, software tool chain heterogeneity, and programming language heterogeneity).

While many industry standards already exist for distributed-systems programming, they have primarily focused on the needs of (1) distributed systems in the large, (2) SMP systems, or (3) specific application domains (for example, scientific computing). Thus, the CAPI has similar but more tightly constrained goals than these existing standards with respect to scalability and fault tolerance, yet greater generality with respect to application domains.

Figure 1. A logical view of CAPI and RAPI in a typical multiprocessor design.

While the primary focus of the CAPI is embedded systems, the intent is, where possible, to provide enough flexibility to build more fully featured functionality on top of the CAPI, allowing CAPI-supported systems to be incorporated into more widely distributed environments.

The CAPI embodiment should support both control and data transport, have a small footprint, and require only minimal resource-management functionality (such as that provided by the RAPI embodiment), while providing enough flexibility and/or modularity to support the increasing heterogeneity that multicore systems will impose.
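
As with the RAPI, the CAPI was unfinished when this article was written, so the sketch below is an assumption-laden illustration rather than the actual API: the names capi_endpoint_t, capi_msg_send, and capi_msg_recv are hypothetical, intended only to suggest what lightweight, transport-agnostic message passing between nodes on different cores or operating systems might look like.

#include <stddef.h>

/* Hypothetical sketch only; none of these names appear in a published
   Multicore Association specification. */

typedef struct capi_endpoint capi_endpoint_t;  /* addressable message target */

/* Create a local endpoint on this node (a core, an OS instance, or even a
   hardware accelerator running no operating system at all). */
capi_endpoint_t *capi_endpoint_create(int node_id, int port_id);

/* Locate a remote endpoint by node and port, regardless of whether the
   underlying transport is shared memory, an on-chip interconnect, or DMA. */
capi_endpoint_t *capi_endpoint_get(int node_id, int port_id);

/* Control and data transport: send a buffer to a remote endpoint, and
   block until a message arrives on a local one. */
int capi_msg_send(capi_endpoint_t *to, const void *buf, size_t len);
int capi_msg_recv(capi_endpoint_t *from, void *buf, size_t max_len,
                  size_t *received);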

Besides the interface challenges described in this article, this organization is also working on improving hardware debug for multicore platforms. But this is just the beginning. Multicore designers will be faced with challenges of code partitioning and system-level benchmarks that go beyond the standard SMP benchmarks that are available today.

Markus Levy is president of the Multicore Association.

The Multicore Association provides a neutral forum for vendors who are interested in, working with, and/or proliferating multicore-related products, including processors, infrastructure, devices, software, and applications. Companies in all of these fields, as well as OEMs that rely on multicore implementations for their products, are invited to participate.

For additional information on this topic, go to More about multicores, multiprocessing and tools.
