The basics of using MCAPI in multicore-based designs

Colin Walls, August 17, 2014

MCAPI is a standardized application programming interface (API) for communication and synchronization between closely distributed cores and/or processors in embedded systems, defined and maintained by the Multicore Association (MCA). This article outlines what MCAPI is, how it works, and how it may be deployed in multicore systems.

Multicore
From the software perspective, there are two types of multicore system: symmetrical multiprocessing (SMP) and asymmetrical multiprocessing (AMP). An SMP system is built on a multicore chip that has multiple identical CPUs; a single operating system (a special variant designed for SMP) runs on the multiple cores and distributes work between them. An AMP system may be built on any type of multicore chip; the CPUs may be identical or there may be a mixture of architectures. Each CPU has its own operating system (or may not have one at all – the code is running on “bare metal”) and these need not all be the same. So, for example, an AMP system may include cores running Linux and others running a real-time operating system (RTOS).

With an SMP system, there is no issue with communication between cores, as the SMP operating system takes care of communication between tasks, threads, and/or processes regardless of which core they are running on; one task simply needs to, for example, ‘post in a mailbox’ and another task can receive the data.

On an AMP system, a separate layer of software is needed to facilitate communication between the cores. This is the role of MCAPI.

What is MCAPI?
MCAPI is a communications API, similar to the familiar sockets API used in networking. An application uses standardized function calls to send and receive data to and from any core in the system. The specification does not dictate how data is physically transferred (shared memory, Ethernet, and so on); it specifies only the user's API-level expectations when executing the respective routines, which enables source-code compatibility across multiple operating systems.

What MCAPI is not
The important thing to note is that MCAPI is not a protocol specification; the underlying protocol is, by design, an implementation issue. No interoperability between different vendors' MCAPI implementations should therefore be anticipated. However, all application source code should be fully portable between different MCAPI implementations.

MCAPI concepts and terminology
Some of the key MCAPI concepts may feel familiar to anyone with knowledge of networking. The first is a domain, which may be likened to a subnet in networking terms. A system that uses MCAPI includes one or more domains, and each domain includes a number of nodes. Each node can only belong to one domain, so there is a true hierarchy.

A node is an abstract concept, but may be thought of broadly as a stream of code execution. So it might be a process, thread, task, core, or one of a number of other possibilities. The exact nature of a node is specified for a given MCAPI implementation.

An endpoint is a destination to which messages may be sent or to which a connection may be established, rather like a socket in networking. A node may have multiple endpoints; each endpoint has a unique identifier, which is the tuple <domain ID, node ID, port ID>. The creating node receives from an endpoint, but any node may send to one.

MCAPI communications
Some inter-processor communications protocols require a full TCP/IP stack to exchange data, creating a bloated memory footprint. MCAPI, on the other hand, does not require TCP/IP and is much more lightweight. Another major benefit of MCAPI is scalability at the application level. As more threads are added and additional communication points are required, transactions are handled seamlessly through the use of the MCAPI primitives. Furthermore, MCAPI provides a reliable interface for data transfer, ensuring data is delivered as expected by the application.

In the most basic MCAPI model, each core is represented as an individual node in the system. When a node needs to communicate with another node, it creates an endpoint for sending or receiving data, similar to a TCP/IP socket. MCAPI primitives are then used to issue calls to send or receive data on the endpoint, and the underlying hardware driver moves the data accordingly.

As an example, consider a two-node configuration in which Node 0 is running the Nucleus RTOS and Node 1 is running Linux. Endpoints 1 and 2 send and receive one stream of data; endpoints 3 and 4 send and receive other data independently. Under the hood, data may be transmitted via a shared memory driver using internally managed buffers. This interrupt-driven method ensures that each buffer transmitted is received in the proper order and delivered to the application for processing.



Data transfer
MCAPI offers a lot of flexibility for the transfer of data. Broadly there are three options:

Messages are datagrams (chunks of data) sent from one endpoint to another. No connection needs to be established to send a message. This is the most flexible form of communication, akin to UDP in networking, and it suits situations where senders, receivers, and priorities may change over time.

A packet channel is a first-in, first-out, unidirectional stream of data packets of variable size, sent from one endpoint to another after a connection has been established.

A scalar channel is similar to a packet channel, but processes single words of data, where a word may be 8, 16, 32, or 64 bits of data.

Connection-based communication potentially removes the message header and route discovery overhead and thus is more efficient for larger volumes of data.

API call example
MCAPI is fairly small, with calls in five categories addressing the key functionality required for node-to-node communication. Although there are not a great many functions, it would not be useful to cover the whole API here, as that information is freely available in the MCAPI specification, which can be downloaded from the Multicore Association website.

However, here is one example that will give you a flavor of the calls:

   void mcapi_msg_send(
      MCAPI_IN mcapi_endpoint_t send_endpoint,
      MCAPI_IN mcapi_endpoint_t receive_endpoint,
      MCAPI_IN void* buffer,
      MCAPI_IN size_t buffer_size,
      MCAPI_IN mcapi_priority_t priority,
      MCAPI_OUT mcapi_status_t* mcapi_status
   );


This call sends a message from one endpoint to another. It is a blocking call and only returns when the buffer may be reused by the application. A non-blocking variant is also available.

The endpoints are identified by the parameters send_endpoint and receive_endpoint. The memory in which the application has placed the message is referenced by the pointer buffer, and its size in bytes is specified by buffer_size. The priority of the message is indicated by the parameter priority; a value of zero commonly represents the highest priority level, but this is implementation specific. The pointer mcapi_status points to a variable in which the result of the call will be placed.

On return from this API call, the variable referenced by mcapi_status will normally contain the value MCAPI_SUCCESS. But a number of error conditions are possible; these are described in the MCAPI specification.

MCAPI vs in-house
An implementation of MCAPI is much more scalable than a “roll your own” approach. If additional threads need to send or receive data across cores, they create a new endpoint and go about their business. The implementation is also more reliable, as each buffer transmitted will be received in a predictable order.

Conclusions
Although the implementation of software for multicore designs remains challenging, the MCAPI specification offers a good solution to the key issue of inter-CPU communications and is an increasingly popular standard.

Colin Walls has over thirty years experience in the electronics industry, largely dedicated to embedded software. A frequent presenter at conferences and seminars and author of numerous technical articles and two books on embedded software, Colin is an embedded software technologist with Mentor Embedded [the Mentor Graphics Embedded Software Division], and is based in the UK. His regular blog is located at blogs.mentor.com/colinwalls. He may be reached by email at colin_walls@mentor.com
