Managing the coming explosion of embedded multicore virtualization

Markus Levy, Multicore Association, Surender Kumar, Nokia Siemens Networks, and Rajan Goyal, Cavium Networks

April 18, 2012

IPCs and Virtualization
A critical part of this compartmentalization is maintaining security. If two processes run on two different computers, it’s much harder for one to talk to the other than if they run on the same hardware.

That increased ease of inter-process communication (IPC) becomes a problem if you don’t want the processes talking to each other. So it’s essential that virtualization provide isolation between the guests so that they all play nicely with each other – or, more accurately, alongside each other.
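To see why same-hardware IPC is so easy – and thus why a virtualization layer must police it – consider the minimal C sketch below, which uses the standard POSIX shared-memory API: any process that knows the segment name can map the very same memory. The segment name and message here are purely hypothetical, chosen for illustration.

/* Minimal POSIX shared-memory IPC sketch (Linux/POSIX).
 * Any cooperating process that knows the segment name can open it --
 * convenient when processes should talk, dangerous when they shouldn't.
 * Build with: gcc ipc_demo.c -o ipc_demo -lrt   (-lrt only on older glibc)
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/demo_shm";          /* hypothetical segment name */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

    /* Map the segment; a peer process mapping the same name sees the
     * same bytes with no copying and no network stack in between. */
    char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(buf, "hello from process A");     /* a peer simply reads buf */
    printf("wrote: %s\n", buf);

    munmap(buf, 4096);
    close(fd);
    shm_unlink(name);                        /* remove the segment */
    return 0;
}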

Within the embedded world, however, most systems have historically been much simpler in this regard – ironically, because of the more complex nature of embedded systems themselves. Unlike servers, which look more or less the same across vendors and applications, embedded systems vary wildly in form factor, processor, I/O, hardware architecture, and software architecture.

The most complex systems – largely those doing packet processing – have been handled using an asymmetric multiprocessing (AMP) setup in which a control-plane core, for example, may run a full Linux OS while a series of data-plane cores run one or more instances of a small real-time OS (RTOS) or even no OS at all, executing in a so-called “bare-metal” configuration.

More recently, however, progress has been made towards the ability to outfit an embedded system with a single OS like Linux that can treat different cores differently. Processes can be assigned to different cores more or less exclusively, and it’s possible to limit the amount of OS overhead on a core-by-core basis so that some cores can be made to perform as if they were running on bare metal while still having access to some rudimentary OS services.
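On Linux, this kind of per-core assignment is typically done through CPU-affinity calls, optionally combined with boot parameters such as isolcpus= and nohz_full= that keep scheduler and timer-tick overhead off designated cores. The short sketch below pins the calling process to a single core using the standard sched_setaffinity() interface; the choice of core 2 is just an example.

/* Pinning a process to one core on Linux (illustrative sketch).
 * Combined with boot parameters such as isolcpus= and nohz_full=,
 * the dedicated core can run nearly tick-free, close to bare metal,
 * while the process still has access to OS services.
 * Build with: gcc pin_core.c -o pin_core
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);                 /* example: dedicate core 2 */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("now restricted to core 2; data-plane work runs here\n");
    return 0;
}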

Virtualization in such a complex embedded system must provide all of the benefits of allocation, sharing, and isolation required for a server. But, in many cases, the multiple “programs” actually work together in the execution of a higher-level system, and so they may need to communicate in a way that would not be necessary – or even desirable – between, say, different tenants in a cloud-computing server. So here the virtualization services would need to be able both to isolate for security and manage communication for the system mission.

Depending on the application, the ability to scale a platform may be important, and this can take place in one of two dimensions: up and out. “Scaling up” refers to the ability to make the most efficient use of existing resources in support of additional guests. “Scaling out” refers to the ability to add computing resources while adjusting the load balancing to make the best use of them (Figure 2 below).



Figure 2. Scaling multicore up and out.

Virtualization requirements are growing beyond communications to include high-end storage and imaging systems. Even in the consumer world, an increasing number of sophisticated portable devices like smartphones, as well as stationary devices like set-top boxes, have become complex enough to warrant an intervening virtualization layer.
