Managing the coming explosion of embedded multicore virtualization

Virtualization has been a staple of the server world for years. The very promise of a general computing platform that can do many different things for many different users all at the same time creates the need for managing how that’s done, and virtualization has taken on that role.

The embedded world, on the other hand, for the most part, hasn’t required virtualization. This is largely because embedded devices are, almost by definition, not general-purpose platforms. Their tasks could be hand-crafted at the lowest levels. But as the complexity and range of embedded systems have grown, now including applications such as networking equipment and smart phones, the need for some form of virtualization technology is becoming more evident.

As with most technologies, the embedded embodiment of virtualization is more complex than its server cousin, simply because embedded design tends to involve more variables: the variations, combinations, and permutations of features and functions can easily spin out of control, making it hard for architects and designers to find their way around the space.

Furthermore, the rapid proliferation of multicore devices has created an increased applicability of virtualization technology. For that reason, the Multicore Association has convened a Virtualization Working Group. Participation from a wide variety of embedded players will help establish standards that will address the gamut of needs from the diverse embedded community.

Why virtualize?
There have been two main drivers for virtualization within servers. The most transparent one is the desire to make a single computer look like multiple computers to a group of users. There is a clear need for this in the cloud computing world, commonly referred to as “multi-tenanting.” A user connects to the cloud and gets a “machine,” but that machine is virtual, and it may only be a portion of an actual hardware machine. The relationship between the “machine” the user perceives and the underlying hardware is completely opaque, and, if done properly, irrelevant.

The other major driver for server virtualization is the need to run legacy code on new machines. Such old programs may need out-of-date operating systems or look for resources and I/O that may not exist on the new system. Of course, no one will want to get anywhere near the code (if the source code is even available), so virtualization helps “wrap” the program in a way that lets the program think it’s on its original target platform.

In either case, you have a single computer managed at the most fundamental level by a single operating system (OS), with multiple “guest” programs and/or operating systems running above the OS. It’s the virtualization layer that hosts these guests and provides the translations needed to keep everyone happy (Figure 1, below).

Figure 1. Multicore virtualization layers.

At a lower level, there are two fundamental duties that virtualization performs: allocation and sharing. In the first case, you have resources that must be allocated to the different guest programs. Those may be memory or interrupts or cores. Virtualization delineates sandboxes within which each guest can play. In the second case, you have I/O and other dedicated resources that each guest must believe it has exclusive access to. Virtualization’s role here is to maintain the charade of exclusivity.
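As a rough illustration of the allocation side, a hypervisor might carry a static partition table that assigns cores, a physical memory window, and interrupt lines to each guest. The sketch below is purely hypothetical; the structure names, field layout, and values are invented for this article and do not correspond to any particular hypervisor.

```c
/* Hypothetical, illustration-only partition table: a static hypervisor
 * configuration that allocates cores, a physical memory window, and a set
 * of interrupt lines to each guest. Names and layout are invented for this
 * sketch and do not correspond to any particular product. */
#include <stdint.h>

#define MAX_IRQS_PER_GUEST 4

struct guest_partition {
    const char *name;          /* guest label, e.g. "control-plane Linux" */
    uint32_t    core_mask;     /* bit n set => physical core n belongs to this guest */
    uint64_t    mem_base;      /* start of the guest's physical memory window */
    uint64_t    mem_size;      /* size of that window in bytes */
    int         irqs[MAX_IRQS_PER_GUEST]; /* interrupt lines routed to this guest (-1 = unused) */
};

/* Example allocation: a Linux control-plane guest on core 0 and an RTOS
 * data-plane guest on cores 1-3, each with a private memory window. */
static const struct guest_partition partitions[] = {
    { "control-plane Linux", 0x1, 0x80000000ULL, 256 * 1024 * 1024, { 32, 33, -1, -1 } },
    { "data-plane RTOS",     0xE, 0x90000000ULL, 128 * 1024 * 1024, { 48, -1, -1, -1 } },
};
```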

IPCs and Virtualization
A critical part of this compartmentalization is maintenance of security. If two processes run in two different computers, it’s much harder to have one talk to the other than if they run on the same hardware.

That increased ease of inter-process communication (IPC) becomes a problem if you don’t want them talking together. So it’s essential that virtualization provide isolation between the guests so that they all play nicely with each other – or, more accurately, alongside each other.

Within the embedded world, however, the situation has historically been much simpler, ironically, because of the more specialized nature of each embedded system. Unlike servers, which, more or less, look the same across vendors and applications, embedded systems vary widely in form factor, processor, I/O, hardware architecture, and software architecture.

The most complex systems – largely packet processing – have been handled using an asymmetric multiprocessing (AMP) setup where a control-plane core, for example, may run a full-up Linux OS while a series of other data-plane cores may run one or more instances of a small real-time OS (RTOS) or even no OS at all, executing in a so-called “bare-metal” configuration.

More recently, however, progress has been made towards the ability to outfit an embedded system with a single OS like Linux that can treat different cores differently. Processes can be assigned to different cores more or less exclusively, and it’s possible to limit the amount of OS overhead on a core-by-core basis so that some cores can be made to perform as if they were running on bare metal while still having access to some rudimentary OS services.
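As a concrete illustration of that per-core treatment, and assuming a Linux target, a process can be pinned to a specific core with sched_setaffinity(); in practice this is usually combined with boot-time core isolation (for example, the isolcpus kernel parameter) to approach bare-metal behavior. This is a minimal sketch, not a complete data-plane setup.

```c
/* Minimal sketch: pin the calling process to core 2 on a Linux system,
 * approximating "bare-metal-like" behavior on that core. Typically paired
 * with boot-time core isolation (e.g. the isolcpus kernel parameter). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    cpu_set_t mask;

    CPU_ZERO(&mask);
    CPU_SET(2, &mask);              /* core 2 is an arbitrary example choice */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return EXIT_FAILURE;
    }

    printf("now restricted to core 2\n");
    /* ... data-plane work would run here ... */
    return EXIT_SUCCESS;
}
```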

Virtualization in such a complex embedded system must provide all of the benefits of allocation, sharing, and isolation required for a server. But, in many cases, the multiple “programs” actually work together in the execution of a higher-level system, and so they may need to communicate in a way that would not be necessary – or even desirable – between, say, different tenants in a cloud-computing server. So here the virtualization services would need to be able both to isolate for security and manage communication for the system mission.
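One common way such managed communication is realized is a message ring placed in a memory window that the virtualization layer deliberately shares between two otherwise isolated guests. The sketch below is an illustration only; the layout and names are invented, and a real system would add the memory barriers, cache maintenance, and doorbell interrupts appropriate to the architecture (or use a standard messaging API rather than rolling its own).

```c
/* Illustration only: a single-producer / single-consumer message ring of the
 * kind a virtualization layer might place in a shared memory window between
 * two otherwise isolated guests. Layout and names are invented for this
 * sketch; a real design would add architecture-specific barriers and a
 * doorbell interrupt to signal the peer guest. */
#include <stdint.h>
#include <string.h>

#define RING_SLOTS  16
#define SLOT_BYTES  64

struct shared_ring {
    volatile uint32_t head;                 /* written only by the producer guest */
    volatile uint32_t tail;                 /* written only by the consumer guest */
    uint8_t slots[RING_SLOTS][SLOT_BYTES];  /* fixed-size message payloads */
};

/* Producer side: returns 0 on success, -1 if the ring is full. */
static int ring_send(struct shared_ring *r, const void *msg, uint32_t len)
{
    uint32_t head = r->head;
    if (len > SLOT_BYTES || ((head + 1) % RING_SLOTS) == r->tail)
        return -1;                          /* full: back off and retry later */
    memcpy(r->slots[head], msg, len);
    r->head = (head + 1) % RING_SLOTS;      /* publish only after the payload is written */
    return 0;
}

/* Consumer side: returns 0 on success, -1 if the ring is empty. */
static int ring_recv(struct shared_ring *r, void *msg, uint32_t len)
{
    uint32_t tail = r->tail;
    if (tail == r->head)
        return -1;                          /* empty */
    if (len > SLOT_BYTES)
        len = SLOT_BYTES;
    memcpy(msg, r->slots[tail], len);
    r->tail = (tail + 1) % RING_SLOTS;
    return 0;
}
```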

Depending on the application, the ability to scale a platform may be important, and this can take place in one of two dimensions: up and out. “Scaling up” refers to the ability to make the most efficient use of existing resources in support of additional guests. “Scaling out” refers to the ability to add additional computing resources while adjusting the load balancing to make best use of them (Figure 2, below).



Figure 2. Scaling multicore up and out.

Virtualization requirements are growing beyond communications equipment into high-end storage and imaging systems. Even in the consumer world, an increasing number of sophisticated portable devices like smartphones, as well as stationary devices like set-top boxes, have become complex enough to warrant an intervening virtualization layer.

Wrestling with diverse virtualization needs
While the benefits of virtualization may be clear, the fact remains that its use means that an extra software layer – or several – is being introduced between the program and the underlying computing resources. That generally means a sacrifice of performance as that layer intervenes in many – even most – operations. A guest program will have a virtual address map; the virtualization system will need to map each memory access. Any I/O use must be mediated through the virtualization layer. Permissions must be checked to make sure the different processes stay within their prescribed bounds and outside of their proscribed bounds.
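The memory-mapping cost is easiest to see as a two-stage lookup: the guest OS maps a guest-virtual address to a guest-physical one, and the virtualization layer then maps that to a host-physical address. The toy sketch below uses flat arrays as stand-in "page tables" purely to show the extra level of indirection; real MMUs use multi-level tables and TLBs, and hardware-assisted nested translation exists precisely to take this work off the hypervisor's hands.

```c
/* Toy illustration of two-stage address translation: guest-virtual ->
 * guest-physical (guest OS's table) -> host-physical (hypervisor's table).
 * The flat arrays and sizes are invented for this sketch; they only
 * demonstrate the extra indirection that hardware assist accelerates. */
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  256                        /* tiny illustrative address space */

static uint32_t guest_page_table[NUM_PAGES];  /* guest-virtual page -> guest-physical page */
static uint32_t stage2_table[NUM_PAGES];      /* guest-physical page -> host-physical page */

static uint64_t translate(uint64_t guest_virt)
{
    uint32_t gv_page = (uint32_t)(guest_virt >> PAGE_SHIFT) % NUM_PAGES;
    uint32_t offset  = (uint32_t)(guest_virt & (PAGE_SIZE - 1));

    uint32_t gp_page = guest_page_table[gv_page];           /* stage 1: guest OS mapping */
    uint32_t hp_page = stage2_table[gp_page % NUM_PAGES];   /* stage 2: hypervisor mapping */

    return ((uint64_t)hp_page << PAGE_SHIFT) | offset;
}
```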

For this reason, processor manufacturers are increasingly designing hardware support for virtualization into the processors themselves. Such assist features may reflect the needs of memory management, I/O allocation, application-specific accelerators, scaling up, and scaling out.

Such support, of course, represents a cost increase because it consumes silicon. So there’s a need to balance and trade off such features in the context of the intended application of the processor.

On the other side of that equation, system architects will want to choose processors that have virtualization hooks well-matched to their applications. While it may be straightforward for processor datasheets to list all of the specific hardware virtualization features, there’s a conceptual gap between a feature and the application to which it’s best suited.

The Multicore Association, an industry consortium formed in 2005 to provide standards and guidance to engineers and companies crossing into the multicore world, is stepping in to provide some structure that will guide both producers and users of multicore processors and other supporting components.

Chaired by Cavium’s Rajan Goyal and Nokia Siemens Networks’s Surender Kumar, the Virtualization Working Group has been formed in order to assemble the wide range of voices that must be heard so that the needs of all of the different hardware, software, and system players are met.

There are actually two tasks being undertaken by this working group. The first is handled by the Software Virtualization Strategies (SVS) subgroup. The goal here is to provide guidance to engineers trying to migrate software from older AMP systems onto the heterogeneous single-OS architectures that are emerging now.

The other subgroup is tasked with identifying Multicore Virtualization Profiles (MVP). This gets directly to the challenges faced by makers and users of components in targeting virtualization features towards specific applications. The idea is to identify a series of classes of application – for example, networking, mobile infrastructure, data center, and cloud computing – and identify the virtualization features required for each of them.
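One way to picture such a profile is as a matrix pairing each application class with the set of hardware virtualization features it requires. The sketch below is speculative: the classes and feature names are placeholders chosen for illustration, and the actual Multicore Virtualization Profiles are defined by the working group, not by this example.

```c
/* Speculative sketch of a virtualization profile expressed as data: an
 * application class paired with a bitmask of required hardware features.
 * Class and feature names are placeholders; the real MVP definitions come
 * from the Multicore Association working group, not this illustration. */
#include <stdint.h>

enum app_class { APP_NETWORKING, APP_MOBILE_INFRA, APP_DATA_CENTER, APP_CLOUD };

enum virt_feature {
    FEAT_TWO_STAGE_MMU = 1u << 0,   /* nested/two-stage address translation */
    FEAT_IOMMU         = 1u << 1,   /* I/O memory management / device isolation */
    FEAT_IRQ_VIRT      = 1u << 2,   /* interrupt virtualization */
    FEAT_ACCEL_SHARING = 1u << 3,   /* sharing of application-specific accelerators */
};

struct virt_profile {
    enum app_class app;
    uint32_t       required_features;
};

/* Example (invented) profile entries a system architect might compare
 * against a candidate processor's datasheet. */
static const struct virt_profile profiles[] = {
    { APP_NETWORKING,  FEAT_TWO_STAGE_MMU | FEAT_IOMMU | FEAT_ACCEL_SHARING },
    { APP_DATA_CENTER, FEAT_TWO_STAGE_MMU | FEAT_IOMMU | FEAT_IRQ_VIRT },
};
```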

This lets processor designers target the features needed to serve their intended applications more effectively. While a system designer will choose a processor based on a number of high-level requirements, this profile analysis will provide additional information confirming whether or not the selected processor is well suited to virtualization for the intended application.

At present, the working group is soliciting participation by as many stakeholders as possible to ensure adequate representation; the process is at a critical stage, since results are expected within the second or third quarter of 2012. Anyone desiring input into any of these activities is urged to contact the working group at www.multicore-association.org.

These efforts will provide scaffolding on which the varied needs of virtualization can grow so that they can be readily comprehended and managed by everyone who provides, uses, or otherwise interacts with some element of virtualization in servers and embedded systems.

Rajan Goyal is a distinguished engineer at Cavium, Inc., where he serves as chief architect for Content Processing and Algorithmic Search Technology. Prior to Cavium, he was founder and chief technology officer at Iota Networks, a startup based in San Jose, CA, that developed intellectual property in the field of content processing. He holds a bachelor's degree in Mechanical Engineering from the Thapar Institute of Engineering and Technology, India, and an MS in Computer Science from Stanford University.

Surender Kumar works as a software specialist at Nokia Siemens Networks. He has over 16 years of experience in the networking industry, with wide experience in the areas of multicore processors, data-plane and protocol software, middleware, and virtualization. He currently chairs the Virtualization Working Group at the Multicore Association. He has a master's degree in computer science and engineering.

Markus Levy is president of The Multicore Association and chairman of the Multicore Developer's Conference. He is also the founder and president of EEMBC. 
