Managing the coming explosion of embedded multicore virtualization

Markus Levy, Multicore Association, Surender Kumar, Nokia Siemens Networks, and Rajan Goyal, Cavium Networks

April 18, 2012

Virtualization has been a staple of the server world for years. The very promise of a general computing platform that can do many different things for many different users all at the same time creates the need for managing how that’s done, and virtualization has taken on that role.

The embedded world, on the other hand, for the most part, hasn’t required virtualization. This is largely because embedded devices are, almost by definition, not general-purpose platforms. Their tasks could be hand-crafted at the lowest levels. But as the complexity and range of embedded systems have grown, now including applications such as networking equipment and smart phones, the need for some form of virtualization technology is becoming more evident.

As with most technologies, the embedded embodiment of virtualization is more complex than its server cousin, for the very reason that embedded technology tends to involve more work: the variations, combinations, and permutations of features and functions can easily spin out of control, making it hard for architects and designers to find their way around the space.

Furthermore, the rapid proliferation of multicore devices has broadened the applicability of virtualization technology. For that reason, the Multicore Association has convened a Virtualization Working Group. Participation from a wide variety of embedded players will help establish standards that address the gamut of needs across the diverse embedded community.

Why virtualize?
There have been two main drivers for virtualization within servers. The most transparent one is the desire to make a single computer look like multiple computers to a group of users. There is a clear need for this in the cloud computing world, where the practice is commonly referred to as “multi-tenanting.” A user connects to the cloud and gets a “machine,” but that machine is virtual, and it may only be a portion of an actual hardware machine. The relationship between the “machine” the user perceives and the underlying hardware is completely opaque and, if done properly, irrelevant.

The other major driver for server virtualization is the need to run legacy code on new machines. Such old programs may need out-of-date operating systems or look for resources and I/O that may not exist on the new system. Of course, no one will want to get anywhere near the code (if the source code is even available), so virtualization helps “wrap” the program in a way that lets the program think it’s on its original target platform.

In either case, you have a single computer managed at the most fundamental level by a single operating system (OS), with multiple “guest” programs and/or operating systems running above the OS. It’s the virtualization layer that hosts these guests and provides the translations needed to keep everyone happy (Figure 1 below).

Figure 1. Multicore virtualization layers.
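
To make the layering of Figure 1 concrete, here is a minimal sketch in C of a virtualization layer that owns the hardware and time-slices a set of guest contexts running above it. All names here (struct guest, run_next_guest, and so on) are hypothetical illustrations, not the API of any real hypervisor; a real world switch would also swap page tables, vector tables, and privileged register state rather than just print a message.

```c
/*
 * Hypothetical sketch of the layering in Figure 1: a virtualization
 * layer that owns the hardware and schedules "guest" contexts above
 * it. Names are illustrative, not from any real hypervisor API.
 */
#include <stdio.h>

#define MAX_GUESTS 4

/* Per-guest state the virtualization layer saves and restores. */
struct guest {
    const char   *name;     /* e.g. "legacy RTOS", "Linux"        */
    unsigned long entry;    /* guest entry point (illustrative)   */
    int           runnable;
};

static struct guest guests[MAX_GUESTS];
static int num_guests;

static void register_guest(const char *name, unsigned long entry)
{
    if (num_guests < MAX_GUESTS)
        guests[num_guests++] = (struct guest){ name, entry, 1 };
}

/* The hypervisor's core duty: pick a guest and hand it the CPU.
 * A real implementation would perform a full world switch here;
 * this sketch only prints the scheduling decision. */
static void run_next_guest(int tick)
{
    struct guest *g = &guests[tick % num_guests];
    if (g->runnable)
        printf("tick %d: world-switch to guest '%s'\n", tick, g->name);
}

int main(void)
{
    register_guest("legacy RTOS", 0x8000);
    register_guest("Linux", 0x100000);

    for (int tick = 0; tick < 4; tick++)
        run_next_guest(tick);
    return 0;
}
```

Each guest believes it is running directly on the hardware; the loop in main stands in for the virtualization layer that actually decides who runs and when.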

At a lower level, there are two fundamental duties that virtualization performs: allocation and sharing. In the first case, you have resources that must be allocated to the different guest programs. Those may be memory or interrupts or cores. Virtualization delineates sandboxes within which each guest can play. In the second case, you have I/O and other dedicated resources that each guest must believe it has exclusive access to. Virtualization’s role here is to maintain the charade of exclusivity.
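
The C sketch below illustrates both duties under stated assumptions: a static two-guest partition table for allocation (a private memory window, permitted interrupt lines, and a core affinity mask per guest), and a trap-and-emulate stub for a single UART that each guest believes it owns. The partition layout, field names, and uart_write_trap function are all invented for illustration, not taken from any real hypervisor.

```c
/*
 * Hypothetical sketch of virtualization's two duties.
 * Allocation: each guest gets a private sandbox (memory window,
 * interrupt lines, cores). Sharing: one UART is multiplexed, and
 * the hypervisor maintains the "charade of exclusivity".
 */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* --- Allocation: a static partition per guest ------------------- */
struct partition {
    uintptr_t mem_base;    /* start of guest-private RAM window     */
    size_t    mem_size;
    uint32_t  irq_mask;    /* interrupt lines this guest may take   */
    uint32_t  core_mask;   /* cores this guest may run on           */
};

static const struct partition parts[2] = {
    { 0x80000000, 0x04000000, 0x0000000F, 0x3 }, /* guest 0 */
    { 0x84000000, 0x04000000, 0x000000F0, 0xC }, /* guest 1 */
};

/* The allocation check: is a guest's access inside its sandbox? */
static int mem_access_ok(int guest, uintptr_t addr)
{
    const struct partition *p = &parts[guest];
    return addr >= p->mem_base && addr < p->mem_base + p->mem_size;
}

/* --- Sharing: trap-and-emulate one UART ------------------------- */
/* Each guest writes to what it believes is its own UART; the
 * hypervisor intercepts the access and tags the real output. */
static void uart_write_trap(int guest, char c)
{
    printf("[guest %d] %c\n", guest, c);
}

int main(void)
{
    printf("guest 0 access 0x80001000 ok? %d\n",
           mem_access_ok(0, 0x80001000));
    printf("guest 0 access 0x84001000 ok? %d\n",  /* guest 1's RAM */
           mem_access_ok(0, 0x84001000));

    uart_write_trap(0, 'A');  /* both guests think the UART is theirs */
    uart_write_trap(1, 'B');
    return 0;
}
```

In a production system the memory check would be enforced in hardware by an MMU or MPU, with violations taking a trap into the hypervisor; the point of the sketch is only the division of duties between carving out sandboxes and multiplexing shared devices.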
