The idea of a hypervisor in a powerful computer is well known. It facilitates the simultaneous use of multiple operating systems and provides a virtualized environment in which unmodified legacy software may be deployed. Hypervisors have a place in modern embedded systems too. This article considers the characteristics of an embedded hypervisor, what facilities it can offer and some typical applications.
Some technologies should not really exist. They do, however, because they address a specific need. Typically, such technologies stretch something to make it perform in a way that was never originally intended. An example is the fax machine. In a paper-based office environment, there was a frequent need to move documents from A to B. Initially, this need was met by the mail, but fax was an ingenious way to use phone lines to deliver a similar result. As soon as email became widespread, fax disappeared almost overnight. Domestic broadband is another example – ancient copper wiring is used to deliver digital connectivity at speeds that would have been regarded as quite impossible just a few years ago. Hypervisors are yet another “should not exist” technology that has found a firm place in the world.
What is a hypervisor?
Broadly speaking, a hypervisor is a software layer that enables multiple operating systems to run simultaneously on a single hardware platform. Hypervisors have been used for decades on mainframes, and more recently on desktop computers, but are now beginning to be very relevant to embedded developers.
Hypervisors are not really a new technology – the first recognizable products were introduced on mainframe computers nearly 50 years ago. The incentives at that time were to make the very best use of costly resources. The expensive hardware needed to be used efficiently to be at all economic and downtime was expensive. Software investments needed to be protected, so facilitating seamless execution on new hardware was attractive. An interesting irony is that IBM's early virtualization software was distributed in source form (initially with no support) and modified/enhanced by users; this was many years before the concept of open source was conceived.
In the context of a modern embedded system, what is the benefit of running multiple OSes on one piece of hardware, bearing in mind that this introduces significant complexity? The most important answer is security. A hypervisor provides a very strong layer of insulation and protection between the guest operating systems, ensuring that software running under one guest cannot interfere with another. A secondary, but still very significant, motivation to run multiple OSes is IP reuse. Imagine that there is some important software IP available for Linux that you want to use in your design, but your device has real-time requirements, so an RTOS makes better sense. If multicore is not an option (as that is another way to run multiple OSes on one device), using a hypervisor is the way forward, so that you can run both Linux and your RTOS.
Types of hypervisor
Hypervisors essentially come in two flavors, which are imaginatively named Type 1 and Type 2. Type 1 hypervisors run on bare metal; Type 2 hypervisors require an underlying host operating system. For embedded systems, Type 1 makes most sense for the majority of applications.
Multicore – SMP and AMP
The statement that a hypervisor enables multiple OSes on a single hardware platform might be taken to imply a single processor. In fact, many products support the use of multiple CPUs, with the hypervisor distributing resources among them. Exploring this further requires a little consideration of multicore basics.
From a software perspective, there are essentially two types of multicore architecture: Symmetric MultiProcessing (SMP) and Asymmetric MultiProcessing (AMP). With SMP, a single operating system runs across all the CPU cores, which must be of identical architecture. The OS needs to be specifically designed to run in an SMP system. In an AMP system, each CPU runs its own OS (or none at all); the CPUs need not be of the same architecture.
Hypervisors and multicore
All embedded systems have finite resources, which need to be managed. It may be argued that this is the crux of what embedded software development is all about. For the purposes of this discussion, we will consider a hypothetical example system which has a single serial port. There are various ways that this port may be managed, depending on the configuration of software and CPU hardware.
On a single-core system, the operating system can manage access to the serial port and arbitrate between multiple tasks that might wish to make use of it. There are lots of ways that this might be achieved and this is a well-trodden path.
If the system has multiple, identical cores, it might be useful to configure it as SMP – a single operating system instance running across all the cores. SMP versions of many RTOS products (like Nucleus) are available, and Linux with SMP support is also an option. This approach is ideal if the application benefits from the OS being able to distribute CPU power as needed. The OS can manage access to the serial port in much the same way as on a single core system.
On many systems, SMP is unattractive because more precise control is required or the system has so many cores that performance degradation is likely. On others, SMP is impossible because the cores are not identical. In these cases, an AMP configuration, with a separate OS instance on each core, makes sense. This presents a challenge: how might the single serial port be managed?
If all the cores in the system are suitable for running a hypervisor, a possibility is to run it on all of them, with each core's OS considered to be a “guest”. Access to the serial port may then be managed by the hypervisor.
If one or more of the cores are not capable of running a hypervisor (because they are lower performance processors), an alternative option may be available (for example, the Heterogeneous Multicore Framework – HMF – from Mentor Embedded). This facilitates management of the smaller core by the hypervisor running elsewhere (on a more powerful CPU on the same chip). Thus, the hypervisor can manage access to the serial port by all the cores in the system, whether they are running the hypervisor or HMF.
Embedded hypervisor applications
There are broadly three application areas in which an embedded hypervisor finds its place: automotive, industrial, and medical.
In automotive systems, there is the possibility for infotainment software, instrument cluster control and telematics all to run on a single multicore chip. As a mixture of OSes is likely to be needed – an RTOS for instrumentation and GPS, and Linux for audio, for example – a hypervisor makes a lot of sense.
For industrial applications (factories, mines, power plants etc.) there is commonly a need for real-time control (RTOS) and sophisticated networking (Linux). In addition, in recent years there has been increasing concern about cyber-attacks on control systems and other ways that malware might be introduced into them. A hypervisor is the ideal way to separate systems and maintain security.
Medical systems introduce some new challenges. Typically, there is a mixture of real-time (patient monitoring and treatment control) and non-real-time (data storage, networking and user interface) functionality, so a hypervisor initially looks attractive. Patient data confidentiality is also critical, so the security side of a hypervisor becomes significant. Lastly, the ability to completely separate the parts of the system that require certification (normally the real-time parts) makes a hypervisor compelling.
Colin Walls has over thirty years' experience in the electronics industry, largely dedicated to embedded software. A frequent presenter at conferences and seminars and author of numerous technical articles and two books on embedded software, Colin is an embedded software technologist with Mentor Embedded (the Mentor Graphics Embedded Software Division), and is based in the UK. His regular blog is located at: http://blogs.mentor.com/colinwalls. He may be reached by email at email@example.com