Performance overhead of KVM for Linux 3.9 on ARM Cortex-A15 - Embedded.com


Virtualization has been an important disruptive technology in the server and enterprise space for a number of years now. Within the embedded server and platform segment, virtualization is gaining interest but is still considered a poor match for the specific requirements of a tightly coupled, high-performing software system sharing multiple hardware accelerators and digital signal processing units.

However, we observe that things are changing: with the advent of cheap, powerful many-core chips, the nature of what constitutes such a system is also expected to change dramatically.

To date, much of the research and engineering effort around embedded virtualization has focused on micro-kernel-based hypervisor solutions. With their special attention to high bandwidth, low latency, and the isolation properties needed for robustness, micro-kernel-based hypervisors do seem a good match for the specific requirements of a classic embedded system.

However, as the Linux operating system takes an ever-increasing share of embedded server platforms and displaces the more classic real-time OSes, we feel it is necessary to understand how Linux and its specific ecosystem of software can act as an embedded hypervisor supporting guest Linux instances with real-time requirements.

We see our work in this paper, based on the ARM Cortex-A15 architecture and KVM, as a first step in understanding this area.

KVM is a part of the Linux kernel that makes Linux capable of running not just regular application binaries, but also an unmodified kernel binary inside a special kind of process. The kernel inside the process is called the guest, and the main kernel is called the host. The guest is scheduled by the host as a normal process, interleaved with other processes, but it cannot open files or make other syscalls to the host.

Reasons to run an application in a KVM process may be to provide isolation of data or faults – the guest does not see any of the host's processes or resources unless so configured, and it cannot crash the host system. Or the reason could be a need to run legacy software that cannot run directly on the host.
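Userspace tools such as qemu-system-arm drive KVM through ioctl calls on the `/dev/kvm` device node. As a minimal illustration (not from the paper), the sketch below probes whether KVM is available on the host and reports the kernel's KVM API version; the ioctl constant value is taken from the Linux UAPI headers, where `KVM_GET_API_VERSION` is `_IO(0xAE, 0x00)`.

```python
import fcntl
import os

# KVM_GET_API_VERSION is _IO(0xAE, 0x00), which encodes to 0xAE00.
KVM_GET_API_VERSION = 0xAE00

def kvm_api_version():
    """Return the host's KVM API version, or None if /dev/kvm is unavailable."""
    try:
        fd = os.open("/dev/kvm", os.O_RDWR)
    except OSError:
        # No KVM support in the kernel, no permission, or not a Linux host.
        return None
    try:
        return fcntl.ioctl(fd, KVM_GET_API_VERSION)
    finally:
        os.close(fd)

print(kvm_api_version())
```

On any kernel with KVM enabled this reports API version 12, which has been the stable KVM API version since it was frozen.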

Using KVM or other kinds of virtualization comes at a cost, and different hardware platforms have different costs and bottlenecks depending on how well suited they are to a specific virtualization technology. A bottleneck may, for instance, be whether a platform requires execution to pass through the hypervisor when moving between user space and kernel space.

Likewise, modifying the virtual memory translation may or may not incur overhead. And one system may support device-initiated DMA to the virtual memory, while another may require the hypervisor to completely virtualize and emulate the devices.

A number of simple performance measurements on network, CPU and disk speed were done on a dual-core ARM Cortex-A15 machine running Linux inside a KVM virtual machine that uses virtio disk and networking.
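The paper's exact benchmarks are not reproduced here, but a minimal disk-throughput probe in the same spirit might look like the following sketch, which writes a fixed amount of data in fixed-size blocks, forces it to the device, and reports MB/s (the sizes are illustrative assumptions, not the paper's parameters):

```python
import os
import tempfile
import time

def disk_write_throughput(total_mb=16, block_kb=64):
    """Write total_mb of zeros in block_kb chunks and return throughput in MB/s."""
    block = b"\0" * (block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    with tempfile.NamedTemporaryFile() as f:
        start = time.perf_counter()
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to the device, not just the page cache
        elapsed = time.perf_counter() - start
    return total_mb / elapsed

print(f"{disk_write_throughput():.1f} MB/s")
```

Running the same probe natively on the host and inside the guest (where virtio forwards the I/O to the host) gives a direct before/after comparison for the disk path.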

Unexpected behaviour was observed in the CPU- and memory-intensive benchmarks, and in the networking benchmarks. The average overhead of running inside KVM is between zero and 30 percent when the host is lightly loaded (running only the system software and the necessary qemu-system-arm virtualization code), but the relative overhead increases when both host and VM are busy. We conjecture that this is related to the scheduling inside the host Linux.
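The overhead figures quoted above are relative: if a benchmark takes t_host seconds natively and t_vm seconds inside the guest, the overhead is (t_vm − t_host) / t_host. A small helper makes the convention explicit (the sample timings are illustrative, not measurements from the paper):

```python
def relative_overhead(t_host, t_vm):
    """Relative slowdown of the virtualized run versus the native run."""
    return (t_vm - t_host) / t_host

# A run that takes 13 s in the VM versus 10 s natively is 30% overhead,
# the upper end of the range reported for a lightly loaded host.
print(f"{relative_overhead(10.0, 13.0):.0%}")  # → 30%
```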

To read more of this external content, download the complete paper from the open online archives at Mälardalen University.
