Make virtualization work for mobile devices

Over the last five years, virtualization has evolved from an obscure technology into a key enabler of enterprise server and desktop applications. More recently, virtualization has begun to play a comparably pivotal role in embedded development and deployment. The segments leading this wave of adoption are mobile telephony, telecommunications and network infrastructure, and secure embedded computing. In all these areas, developers and integrators look to virtualization to increase reliability and security, to ease maintenance and forward migration of legacy code, and to optimize hardware utilization, both for multi-OS partitioning of a single CPU and for managing execution across multiple CPUs and multicore processors.
Despite the obvious appeal of virtualization to embedded software developers and original equipment manufacturers (OEMs), adoption of the technology could stall due to inherent limitations in virtualization platform architecture. This article examines those limitations and how they can be overcome by a different approach to building embedded virtualization software.
Limitations and Challenges to Embedded Virtualization
Let's examine the areas in which virtualization can fall short in embedded design, including those areas targeted by advocates of virtualization as ripest for adoption.
Managing Software Complexity
The most strident clarion call for adoption of virtualization in embedded systems arises from dramatic increases in the size and complexity of embedded software. For the past decade, embedded software content has been doubling annually, such that today embedded systems boast source code bases of tens of millions of lines of code, equaling and sometimes surpassing the volume of enterprise program source. The challenge of managing and maintaining this volume of code is compounded by the inherently complex, multi-threaded and latency-sensitive nature of embedded software.
Advocates of embedded virtualization cite this present and growing complexity as the prime motivator for adopting virtualization platforms. Unfortunately, virtualization falls short in addressing this primary challenge to embedded software development. While segmenting and isolating software components into distinct virtual machine (VM) containers can enhance reliability, VM-level granularity is too coarse to make a serious dent in addressing creeping complexity. Guest OSs and hosted applications running in separate VMs can actually increase overall complexity, especially when virtualization platform software lacks insight and integration into embedded systems architecture and is not harmonized with embedded software engineering practices.
Isolation vs. Integration
The clearest and most immediate benefit that embedded applications realize from virtualization is improved reliability and security from strict, hardware-enforced separation among guest operating systems (Linux, Windows CE, RTOSes, etc.) and other execution contexts (lightweight in-house kernels, device drivers, etc.). This isolation helps prevent unintended corruption of code and data across independent functional areas in intelligent devices (e.g., baseband radio stacks and user interface code in mobile phones), and also erects barriers to malicious access by code downloaded by end users.
The robustness that virtualization affords embedded applications, however, runs counter to traditional embedded design practices. Such practices emphasize efficient data sharing among embedded software components, which strict partitioning of code into virtual machines obstructs or even disables outright. Moreover, without streamlined communication among code running in different VMs, virtualization can degrade embedded systems performance to unacceptable levels and disrupt existing integration between OEM and third-party software.
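Streamlined inter-VM communication is typically built on a region of memory shared between guests plus a simple lock-free queue, so that data moves between VMs without trapping into the hypervisor on every message. The sketch below shows a minimal single-producer/single-consumer ring buffer of the kind such channels rest on; the names (`ivc_ring`, `ivc_push`, `ivc_pop`) and the layout are invented for illustration and do not correspond to any particular hypervisor's API.

```c
#include <stdbool.h>
#include <stdint.h>

/* Minimal single-producer/single-consumer ring buffer, the usual
 * basis for low-overhead inter-VM channels over a shared page.
 * Names and layout are illustrative, not a real hypervisor API. */

#define IVC_SLOTS 8u  /* power of two, so we can mask instead of mod */

typedef struct {
    volatile uint32_t head;       /* advanced only by the producer VM */
    volatile uint32_t tail;       /* advanced only by the consumer VM */
    uint32_t slots[IVC_SLOTS];    /* message payloads */
} ivc_ring;

static void ivc_init(ivc_ring *r) { r->head = r->tail = 0; }

/* Producer side: returns false when the ring is full. */
static bool ivc_push(ivc_ring *r, uint32_t msg)
{
    if (r->head - r->tail == IVC_SLOTS)
        return false;                         /* full */
    r->slots[r->head & (IVC_SLOTS - 1)] = msg;
    r->head++;  /* a real channel needs a memory barrier before this */
    return true;
}

/* Consumer side: returns false when the ring is empty. */
static bool ivc_pop(ivc_ring *r, uint32_t *msg)
{
    if (r->tail == r->head)
        return false;                         /* empty */
    *msg = r->slots[r->tail & (IVC_SLOTS - 1)];
    r->tail++;
    return true;
}
```

In a real deployment the structure would live in a page mapped into both guests, with explicit memory barriers and a hypervisor-provided doorbell interrupt to wake the consumer, rather than polling.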
Embedded systems software routinely involves complex mixes of multi-process and multi-threaded programs. Both mobile and stationary systems can require scheduling and synchronization of hundreds of tasks; even quiescent or suspended equipment can still boast dozens of running threads. Moreover, unlike much enterprise and desktop software, embedded applications can involve rich prioritization schemes and disciplines (e.g., Rate Monotonic Analysis). Designing, debugging and tuning scheduling priority, execution policy and real-time event response are system-wide activities that demand fine-grained visibility and control.
Imposition of virtualization runs counter to this detailed, system-wide perspective. Segregation of software components places each in a virtual "black box," with its own OS-specific scheduling priority and policy characteristics. Without the ability to synchronize and normalize scheduling priority and policy across VMs, embedded software devolves into a collection of "wheels within wheels." Each OS in each VM runs according to its own scheduling scheme, and prioritization across virtual machines occurs at the level of the entire VM or guest OS, not at the required global level of individual tasks or other schedulable entities. This VM-level opacity not only runs counter to common embedded design practices, but can also completely impair the development, debugging and deployment of even moderately complex embedded systems.
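The "wheels within wheels" effect can be made concrete with a small simulation: when a hypervisor schedules whole VMs and each guest OS then picks its own highest-priority task, a globally less urgent task can run ahead of a more urgent one in a lower-priority VM. Everything below (the types, the priority numbers, the two pick functions) is invented purely to illustrate that inversion.

```c
#include <stddef.h>

/* Illustrative only: shows how VM-level scheduling can invert
 * global task priorities. Higher number = higher priority. */

typedef struct { const char *name; int prio; } task_t;

typedef struct {
    int vm_prio;           /* priority of the VM as a whole */
    const task_t *tasks;   /* tasks inside the guest OS */
    size_t ntasks;
} vm_t;

/* Guest-local scheduling: each OS picks its own top task. */
static const task_t *top_task(const vm_t *vm)
{
    const task_t *best = &vm->tasks[0];
    for (size_t i = 1; i < vm->ntasks; i++)
        if (vm->tasks[i].prio > best->prio)
            best = &vm->tasks[i];
    return best;
}

/* Two-level pick: the hypervisor chooses the highest-priority VM,
 * then that VM's guest OS chooses its top task. */
static const task_t *two_level_pick(const vm_t *vms, size_t nvms)
{
    const vm_t *best = &vms[0];
    for (size_t i = 1; i < nvms; i++)
        if (vms[i].vm_prio > best->vm_prio)
            best = &vms[i];
    return top_task(best);
}

/* Global pick: what a system-wide, task-granular scheduler would do. */
static const task_t *global_pick(const vm_t *vms, size_t nvms)
{
    const task_t *best = top_task(&vms[0]);
    for (size_t i = 1; i < nvms; i++) {
        const task_t *t = top_task(&vms[i]);
        if (t->prio > best->prio)
            best = t;
    }
    return best;
}
```

With a UI task of priority 3 in a high-priority VM and a baseband interrupt handler of priority 9 in a low-priority VM, `two_level_pick` runs the UI task while `global_pick` would run the baseband handler: exactly the inversion that VM-granular prioritization produces.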
Energy Management
The need to optimize energy utilization in embedded systems stems from domain-specific requirements. In mobile devices like phones and media players, energy management yields longer battery life and helps OEMs differentiate their products in crowded marketplaces. In stationary equipment, like networking equipment (routers, gateways, security appliances) and consumer electronics devices (television sets, DVRs, IP phones, and durable appliances), energy management helps lower electric bills and meet emerging needs for energy conservation.
Energy management in intelligent devices involves a mix of hardware and software techniques. Most often it involves the OS kernel recognizing both reduced user interaction (a fall-off in keyboard/keypad and other input device events) and quiescent program states (waiting for external events or long pauses in the execution profile). When systems enter such idle states, energy management software can selectively shut down power-hungry devices like LCD displays, scale back CPU and bus clocks, and lower operating voltages. Conversely, that same software must be able to ramp performance back to full-throttle levels to service new events and user input.
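The idle-detection logic described above can be sketched as a tiny governor that maps recent input activity and the amount of runnable work onto a performance level. The thresholds, level names and function name below are invented for this sketch; a production governor would also weigh per-device states, wakeup latency budgets and voltage/frequency tables.

```c
#include <stdint.h>

/* Illustrative idle governor: maps recent activity onto a
 * performance level. Thresholds and names are invented. */

typedef enum {
    PERF_FULL,    /* full clocks and voltage           */
    PERF_SCALED,  /* CPU/bus clocks scaled back        */
    PERF_SLEEP    /* displays off, deepest idle state  */
} perf_level;

/* ms_since_input: time since the last keypad/touch event.
 * runnable_tasks: tasks ready to run; quiescent programs block
 * waiting on external events, so this drops toward zero. */
static perf_level pick_perf_level(uint32_t ms_since_input,
                                  unsigned runnable_tasks)
{
    /* Fresh user input with work pending: full throttle. */
    if (runnable_tasks > 0 && ms_since_input < 500)
        return PERF_FULL;
    /* Work pending but the user has gone idle: scale clocks back. */
    if (runnable_tasks > 0)
        return PERF_SCALED;
    /* Fully quiescent: shut down power-hungry devices. */
    return PERF_SLEEP;
}
```

The ramp-up path is the same function read in reverse: a new input event resets `ms_since_input`, and the next scheduling pass returns `PERF_FULL`.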
Effective energy management requires extensive cooperation among the OS, device drivers and even application software; it is a global discipline. Unfortunately, as with scheduling, opacity across VM contexts prevents effective energy management: one quiescent guest OS lacks the visibility into other VMs needed to make energy management policy decisions. In complementary fashion, hypervisors lack sufficient understanding of the internal states of the guest OSes they manage, preventing energy management at the virtualization platform level.