This “Product How-To” article focuses on how to use a certain product in an embedded system and is written by a company representative.
The advances made in multi-core technology and associated middleware allow developers to combine the best principles of multi-processing, virtualisation, real-time and hard partitioning to create a highly optimised execution environment for embedded applications. Here we look at the technology impact of multi-core processors on operating system and application software design.
Of late, innovation in processor architecture has been focused on creating multi-core processors. These multi-core processors introduce two or more processing cores in a single chip, thereby giving operating systems and applications access to increased computing power.
One of the significant advantages of these multi-core processors is the additional computing resources they offer without any significant increase in size and weight; previous generations of multi-processing configurations involved two or more physical chips that required additional real estate on processor boards.
The immediate benefits are obvious: applications that were designed around uni-processor configurations can replace uni-processor cores with dual- or quad-core processors. The computing power of these configurations increases dramatically with no appreciable change in their physical configuration.
The software impact of multi-core processors is fairly immediate on operating system design. The OS has to adapt to support symmetric multi-processing (SMP) or asymmetric multi-processing (AMP), the two major approaches for support of multi-core processors.
The operating system design has to adapt in the areas of scheduling, interrupt handling, synchronisation and load balancing. Application programs can also be affected by multi-core processors, based on the ability of the OS to provide fine-grained control of process scheduling to applications.
For example, an application can make a request to execute on a specific processor core only. However, the increase in compute power through multi-core processors can be better harnessed through another recent trend in OS design, namely virtualisation.
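On an SMP operating system such as Linux, this kind of per-core request is typically made through a CPU-affinity interface. The minimal sketch below uses Python's Linux-specific `os.sched_getaffinity`/`os.sched_setaffinity` calls to illustrate the idea; it is a generic OS facility, not a feature of any particular product discussed here.

```python
import os

# Ask the OS which cores this process is currently eligible to run on.
# (0 means "the calling process"; these calls are Linux-specific.)
allowed = os.sched_getaffinity(0)
print(f"Eligible cores: {sorted(allowed)}")

# Request execution on a single specific core only.
one_core = {min(allowed)}
os.sched_setaffinity(0, one_core)
assert os.sched_getaffinity(0) == one_core

# Restore the original affinity mask.
os.sched_setaffinity(0, allowed)
```

An RTOS would expose the same capability through its own scheduling API; the point is simply that the OS, not the application, remains the arbiter of core assignment.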
Virtualise with binary compatibility layers
Virtualisation is a technique used to create an execution environment for software that is similar to the one it was originally designed for, but on different hardware or a different operating system. It can usually be achieved at two levels: OS virtualisation and hardware virtualisation.
Operating system virtualisation is done using binary compatibility layers that run on heterogeneous operating system environments, while presenting an interface similar to the original OS environment. This is most often done to achieve migration and execution of applications across multiple heterogeneous operating system environments. For example, the ability to run Windows applications on Linux uses a virtualisation technique that simulates the behaviour of the Windows operating system on Linux.
Hardware virtualisation involves the emulation of the underlying hardware capabilities to allow operating systems themselves to run in a hardware environment different from their original environment. Software programs that emulate the underlying hardware capabilities are called virtual machines (VM) or virtual machine monitors (VMM).
A VM abstracts the capabilities of hardware and makes it available in environments different from the original hardware. Some of the well-known virtual machines are VMware, which emulates a standard Intel x86 PC architecture on a Macintosh environment, and the Java Virtual Machine (JVM), which emulates a specialised byte-code for a pseudo-processor.
Hardware virtualisation can also be extended to allow multiple heterogeneous operating systems to execute on a single physical machine. The ample computing resources of modern multi-core processors make this extension possible. However, these multiple instances of heterogeneous operating systems need to execute in a resource-isolated environment, with no functional impact on other instances of operating systems. This is essential since they will be sharing computing resources.
Hardware virtualisation for an OS
Enabling multiple instances of heterogeneous operating systems on a single machine involves solving technical challenges in virtualisation and resource isolation, while retaining complete binary compatibility and an acceptable level of performance.
Virtualising multiple instances of an operating system can be done using either full virtualisation or partial virtualisation. The virtual machine in either case virtualises the hardware to provide the illusion of real hardware for the operating systems executing on this virtual machine. However, full and partial virtualisation have some key differences in their overall architecture, leading to a different set of trade-offs.
Full virtualisation of the underlying hardware requires virtualising all the capabilities of the processor and board. This involves complex manipulations of memory management and privilege levels that are computationally intensive on commodity processors.
This leads to performance overheads that are much higher than those of the non-virtualised versions of the OS. The biggest benefit of full virtualisation, however, is that it allows operating systems to run unmodified, albeit at the cost of that significant performance overhead.
Figure 1: Virtualised OS architecture on a multi-core processor
Partial virtualisation, or para-virtualisation, is a technique in which the underlying hardware is not completely simulated in software. This architecture allows commodity operating systems to be easily virtualised on commodity processors, with the caveat that the virtualised operating system requires code modifications to adhere to the partially virtualised architecture. However, the performance of partially virtualised architectures is much better than that of fully virtualised machines, usually within a few percent of the non-virtualised versions.
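The distinction can be sketched in miniature: under full virtualisation, a privileged instruction in an unmodified guest traps to the VMM, which must decode and emulate it; under para-virtualisation, the guest's code is modified to call the VMM directly. The class and method names below are purely illustrative, not any real hypervisor's API.

```python
class ToyVMM:
    """Toy virtual-machine monitor owning the 'real' interrupt-enable flag."""

    def __init__(self):
        self.interrupts_enabled = False

    def trap(self, opcode):
        # Full virtualisation: the privileged instruction faults in the
        # unmodified guest, and the VMM decodes and emulates it (costly).
        if opcode == "STI":  # x86 "set interrupt flag", as an example
            self.interrupts_enabled = True

    def hypercall_enable_interrupts(self):
        # Para-virtualisation: the modified guest calls the VMM directly,
        # skipping the trap/decode/emulate path.
        self.interrupts_enabled = True


vmm = ToyVMM()
vmm.trap("STI")                    # unmodified guest: trap-and-emulate
assert vmm.interrupts_enabled

vmm.interrupts_enabled = False
vmm.hypercall_enable_interrupts()  # modified guest: direct hypercall
assert vmm.interrupts_enabled
```

Both paths end in the same state; the performance gap comes from how much machinery runs between the guest's intent and the VMM's action.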
The other key requirement for running multiple operating systems in the context of a virtual machine is the ability to isolate the physical resources of a computer. This is achieved by time-space partitioning, a concept used extensively in safety-critical and secure systems. In a time-space partitioned system, the virtual machine sub-divides two key computing resources: CPU time and physical memory.
The physical memory is divided into unique, non-overlapping ranges that are assigned to the individual heterogeneous virtualised operating systems. The time scheduler allocates periods of CPU time to each virtualised OS, usually in a fixed, cyclic schedule. This gives the virtualised operating systems the illusion of exclusive access to computing resources. The ability of the virtual machine to support time-space partitioning is a basic prerequisite for the execution of multiple virtualised operating systems on a single machine.
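The two halves of time-space partitioning can be sketched as data plus two checks: non-overlapping memory ranges per partition, and a fixed, repeating allocation of CPU time. Partition names, address ranges and slice lengths below are invented for illustration only.

```python
# Each partition owns a non-overlapping physical-memory range and a
# fixed CPU time slice (all values illustrative, not from any product).
partitions = {
    "guest_linux": {"mem": (0x0000_0000, 0x1FFF_FFFF), "slice_ms": 20},
    "guest_rtos":  {"mem": (0x2000_0000, 0x2FFF_FFFF), "slice_ms": 10},
}

def ranges_disjoint(parts):
    """Verify that no two partitions' memory ranges overlap."""
    spans = sorted(p["mem"] for p in parts.values())
    return all(prev_end < start
               for (_, prev_end), (start, _) in zip(spans, spans[1:]))

def cyclic_schedule(parts, major_frames=1):
    """Yield (partition, slice_ms) in a fixed, repeating order."""
    for _ in range(major_frames):
        for name, cfg in parts.items():
            yield name, cfg["slice_ms"]

assert ranges_disjoint(partitions)
print(list(cyclic_schedule(partitions)))
```

Because the memory map and the schedule are both fixed at configuration time, no partition's behaviour at run time can change what another partition receives — which is exactly the isolation property the article describes.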
Both full and partial virtualisation support 100% binary compatibility with the stand-alone version of the operating system. They also retain the benefits of multiple address spaces within a single operating system instance.
One significant difference between a stand-alone operating system and a virtualised version is that the virtualised OS runs in a less privileged mode (user mode). This is necessary since the virtual machine that provides the virtualised architecture is the sole entity running at the highest privilege level (supervisor mode). Figure 1 above shows the generic architecture supporting multiple heterogeneous operating systems running on a virtual machine.
One of the key benefits of creating a virtualised OS architecture is the addition of security capabilities to embedded design. The time-space partitioning capabilities provided in this architecture form a natural foundation for creating secure embedded applications. The MILS architecture is an approach that evolves naturally from the time-space partitioning paradigm.
The MILS (Multiple Independent Levels of Security/Safety) architecture adopts the best principles of security and safety-critical design to define a hard real-time, secure embedded OS that can be evaluated to the highest levels of security (EAL7) and safety assurance (DO-178B), while preserving the flexibility to support diverse security policies. The architecture identifies four key security policies: Information Flow, Data Isolation, Residual Information Protection and Damage Limitation.
The Information Flow policy states that only authorised subjects can exchange information, using pre-configured communication channels. The Data Isolation policy states that objects can be isolated into separate partitions, such that subjects can only gain access to objects they are authorised to access. The Residual Information Protection policy states that covert channels cannot exist through unintended transfer of residual state information. The Damage Limitation policy states that fault isolation is present and faults in one partition do not propagate to other partitions.
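The Information Flow policy, for instance, reduces to a whitelist check: a transfer is permitted only if its (source, destination) pair was configured before the system started. The sketch below is a toy model of that idea; the partition names are hypothetical and the real enforcement happens inside the separation kernel, not in application code.

```python
# Pre-configured one-way channels between partitions. Anything not
# listed here is denied by default (names are hypothetical).
AUTHORISED_CHANNELS = {
    ("sensor_partition", "crypto_partition"),
    ("crypto_partition", "comms_partition"),
}

def flow_allowed(src, dst):
    """Information Flow policy: only pre-configured channels may be used."""
    return (src, dst) in AUTHORISED_CHANNELS

assert flow_allowed("sensor_partition", "crypto_partition")
# The reverse direction was never configured, so it is denied:
assert not flow_allowed("crypto_partition", "sensor_partition")
assert not flow_allowed("comms_partition", "sensor_partition")
```

Default-deny is the essential property: the policy enumerates what is allowed, and everything else is rejected without needing to be listed.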
The MILS architecture uses a small partitioning kernel (RTOS) that runs in supervisor mode and provides brick-wall partitioning of memory, time and I/O resources. The partitioning kernel provides only the basic functionality needed to support the underlying hardware. Within each partition, the traditional OS functionality executes in user mode, completely isolated from other partitions.
The middleware and applications make up the rest of the components that may execute in a single partition. The MILS architecture is an example of component layering (kernel, middleware and application), and provides a platform for virtualisation of commodity OSes. This architecture provides flexible security capabilities and can be the basis of several secure embedded designs on multi-core processors.
Figure 2: LynxSecure RTOS on a multi-core processor
An example architecture that exemplifies the principles of virtualisation, real-time and security on multi-core processors is the LynxSecure architecture from LynuxWorks (Figure 2, above).
The LynxSecure RTOS combines time-space partitioning and virtualisation to allow multiple, heterogeneous operating systems to execute in a robust, highly secure environment on 64bit, multi-core processors. It allows safety-critical and secure operating systems to function alongside non-secure operating systems without compromising the entire system's security, reliability and data integrity.
This separation kernel is also a virtual machine monitor that is certifiable to Common Criteria EAL7 (Evaluation Assurance Level 7), a level of security certification not attained by any known operating system to date. It is also certifiable to DO-178B Level A, the highest level of FAA certification for mission-critical avionics applications.
It is designed to provide a virtualised hardware interface to allow multiple guest operating systems to run in the context of a single physical machine. To achieve this, the separation kernel creates a virtualisation layer that maps physical system resources to each guest operating system, thereby virtualising operating systems like Linux, Windows and LynxOS-SE to run within ultra-secure partitions.
This virtualisation technique provides superior performance for virtualised operating systems and their applications, while preserving 100% application binary compatibility with their non-virtualised instances.
In addition, it guarantees resource availability, such as memory- and processor-execution resources, to each partition, so that no software can fully exhaust or consume the scheduled memory or time resources of other partitions. There is support for simultaneous use of system interfaces, including multiple instances of the same or different operating systems in different partitions.
Additional capabilities of this architecture include a fixed-cyclic, ARINC 653-based scheduler, which ensures that all partitions are allocated adequate CPU time so that no partition is starved, as well as dynamism in its scheduling policy to allow maximum flexibility.
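An ARINC 653-style fixed-cyclic schedule divides a repeating "major frame" into fixed windows, each owned by one partition; at any instant, the active partition is determined purely by the position within the frame. The frame length, window layout and partition names below are invented for illustration and are not drawn from any real configuration.

```python
# A repeating major frame divided into fixed windows:
# (offset_ms, duration_ms, partition). All values illustrative.
MAJOR_FRAME_MS = 50
WINDOWS = [
    (0,  20, "partition_A"),
    (20, 10, "partition_B"),
    (30, 20, "partition_C"),
]

def active_partition(t_ms):
    """Return the partition scheduled at absolute time t_ms."""
    phase = t_ms % MAJOR_FRAME_MS  # position within the repeating frame
    for offset, duration, name in WINDOWS:
        if offset <= phase < offset + duration:
            return name
    return None  # unreachable if the windows cover the whole frame

assert active_partition(5) == "partition_A"
assert active_partition(25) == "partition_B"
assert active_partition(49) == "partition_C"
assert active_partition(55) == "partition_A"  # 55 ms wraps to 5 ms
```

Because every partition's window recurs once per major frame, each is guaranteed its configured share of CPU time regardless of what the others do, which is how starvation is prevented by construction.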
This example separation kernel provides the essential components for a complete implementation of a scalable, multithreaded and secure architecture through support for symmetric multi-processing (SMP) for optimal resource utilisation and load balancing on multi-core processors. It also provides additional high-end scalability and memory support through 64bit execution mode and addressing capabilities.
As the complexity of embedded applications continues to grow, the need for greater computing power continues to drive advances in processor architecture. The emergence of multi-core processors marks a strategic inflection point in the embedded industry.
The confluence of innovation in operating system design in the areas of virtualisation, real-time and security on these newer processors is enabling new paradigms in embedded application design, the effects of which will propel further advances in application design in the embedded marketplace.
The design of embedded applications is becoming a complex endeavour. The need for advanced operating systems and tools that enable application designers to take advantage of these hardware innovations has never been greater. The technology issues outlined in this article should help embedded designers make appropriate choices for their embedded software needs as the embedded industry moves into the 21st century.
Arun Subbarao is Vice President of Engineering at LynuxWorks, where he is responsible for the development of operating system and tools products, as well as consulting services.