In Part 1 of this series, we covered how to simulate the environment, user interface, and network environment of an embedded system. In this second part, we will add the final piece of the puzzle, the computer board, and discuss how all the pieces can be brought together.
Simulating the Computer System and its Software
Now we get to the core of the system: the board containing one or more processors, and the software running on that board. Since the computer board hardware and its software are so closely related, we consider the simulation of them together.
The central goal is to use a PC or workstation to develop and test software for the target embedded system, without resorting to using physical hardware. Figure 4, below, illustrates the most common levels of simulation from a software perspective.
|Figure 4: Common levels of simulation from a software perspective|
Testing against the API
It sometimes makes sense to program and test embedded software against an API also found on the workstation. Java programs and programs using some specific database API or a standard middleware like CORBA can sometimes be tested in this manner.
The assumption is that the behavior of these high-level APIs will be sufficiently similar on the host and target systems. This is not necessarily true; for example, the behavior of a Java machine on a mobile phone is not likely to be the same as on a desktop, due to memory and processing power constraints, and different input and output devices.
It is also possible to simulate some aspects of user programs with no model of the target system at all. For example, control models as discussed in Part 1 are interesting to study regardless of the precise details of the target system software stack.
Using UML and state charts to model pieces of software makes it possible to simulate the software on its own, without considering much of the target at all. Minimal assumptions are made about the properties of the underlying real-time operating system and target in these models.
A common and popular level of simulation is API-level simulation of a target operating system. This is also known as host-compiled simulation, since the embedded code is compiled to run on the host and not on the target system.
This type of simulation is popular since it is conceptually simple and easy to understand, and is something that an embedded developer can basically create on his or her own in incremental steps. It also allows developers to use popular programming and debug tools available in a desktop environment, like Visual Studio, Purify, and Valgrind.
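To make this concrete, here is a minimal sketch of the idea, assuming a hypothetical RTOS-style message-queue API (the `rtos_q_*` names are invented for illustration). On the real target these calls would be RTOS services; in a host-compiled build the same interface is implemented as ordinary C code so the application compiles and runs on the workstation.

```c
#include <stddef.h>
#include <string.h>

#define RTOS_Q_DEPTH 8

/* Hypothetical RTOS-style message queue, as the embedded code sees it. */
typedef struct {
    int    buf[RTOS_Q_DEPTH];
    size_t head, tail, count;
} rtos_queue_t;

/* Host-compiled implementations: plain C stand-ins for RTOS services. */
static void rtos_q_init(rtos_queue_t *q) { memset(q, 0, sizeof *q); }

static int rtos_q_send(rtos_queue_t *q, int msg)
{
    if (q->count == RTOS_Q_DEPTH) return -1;   /* queue full */
    q->buf[q->tail] = msg;
    q->tail = (q->tail + 1) % RTOS_Q_DEPTH;
    q->count++;
    return 0;
}

static int rtos_q_recv(rtos_queue_t *q, int *msg)
{
    if (q->count == 0) return -1;              /* queue empty */
    *msg = q->buf[q->head];
    q->head = (q->head + 1) % RTOS_Q_DEPTH;
    q->count--;
    return 0;
}
```

Application code written against this interface can then be built, run, and debugged with the desktop tools mentioned above, with the host stub swapped for the real RTOS at target-build time.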
It also has several drawbacks. First, it often ends up being quite expensive to maintain the API-level simulation and the separate compilation setup it requires.
Second, the behavior of the simulated system will differ in subtle but important details from the real thing. Details like the precise scheduling of processes, size of memory, behavior of memory protection, availability of global variables, and target compiler peculiarities can make code that runs just fine in simulation break on the real target.
Third, it is impossible to make use of code only available as a target binary. Fourth, complex actions involving the hardware and low-level software, such as rebooting the system, are very hard to represent.
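The second drawback, subtle host/target differences, is easy to illustrate with byte order. A sketch (hypothetical protocol header, invented function names): code that reads a big-endian wire field through `memcpy` happens to produce the right answer only when host byte order matches the wire format, so it can pass every host-compiled test and still fail on a target with the other endianness.

```c
#include <stdint.h>
#include <string.h>

/* Parse a 16-bit big-endian length field from a protocol header. */

/* Naive version: reinterprets the bytes in HOST byte order, so the
 * result silently depends on the endianness of the machine it runs on. */
static uint16_t len_naive(const uint8_t *hdr)
{
    uint16_t v;
    memcpy(&v, hdr, sizeof v);   /* host byte order: not portable */
    return v;
}

/* Portable version: composes the big-endian value explicitly,
 * giving the same result on any host or target. */
static uint16_t len_portable(const uint8_t *hdr)
{
    return (uint16_t)((hdr[0] << 8) | hdr[1]);
}
```

Only a simulation that reproduces the target's endianness (or testing on real hardware) catches the naive version; a host-compiled build on a matching-endian workstation will not.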
Most embedded operating-system vendors offer some form of host-compiled simulation. For example, Wind River's VxSim and Enea's OSE Soft Kernel fit in this category. There are also many in-house implementations of API-level simulations, both for in-house and externally sourced operating systems.
The experience with such tools ranges from the very successful, where minimal debugging needs to be done after the switch-over to the real target, to utter failures where some approximation in the API-level simulation did not hold, and the code had to be extensively debugged on the physical target.
To resolve some of the issues with API-level simulation, the hardware-independent part of the embedded operating system is sometimes used together with a simulation of the hardware-dependent parts.
This is basically para-virtualization, where the device drivers and hardware-abstraction layer of an operating system are replaced with simplified code that interacts with the host operating system rather than actual hardware.
A paravirtual solution provides better insight into the behavior of the operating system kernel and services, but is still compiled to run on the host PC. It also requires access to the operating-system source code, and a programming effort corresponding to creating a new board-support package (BSP).
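The BSP-like effort involved can be sketched as follows (a simplified illustration with invented names, not any particular vendor's interface): the kernel is compiled against a fixed hardware-abstraction interface, and the paravirtual build supplies an implementation of that interface backed by the host OS instead of by memory-mapped hardware.

```c
#include <stdio.h>

/* Hardware-abstraction interface the (hypothetical) kernel is compiled
 * against -- identical in the target build and the paravirtual build. */
typedef struct {
    void (*putc)(char c);             /* console output      */
    unsigned long (*ticks)(void);     /* timer tick counter  */
} hal_ops_t;

/* Paravirtual implementations: backed by the host OS, not hardware.
 * The target build would instead poke memory-mapped device registers. */
static void host_putc(char c) { fputc(c, stdout); }

static unsigned long host_tick_count = 0;
static unsigned long host_ticks(void) { return host_tick_count++; }

static const hal_ops_t host_hal = { host_putc, host_ticks };

/* Kernel-side code uses only the interface, never the implementation. */
static void kputs(const hal_ops_t *hal, const char *s)
{
    while (*s) hal->putc(*s++);
    hal->putc('\n');
}
```

Everything above the `hal_ops_t` line is shared unchanged between the two builds, which is why this approach gives better kernel-level fidelity than a pure API-level simulation.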
Attacking the hardware/software interface
Finally, the problem can be attacked at the hardware/software interface level. Here, a virtual target system is created which runs the same binary stack (from boot firmware to device drivers and operating system) as the physical target system.
This is achieved by simulating the instruction set of the actual target processor, along with the programming interface of the peripheral devices in the system and their behavior. The technical term for this is full-system simulation, since you are indeed simulating the full system and not just the processor.
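Simulating a device's programming interface boils down to routing memory accesses from the simulated processor into a device model. A toy sketch (the register layout and names are invented): a device with a read-only identification register and a writable scratch register, with the functions a CPU simulator would call for loads and stores that hit the device's address range.

```c
#include <stdint.h>

/* Toy device model: an ID register at offset 0x0 and a scratch
 * register at offset 0x4, as the simulated CPU sees them. */
#define DEV_BASE 0x40000000u
#define DEV_ID   0xC0FFEEu

static uint32_t scratch_reg;

/* Called by the CPU simulator for loads into the device range. */
static uint32_t dev_read(uint32_t addr)
{
    switch (addr - DEV_BASE) {
    case 0x0: return DEV_ID;       /* identification register */
    case 0x4: return scratch_reg;  /* scratch register        */
    default:  return 0;            /* unmapped offsets read as zero */
    }
}

/* Called by the CPU simulator for stores into the device range. */
static void dev_write(uint32_t addr, uint32_t val)
{
    if (addr - DEV_BASE == 0x4)
        scratch_reg = val;         /* only the scratch register is writable */
}
```

Because the device driver in the target software reads and writes exactly these addresses, the same unmodified driver binary runs against the model and against real hardware.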
The crucial and defining property of this type of simulation is that all software is the same as on the physical system, using the same device drivers and BSP code. It is common practice to develop the device drivers and BSP ports using virtual target systems in cases where the physical hardware is not yet available.
We also avoid the need to maintain a separate build setup and build chain for simulation, since it is the same as used for the physical target. Note that with virtual target systems, target properties like memory-management units, supervisor and hypervisor modes, endianness, word size, and floating-point precision are simulated and visible.
The main drawback of virtual targets is the initial work involved in creating the hardware model. However, practical experience shows that most projects can quite quickly recoup the initial investment. There are tools on the market that help make it easier, less expensive, and more effective to deploy virtual models.
Vendors like ARM, CoWare, Synopsys/Virtio, VaST, and Virtutech offer tools to create virtual systems appropriate for software development. For those who like open source, the QEMU project offers a range of targets from which to start developing.
The role of Electronic System Level design
Full-system simulation is a complex field in itself, with multiple levels of abstraction possible. At one end, there are tools coming from the EDA side aiming to provide some support for early software development for new chips.
This is often called ESL (Electronic System Level) design. Unfortunately, the simulation models are often quite slow since they attempt to model the precise cycle-by-cycle execution of the hardware. Execution speeds in the range of a few million instructions per second (MIPS) are the best you can hope for.
For large-scale software development and test, less detail and more speed is needed. To this end, transaction-level modeling and instruction-accurate processor simulators are used.
Such simulation systems provide a complete and correct implementation of the programming view of a system, but approximate the timing. The fastest systems use either just-in-time compilation from target to host, or hardware virtualization if the target and host are of the same architecture.
With such techniques, software execution speeds from a few hundred MIPS up to several thousand MIPS are possible, basically matching the speed of the physical hardware. This enables virtualized software development, a software development methodology where most of the software work is done on the simulator rather than on physical hardware.
Where do instruction-set simulators fit?
Standalone instruction-set simulators (ISS) are common and established in embedded compiler and debug toolsets. Such simulators usually simulate only the user-level instruction set of a processor, and let simple programs that do no real I/O run on the host.
Modeling of peripheral devices is typically limited to providing sequences of bytes to be read from certain memory locations. Operating systems cannot be run on such simulators, since the more complex devices they need, like timers and serial ports, are missing.
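At its core, a user-level ISS of this kind is a fetch-decode-execute loop over the instruction set, with no device models at all. A deliberately tiny sketch for an invented accumulator machine (real ISSs decode a full ISA, but the loop structure is the same):

```c
#include <stdint.h>
#include <stddef.h>

/* Toy accumulator ISA: load immediate, add immediate, halt. */
enum { OP_LOADI, OP_ADDI, OP_HALT };

typedef struct {
    uint8_t op;    /* opcode            */
    int32_t arg;   /* immediate operand */
} insn_t;

/* The classic ISS core: fetch, decode, execute, until halt. */
static int32_t iss_run(const insn_t *prog, size_t len)
{
    int32_t acc = 0;
    for (size_t pc = 0; pc < len; pc++) {          /* fetch  */
        switch (prog[pc].op) {                     /* decode */
        case OP_LOADI: acc  = prog[pc].arg; break; /* execute */
        case OP_ADDI:  acc += prog[pc].arg; break;
        case OP_HALT:  return acc;
        }
    }
    return acc;
}
```

Everything that separates this from a full-system simulator, timers, serial ports, MMU, interrupts, is exactly the machinery such a basic ISS leaves out, which is why it can run algorithms but not an operating system.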
The simulators are also typically quite slow, since there is little value gained from making them faster. IDEs from vendors like IAR and Green Hills are typical examples of development tools that include a basic ISS.
As a final note, as embedded systems start using multiple processors and multicore processors, virtualized software development is one of the best ways to get a grip on the complex debugging and diagnostics issues arising from the concurrent execution of multiple threads of control on multiple processors. Some recent articles on the topic include “Simulating and debugging multicore behavior,” “Debugging real-time multiprocessor systems,” and “Dearth of tools could stall multicore onslaught.”
Putting the Pieces Together
So now we know how to simulate an embedded computer and run its software, and how to simulate the environment, user interface, and network in which it operates.
What remains is to tie the various pieces together so that the embedded software can indeed run on the virtual computer board, sense and control the virtual environment, obtain virtual user input, and communicate on the virtual network.
To achieve this, we have provided devices on the virtual computer board corresponding to the interfaces of the physical computer hardware. For the connection to the environment, this typically means modeling the processor-facing programming interface of the analog-to-digital and digital-to-analog converters and digital I/O lines.
The interface models also have an environment-facing side which is connected to the simulation of the environment, in order to read reasonable values and provide actuation data.
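The two-sided nature of such an interface model can be sketched like this (all names and the 12-bit register layout are invented for illustration): the environment-facing side is a sampling hook into the environment simulation, and the processor-facing side is the ADC data register the target software reads.

```c
#include <stdint.h>

/* Environment-facing side: the environment simulation supplies the
 * physical quantity -- here a voltage, stored by the environment model. */
static double env_voltage;                       /* volts */
static double env_sample(void) { return env_voltage; }

typedef double (*env_sample_fn)(void);

/* Processor-facing side: reading the 12-bit ADC data register samples
 * the environment and converts the voltage to the raw value the
 * target software expects. */
static uint16_t adc_read_data_reg(env_sample_fn sample, double vref)
{
    double v = sample();             /* ask the environment model */
    if (v < 0.0)  v = 0.0;           /* clamp to the ADC input range */
    if (v > vref) v = vref;
    return (uint16_t)((v / vref) * 4095.0 + 0.5);  /* 12-bit result */
}
```

The target's driver code sees only the register value, exactly as it would on hardware, while the environment simulation is free to compute the voltage from a physics model, a recorded trace, or a user-controlled knob.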
In the same way, networks are connected to the virtual computer boards using a model of some network interface device. The network simulation can connect the virtual system to other virtual systems, to a rest-of-network model, or to some other simulated or real system as discussed in Part 1.
User interfaces will be represented in the virtual system by a model of the programming interface of a user-interface device like a serial port, LCD controller, keypad interface, LED, or other device. That device model will in turn connect to a user-interface simulator so that the programmer can interact with the system.
Figure 5, below, illustrates a typical setup with a serial port, network, and environment connected to a virtual system simulated in the full-system style.
|Figure 5: Simulation Environment|
It has to be noted that you cannot do all the work in simulation. However, practical experience indicates that you can resolve at least 80 percent, and usually 90 to 95 percent, of your software problems in the virtual world.
Even for real-time systems with hard deadlines, most programming problems are functional in nature and amenable to virtualized software development solutions.
Some published data from high-end server development indicates that several months of debug time on hardware can be saved, compared to starting the debug process only once hardware arrives. In the next part of this series I will examine some simulation solutions and their benefits.
In the end, there is no getting around the fact that you have to test things on the physical hardware eventually, since that is the actual product that will be shipped to customers. The credo of “test what you fly and fly what you test” has to be followed to create a working system. Simulation helps you test what you fly, but it does not really let you fly what you test. A PC would look fairly silly in space.
However, we like to avoid a big-bang approach to hardware and software integration. The more gradual the move to physical hardware, the more effective your overall system and software development approach will be. This realization has led to the development of solutions that allow for the combination of simulated and physical system components.
Control software can be run on a PC in simulation while sending out control signals to the physical system. This requires that the PC has appropriate I/O abilities, usually realized using add-in cards. This creates “hardware-in-the-loop” simulators where the validity of software is tested against physical hardware.
In distributed systems containing many nodes connected on a network, a common approach is to add a new node as a computer simulation to a real test network. This simulation can be run on a PC or on special-purpose simulation hardware such as the dSpace Autobox (special hardware provides better real-time response times).
Inverting this scenario, the embedded software can be run on a real computer while the surrounding environment remains simulated. This allows real-time properties and the correct function of the software-computer combination to be tested, while an environment that cannot be replicated in the lab (such as deep space) is provided by simulation.
For networked systems, you can connect a physical network node to a simulation of the rest of the network. The simulations can be run on PCs, but often they require special simulation computer systems providing the compute power needed to respond in real time.
Examples of such systems are the NetHawk and PolyStar network simulators, which use racks of compute and interface boards to provide sufficient computing and I/O ability to feed a real-world telecom node with data at real-time rates.
For machines with complex hands-on user interfaces including steering wheels, multi-dimensional joysticks, buttons, dials, and embedded displays, it often makes sense to simulate the environment and computer system together with a physical expression of the controls.
An extreme example of this is the flight simulators used for training pilots, which create a fairly good impression of real flying. Such control panels can be connected over computer buses like CAN or MIL-STD-1553, or by direct digital and analog connections.
Mixed physical and simulated systems are also very useful when updating an existing system. They make it possible to take an existing physical system, and then add some new network nodes or replace some computer unit in order to test new functionality in the real system context.
In the next and final installment of this series, we will discuss some concrete examples of how simulation has been used to improve embedded software development.
To read Part 1 in this series, go to “Simulating the world.”
Next in Part 3, Concrete examples of simulation solutions.
Jakob Engblom works as a business development manager at Virtutech. He has a PhD in Computer Systems from Uppsala University, and has worked with programming tools and simulation tools for embedded and real-time systems since 1997. You can contact him at firstname.lastname@example.org. For other publications by Jakob Engblom, see www.engbloms.se/jakob.html.