Simulating Embedded Systems for Software Development: Part 3


In this final part of our three-part series on simulating embedded systems, we will cover some examples from the real world, based on our experience with Virtutech Simics in a number of different projects and markets.

Clearly Cutting Time-to-Market: Servers
The development of server computers might seem a strange place to start a list of embedded application examples, but it provides many valuable insights into how software-intensive systems can be developed.

While in the embedded world software has only recently come to dominate development costs and customers' perception of value, this has long been true in the server market.

Also, the difference between firmware on an embedded system and firmware in a server is quite minimal in practice. It is all about initializing hardware, power-on self-test, and loading code onto a number of main processors to bootstrap the system.

Simics has been used to simulate new server generations for low-level software development and operating-system porting. A key goal was to reduce the time from first hardware availability to first successful power-on and boot. This time has traditionally been on the order of many months or even years, time spent ironing out hardware-software interface misunderstandings, as well as simple bugs in hardware and software.

With simulation, this time has been reduced by three to nine months, depending on the project. The key benefit brought by simulation is the ability to start testing low-level software and the hardware-software interface early on, using a virtual target built from the specification of the hardware design, rather than the physical hardware. The net effect is illustrated in Figure 6, below:


Simulation enables software and hardware development to proceed in parallel (which is a very common motivation for simulation solutions and virtualized software development in general). The simulator also makes it possible to decouple software and hardware development schedules, since the software does not need to wait for working physical hardware to be delivered. The simulator is a simpler deliverable.

What is also interesting is that we have seen actual software development time get shorter. The ability of the virtual environment to reproduce bugs and provide a more convenient debug setup saves development time.

Mundane tasks like downloading a new version of the boot code into target flash are much faster on a virtual target; it is just a fast copy from disk into the simulator. Less time is spent figuring out whether a bug is due to flaky hardware or software problems.

In cases like this, where the hardware is being revised as the software is being developed, the simulator is also developed and delivered in incremental drops to the software team. Even a limited initial virtual target can be used for initial software work.

As time goes on, more parts of the target system are added to the simulation. Changes in the hardware design and specification are communicated in an executable form by updates to the simulator. As a side effect, this improves communication between the hardware and software teams.

Technically, this was done using virtualized software development, targeting 64-bit RISC processors and the devices needed to build a server. Since the code depends on hypervisor and supervisor modes in the processors, and device drivers have to cope with endianness issues introduced by PCI and other interconnects, any other approach to simulation would be limited in value.

Maximum execution speed was essential, since booting an industrial-strength operating system like Solaris, AIX, Linux, or Windows is a matter of many billions of instructions, even for a single-processor version. Simulator configurations range from a single backplane control processor to hundreds of main processors and many gigabytes of simulated memory.

Looking at our overview of simulation from the beginning of this series, the system is a fairly self-contained computer. There is no actual controlled environment, and the user interface is limited to serial consoles or telnet sessions for most work. The main stimulus comes from the software loaded into memory and on disks, and from the system configuration, rather than from external sources.

Networks of Networks: Telecom Systems
Telecom nodes in the core cellular and fixed telephony networks of the world are among the most complex embedded systems in existence. A typical system is rack-based, and the basic building block is the board.

Each board contains one or more processors, usually some application-specific hardware, and a software stack based on a real-time operating system. Several boards are combined into a rack, and several racks form a single system.

Each rack has its own backplane network, and extension boards connect each rack to the other racks in the system. The backplane network is usually Ethernet, ATM, or PCI/PCI Express. Since these systems are mission-critical, all boards and all networks are at least dual-redundant. We thus see a hierarchical system containing a large number of networks, with each level of hierarchy actually being a pair of redundant networks.
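The hierarchy described above can be sketched in a short configuration script. This is a purely illustrative Python sketch, not actual Simics configuration code; the class names (`Board`, `Rack`, `System`) and network naming scheme are invented for the example.

```python
# Illustrative sketch of a hierarchical, dual-redundant telecom system.
# All class and attribute names are hypothetical, not a real simulator API.

class Board:
    def __init__(self, name):
        self.name = name

class Rack:
    def __init__(self, name, num_boards):
        self.name = name
        self.boards = [Board(f"{name}-board{i}") for i in range(num_boards)]
        # Each rack has a pair of redundant backplane networks.
        self.backplanes = [f"{name}-backplane-{side}" for side in ("A", "B")]

class System:
    def __init__(self, num_racks, boards_per_rack):
        self.racks = [Rack(f"rack{r}", boards_per_rack)
                      for r in range(num_racks)]
        # The inter-rack links are also dual-redundant.
        self.interconnect = ["inter-rack-A", "inter-rack-B"]

    def networks(self):
        """All redundant networks in the system, across both hierarchy levels."""
        nets = list(self.interconnect)
        for rack in self.racks:
            nets.extend(rack.backplanes)
        return nets

system = System(num_racks=2, boards_per_rack=8)
print(len(system.networks()))  # 2 inter-rack + 2 per rack = 6 networks
```

Even this toy sketch shows why configuration scripting beats cabling physical hardware: changing the number of racks or boards is a one-line edit.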

Simics has been used to virtualize several such systems, and they are, to our knowledge, the most complex embedded systems ever simulated. Virtual systems containing tens of processors are used daily, and configurations containing hundreds of target processors are not uncommon.

Figure 7, below, shows the virtual system setup for a generic telecom system containing two racks (we leave out the dual redundancy of the networks to make the picture easier to read):


To increase simulation speed, not all boards are fully virtualized, running the complete software stack. Some are stubs, where the boards are simulated only at their interface to the backplane network.

The choice of full virtualization versus stub simulation depends on the usage scenarios for the virtual system. For example, digital signal processing boards need only provide some simple signals to the control software for most test cases. This makes it possible to collapse a large farm of powerful DSPs into a single very small and fast model.

In other cases, the DSP boards are fully virtualized to provide a test bed for the actual DSP software and its interaction with the control software component. The extension boards and backplane switch boards are also usually stubbed, since their functionality is really subsumed by a typical packet-level network simulation.
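The stub idea can be made concrete with a small sketch: a stub board implements only the backplane-facing protocol and returns canned replies, instead of running the real DSP software stack. The message format and class name here are assumptions for illustration, not part of any real simulator.

```python
# Sketch of a stub DSP board: it answers backplane messages with
# plausible canned results instead of doing any signal processing.
# Message fields and names are illustrative assumptions.

class StubDSPBoard:
    """Simulated only at its interface to the backplane network."""

    def handle_packet(self, packet):
        if packet.get("type") == "process-request":
            # Reply as if the DSP farm had processed the job.
            return {"type": "process-reply",
                    "job": packet["job"],
                    "status": "ok"}
        # Anything outside the modeled interface is rejected.
        return {"type": "error",
                "job": packet.get("job"),
                "status": "unsupported"}

stub = StubDSPBoard()
reply = stub.handle_packet({"type": "process-request", "job": 42})
print(reply["status"])  # ok
```

A fully virtualized board would replace `handle_packet` with real processors executing the real DSP binaries; the control software on the other boards cannot tell the difference for most test cases.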

In order to provide data to process, the virtual system also has network connections to external data sources. Usually, virtual nodes are connected to physical test systems just like physical nodes, and these physical test systems provide both rest-of-network and packet-generation-style network data.

The “environment” for such systems is thus the test equipment providing rest-of-network simulation of the phone network. An interesting aspect here is the timing between the virtual and physical systems. As the virtualized system provides features like total system pause and also exhibits a load-dependent execution speed, it will run at a varying pace compared to a physical counterpart.

Sometimes it will be slower, and sometimes faster (since the simulator efficiently compresses idle time, it is easy to run many times faster than the real world for a lightly loaded system). This means that the physical test system has to either listen to the time exported from the virtual system, or allow some slack in deadlines to avoid false negatives in tests.
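The slack option can be sketched in a few lines: the physical tester accepts responses within a multiple of the nominal deadline, so a virtual node running slower than real time does not fail tests spuriously. The function name and the slack factor are illustrative assumptions.

```python
# Sketch of deadline slack for tests against a virtual node whose pace
# varies relative to wall-clock time. Names and factor are assumptions.

def within_deadline(response_time_s, deadline_s, slack_factor=2.0):
    """Accept responses up to slack_factor times the nominal deadline,
    so a slow-running simulation does not cause false test failures."""
    return response_time_s <= deadline_s * slack_factor

print(within_deadline(1.5, 1.0))  # True: within the 2x slack
print(within_deadline(2.5, 1.0))  # False: too slow even with slack
```

The alternative, having the tester follow time exported from the virtual system, keeps deadlines exact but requires the test equipment to support an external time base.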

The user interface of the system is just serial consoles. Theoretically, since each board provides two or more serial connections, there can be quite a few available at any one time. In a real system, most of these are not really physically accessible, but in the simulation, it is conveniently possible to talk to any serial console on any board.

Other benefits generated by the simulator are reduced setup and configuration times (writing a script is much faster than connecting physical boards and cables), and superior insight into how the system works. In the virtual environment, it is possible to inspect the complete state of all boards in the system at a single synchronized instant in time.

Simulating telecom systems has proven that virtualized software development technology can handle even the largest and most complex embedded computer systems.

One type of such system that we model in Simics contains models of more than ten different types of PowerPC, PowerQUICC, and DSP processors, and more than thirty different types of boards. Some of the devices have memory maps containing thousands of registers.

There are standard ASSPs like Freescale PowerQUICC II and III chips, custom ASICs, discrete Ethernet controllers, and FPGAs. Operating systems like Linux, VxWorks, OSE, and several in-house operating systems have all been run on the virtual hardware, including mixed simulations with different operating systems running on different boards.

Military Systems: Focus on Mechanics
A very different example from the above two comes from the field of military systems. In this case, the system developers already had a large model of the mechanics of the target system in place, developed in MATLAB/Simulink and other tools. Some software was being developed using this simulator, by interfacing code directly to the mechanical system model (which is a standard feature of MATLAB).

No real operating system or other component of the software stack was involved, just the user code algorithms. This limited the validation value of the code tests, since the code was not compiled with the right compilers or run on the actual type of processor found in the target system.

In order to take testing in simulation further and reduce the dependency on physical development boards and physical prototypes, full virtual nodes were integrated into the existing simulation. This was achieved using a special-purpose simulation middleware package that eased the integration of simulators of various makes and styles. Using such a middleware has some impact on performance and simulation system complexity, but it was already in place as the integration hub of the system.

The middleware made it possible to run various parts of the simulation on separate host computers to increase overall simulation speed. Note that the simulation speed-up from such a distributed simulation solution is limited by the amount of synchronization necessary to maintain a coherent view of the simulated system state.
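The synchronization limit follows the familiar Amdahl's-law pattern: the fraction of work spent in serialized synchronization caps the speedup no matter how many hosts are added. The sketch below is a back-of-the-envelope estimate; the synchronization fractions used are illustrative, not measurements from any real project.

```python
# Amdahl-style estimate of distributed simulation speedup. The
# sync_fraction values below are illustrative assumptions only.

def distributed_speedup(hosts, sync_fraction):
    """sync_fraction of the work is serialized synchronization
    between hosts; the remaining work parallelizes across hosts."""
    return 1.0 / (sync_fraction + (1.0 - sync_fraction) / hosts)

# With 10% of time in synchronization, adding hosts helps less and less:
print(round(distributed_speedup(hosts=4, sync_fraction=0.1), 2))   # 3.08
print(round(distributed_speedup(hosts=16, sync_fraction=0.1), 2))  # 6.4
```

This is why loosely coupled parts of a system (separate boards talking over a packet network) distribute well, while tightly coupled parts (processors sharing memory) do not.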

Inside the virtual target computer boards, device models for analog/digital converters connect to the middleware to obtain environment data and provide actuator data. Fully virtual target machines are used to test the actual code being used in the physical target. Among other uses, this makes it possible to collect code coverage statistics without using instrumented code. The simulated solution provides the insight needed to collect traces of execution, without any change to the target code.
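The A/D converter coupling can be sketched as follows: when the target software reads the converter's data register, the device model fetches the current value from the environment model and digitizes it. The register layout, class names, and 12-bit resolution are all hypothetical, chosen only to illustrate the pattern.

```python
# Sketch of a virtual A/D converter device model that samples an
# environment model (standing in for the middleware link to the
# mechanical simulation) on each data-register read.
# Register layout and names are hypothetical.

class EnvironmentModel:
    """Stand-in for the middleware connection to the mechanics model."""
    def sample_sensor(self, channel):
        return 0.42  # e.g., a normalized sensor voltage in [0, 1]

class ADCModel:
    DATA_REG = 0x00  # offset of the (hypothetical) data register

    def __init__(self, env, channel=0, bits=12):
        self.env = env
        self.channel = channel
        self.max_code = (1 << bits) - 1

    def read_register(self, offset):
        if offset == self.DATA_REG:
            # Digitize the analog environment value on demand.
            value = self.env.sample_sensor(self.channel)
            return int(value * self.max_code)
        return 0  # unmodeled registers read as zero

adc = ADCModel(EnvironmentModel())
print(adc.read_register(ADCModel.DATA_REG))  # 1719 for 0.42 on 12 bits
```

The key point is that the target software performs an ordinary register read; it cannot tell that the value came from a mechanical model rather than a physical sensor.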

The most notable features of this solution are the varying level of simulation of the compute nodes, and the large investment already in place in the mechanical model.

The user interface and networking aspects are basically nonexistent, since the computer boards are used to control the mechanical system and not to interact with users. Simulation was an established methodology in the project, and the simulation system was extended to also cover the computer part of the system.

Custom and Standard Computer Parts
It is worth noting that simulation of the computer part of an embedded system is useful regardless of the degree of custom hardware used. While much tool attention is focused on designers and users of custom system-on-chip devices, simulation solutions and virtual hardware platforms are just as useful for systems built using standard parts.

The value of a virtual computer board and simulated system to the software developer does not really depend on whether the processor chip is designed in-house or not; it comes from the use of simulation and virtual systems as a methodology to make software development faster and better.

Even when hardware is readily available, or even in a legacy state, simulation solutions bring benefits for debugging and testing that physical hardware cannot match.

Simulation is a very powerful technique for engineering embedded systems in general, and developing the software component in particular. Depending on where the risk and expense lies in system development, it typically makes sense to build simulators for one or more system parts.

Experience indicates that using simulation makes it possible to develop more robust embedded systems faster, letting hardware and software development schedules overlap and providing better debug and testing facilities for software.

There are many robust software packages available on the market today that help you construct system simulations, and with powerful PCs available very cheaply, there is no good reason not to investigate simulation for your current or next project.

To read Part 1 in this series, go to “Simulating the World.”
To read Part 2 in this series, go to “Bringing the pieces together.”

Jakob Engblom works as a business development manager at Virtutech. He has a PhD in Computer Systems from Uppsala University, and has worked with programming tools and simulation tools for embedded and real-time systems since 1997.
