Using simulation tools for embedded systems software development: Part 1

Simulation as a tool has been used for a long time in many areas of science and technology. The use of weather system simulation to predict the weather and mechanical system simulation to predict the behavior of aircraft in flight are taken as given today. In car design, the use of virtual crash testing improves the safety of cars while radically reducing the number of physical prototypes that need to be built and tested.

Simulation is used whenever trying things in the physical world would be inconvenient, expensive, impractical or plain impossible. Simulation allows experimenters to try things with more control over parameters and better insight into the results. It reduces the cost of experiments, and makes it possible to work with systems that do not yet exist in physical form. It cuts lead times and improves product quality. In a sense, we use simulation and virtual systems because reality sucks.

Ironically, while simulation is almost universally implemented as software on computers, the use of simulation to develop computer software itself is still quite rare.

In this three-part series, we will discuss how simulation technology can be used to develop embedded systems, and the software component of these projects in particular.

Part 1 will cover simulation of the world around a computer board, while Part 2 will cover the particulars of simulating a computer board and its software, as well as how to bring all the pieces together to form a complete solution. Finally, Part 3 will delve into some more concrete examples of simulation solutions in actual use and the benefits obtained from them.

Simulating a Computer System
An embedded computer system can be broken down into five main parts:

* The computer board itself: the piece of hardware containing one or more processors, executing the embedded software.

* The software running on this computer board. This includes not just the user applications, but also the boot ROM or BIOS, hardware drivers, operating system, and various libraries and middleware frameworks used to implement the software functions.

* The communications network or networks that the board is connected to, and over which the software communicates with software on other computers.

* The environment in which the computer operates and that it measures using sensors and affects using actuators.

* The user interface exposed to a human user of the system.

Figure 1

For the rest of this series of articles, we will refer to the system being simulated as the target system (following traditional cross-compilation nomenclature), and to the development PC or workstation as the host.

Obviously, not all target systems feature all of the parts listed above, but most feature most of them. A simulation effort for an embedded system can focus on just one of the parts. It is quite common to mix simulated and physical parts, to achieve “partial reality.”

Abstraction vs. Detail
A key insight in building simulations is that you must always make a trade-off between simulator detail and the scope of the simulated system. Looking at some extreme cases, you cannot use the same level of abstraction when simulating the evolution of the universe on a grand scale as when simulating protein folding.

You can always trade execution time for increased detail or scope, but assuming you want a result in a reasonable time, compromises are necessary.

A corollary to the abstraction rule is that simulation is a workload that can always use maximum computer performance (unless it is limited by the speed of interaction from the world or users).

A faster computer or a less detailed model lets you scale up the size of the system simulated or reduce simulation run times. In general, if the processor in your computer is not loaded to 100%, you are not making optimal use of simulation.

The high demand for computer power used to be a limiting factor for the use of simulation, requiring large, expensive, and rare supercomputers.

Today, however, even the cheapest PC has sufficient computation power to perform relevant simulations in reasonable time. Thus, the availability of computer equipment is no longer a problem, and simulation should be considered as a tool for deployment to every engineer in a development project.

Simulating the Environment
Simulation of the physical environment is often done for its own sake, without regard for the eventual use of the simulation model by embedded software developers. It is standard practice in mechanical and electrical engineering to design with computer-aided tools and simulation.

For example, control engineers developing control algorithms for physical systems such as engines or processing plants often build models of the controlled system in tools such as MATLAB/Simulink and LabVIEW.

These models are then combined with a model of the controller under development, and control properties such as stability and performance are evaluated. From a software perspective, this is simulating the specification of the embedded software along with the controlled environment.
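To make the idea concrete, here is a minimal sketch in C of such a closed-loop co-simulation, assuming a hypothetical first-order thermal plant regulated by a proportional controller; the constants, function names, and model are purely illustrative and not taken from any particular tool.

```c
/* Minimal sketch: co-simulating a controller model with a model of
 * the controlled system. Hypothetical first-order thermal plant plus
 * proportional controller, stepped at a fixed rate. */
#include <stdio.h>

#define DT        0.1   /* simulation time step, seconds       */
#define SETPOINT  70.0  /* desired temperature, degrees C      */
#define KP        0.8   /* proportional gain of the controller */

/* Plant model: temperature rises with heater power, leaks to ambient. */
static double plant_step(double temp, double heater_power)
{
    const double ambient = 20.0;
    const double gain    = 0.5;   /* degrees per unit power per second */
    const double leak    = 0.02;  /* fraction lost to ambient per second */
    return temp + gain * heater_power * DT - leak * (temp - ambient) * DT;
}

/* Controller model: the "specification" of the embedded software. */
static double controller_step(double measured_temp)
{
    double power = KP * (SETPOINT - measured_temp);
    if (power < 0.0)  power = 0.0;    /* heater cannot cool   */
    if (power > 10.0) power = 10.0;   /* actuator saturation  */
    return power;
}

int main(void)
{
    double temp = 20.0;
    for (int step = 0; step < 600; step++) {      /* one simulated minute */
        double power = controller_step(temp);     /* control algorithm    */
        temp = plant_step(temp, power);           /* environment model    */
        if (step % 100 == 0)
            printf("t=%5.1fs  temp=%6.2f  power=%5.2f\n",
                   step * DT, temp, power);
    }
    return 0;
}
```

Running such a loop much faster than real time is what lets the control design be evaluated long before any hardware exists.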

For a space probe, the environment simulation could comprise a model of the planets, the sun, and the probe itself. This model can be used to evaluate proposed trajectories, since it is possible to work through missions of years in length in a very short time.

In conjunction with embedded computer simulations, such a simulator would provide data on the probe's attitude and distance to the sun, the amount of power being generated from the solar panels, and the positions of stars seen by the navigation sensors.

When the mechanical component of an embedded system is potentially dangerous or impractical to work with, you absolutely want to simulate the effects of the software before committing to physical hardware. For example, control software for heavy machinery or military vehicles is best tested in simulation.

Also, the number of physical prototypes available is fairly limited in such circumstances, and they are not something every developer will have at their desk. Such models can be created using modeling tools, or written in C or C++ (which is quite popular in practice).

In many cases, environment simulations can be simple data sequences captured from a real sensor or simply guessed by a developer.

It should be noted that a simulated environment can be used for two different purposes. One is to provide “typical” data to the computer system simulation, trying to mimic the behavior of the final physical system under normal operating conditions.

The other is to provide “extreme” data, corresponding to boundary cases in the system behavior, and “faulty” data corresponding to broken sensors or similar cases outside normal operating conditions. The ability to inject extreme and faulty cases is a key benefit from simulation.
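As a rough illustration, the C sketch below shows how a single environment model might serve up typical, extreme, or faulty sensor data on demand; the sensor, the mode names, and every constant are assumptions made for the example.

```c
/* Minimal sketch of an environment model that can feed a simulated
 * computer system "typical", "extreme", or "faulty" sensor data. */
#include <stdio.h>
#include <stdlib.h>

enum env_mode { ENV_TYPICAL, ENV_EXTREME, ENV_FAULTY };

/* Return the next pressure reading, in kPa, for the chosen mode. */
static double next_pressure_sample(enum env_mode mode, int step)
{
    switch (mode) {
    case ENV_TYPICAL:
        /* Normal operation: small noise around a nominal value. */
        return 101.3 + 0.5 * (rand() / (double)RAND_MAX - 0.5);
    case ENV_EXTREME:
        /* Boundary case: ramp toward the sensor's rated maximum. */
        return 101.3 + 2.0 * step;
    case ENV_FAULTY:
    default:
        /* Broken sensor: stuck at zero after a few samples. */
        return (step < 3) ? 101.3 : 0.0;
    }
}

int main(void)
{
    enum env_mode mode = ENV_FAULTY;   /* pick the scenario to run */
    for (int step = 0; step < 10; step++)
        printf("sample %2d: %7.2f kPa\n",
               step, next_pressure_sample(mode, step));
    return 0;
}
```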

Simulating the Human User Interface
The human interface portion of an embedded device is often also simulated during its development. For testing user interface ideas, rapid prototyping and simulation are very worthwhile and can be done in many different ways. One creative example is how the creator of the original Palm Pilot used a wooden block to simulate the effect of carrying the device.

Instead of building complete implementations of the interface of a TV, mobile phone, or plant control computer, mockups are built in specialized user interface (UI) tools, in the Visual Studio GUI builder on a PC, or even in PowerPoint or Flash.

Sometimes such simulations have complex behaviors implemented in various scripts or even simple prototype software stacks. Only when the UI design is stable do you commit to implementing it in real code for your real device, since this typically implies a greater programming effort.

In later phases of development, when the hardware user interface and most of the software user interface are done, a computer simulation of a device needs to provide input and output facilities to make it possible to test software for the device without hardware.

This kind of simulation runs the gamut from simple text consoles showing the output from a serial port to graphical simulations of user interface panels where the user can click on switches, turn knobs, and watch feedback on graphical dials and screens.
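At the simple end of that range, the following C sketch shows the idea of routing a simulated serial port to a text console on the host; uart_tx() is a hypothetical hook that a device model might call for each transmitted byte, not the API of any real simulator.

```c
/* Minimal sketch: mapping a simulated serial port to a host console.
 * A hypothetical UART device model calls uart_tx() whenever the
 * target software writes to the UART data register. */
#include <stdio.h>

/* Called by the (hypothetical) UART model on each transmitted byte. */
static void uart_tx(unsigned char byte)
{
    putchar(byte);    /* show target output in the host console */
    fflush(stdout);   /* keep the console interactive           */
}

int main(void)
{
    /* Stand-in for target software writing to the serial port. */
    const char *msg = "Hello from the simulated target\r\n";
    for (const char *p = msg; *p; p++)
        uart_tx((unsigned char)*p);
    return 0;
}
```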

A typical example is Nokia's Series 60 development kit, which provides a virtual mobile phone with a keypad and small display. Another example is how virtual PC tools like VMware, Parallels, and Virtual PC map the display, keyboard, and mouse of a PC to a target system.

In consumer electronics, PC peripherals are often used to provide live test data approximating that of a real system. For example, a webcam is a good test data generator for a simulated mobile phone containing a camera.

Even if the optics and sensors are different, it still provides something better than static predetermined images. The same goes for sound capture and playback: you want to hear the sound the machine is making, not just watch the waveform on a display.

Simulating the Network
Most embedded computers today are connected to one or more networks. These networks can be internal to a system; for example, in a rack-based system, VME, PCI, PCI Express, RapidIO, Ethernet, I2C, serial lines, and ATM can be used to connect the boards. In cars, CAN, LIN, FlexRay, and MOST buses connect body electronics, telematics, and control systems. Aircraft control systems communicate over special bus systems like MIL-STD-1553, ARINC 429, and AFDX.

Between systems, on their external interfaces, Ethernet running internet standards like UDP and TCP is common. Mobile phones connect to headsets and PCs over Bluetooth, USB, and IR, and to cellular networks using UMTS, CDMA2000, GSM, and other standards.

Telephone systems have traffic flowing over many different protocols and physical standards like SS7, SONET, SDH, and ATM. Smart cards connect to card readers using contact or contactless interfaces. Sensor nodes communicate over standard wireless networks or lower-power, lower-speed interfaces like ZigBee.

Thus, existing in an internal or external network is a reality for most embedded systems. Due to the large scale of a typical network, the network part is almost universally simulated to some extent.

You simply cannot test a phone switch or router inside its real deployment network, so you have to provide some kind of simulation of the external world. You don't want to test mobile phone viruses in the live network, for very practical reasons.

Often, many other nodes on the network are being developed at the same time. Or you might just want to combine point simulations of several networked systems into a single simulated network.

Figure 2

Levels of Simulation
Network simulation can be applied at many levels of the networking stack. The picture in Figure 2 above shows the most common levels at which network simulation is performed:

The most detailed modeling level is the physical signal level. Here, the analog properties of the transmission medium and how signals pass through it are modeled. This makes it possible to simulate radio propagation, echoes, and signal degradation, or the electronic interference caused by signals on a CAN bus. It is quite rarely used in the setting of developing embedded systems software, since it is complex and provides more detail than strictly needed.
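To give a flavor of what such a model computes, here is a small C sketch of one ingredient of radio propagation modeling, free-space path loss over distance; the frequency and distances are arbitrary example values.

```c
/* Minimal sketch of a physical-signal-level calculation: free-space
 * path loss, i.e. how much a radio signal is attenuated with distance. */
#include <stdio.h>
#include <math.h>

/* Free-space path loss in dB for distance d_m (meters) and frequency
 * f_hz (Hz): 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c). */
static double fspl_db(double d_m, double f_hz)
{
    const double c  = 299792458.0;             /* speed of light, m/s */
    const double pi = 3.14159265358979323846;
    return 20.0 * log10(d_m) + 20.0 * log10(f_hz)
         + 20.0 * log10(4.0 * pi / c);
}

int main(void)
{
    double f = 2.4e9;                          /* a 2.4 GHz radio link */
    for (double d = 10.0; d <= 1000.0; d *= 10.0)
        printf("distance %7.1f m: path loss %6.1f dB\n", d, fspl_db(d, f));
    return 0;
}
```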

Bit stream simulation looks at the ones and zeroes transmitted on a bus or other medium. It is possible to detect events like transmission collisions on Ethernet and the resulting back-off, priorities being clocked onto a CAN bus, and signal garbling due to simultaneous transmissions in radio networks. An open example of such a simulator is the VMNet simulator for sensor networks.

Considering the abstraction levels for computer system simulation discussed below, this is at an abstraction level similar to cycle-accurate simulation. Another example is the simulation of the precise clock-by-clock communication between units inside a system-on-a-chip.
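As an illustration of what bit-level modeling captures, the toy C sketch below resolves CAN bus arbitration bit by bit between two hypothetical nodes; it is a simplified view of the bus's wired-AND behavior, not a model from any real simulator.

```c
/* Minimal sketch of a bit-stream level effect: CAN bus arbitration,
 * where the node with the lowest identifier (most dominant bits) wins. */
#include <stdio.h>

#define ID_BITS 11   /* standard CAN identifier length */

/* Clock two identifiers onto the bus bit by bit; return the index of
 * the winning node. A bus bit is dominant (0) if any node drives 0;
 * a node drops out when it sends recessive (1) but sees dominant (0). */
static int arbitrate(const unsigned id[2])
{
    int still_in[2] = { 1, 1 };
    for (int bit = ID_BITS - 1; bit >= 0; bit--) {
        int driven[2];
        for (int n = 0; n < 2; n++)
            driven[n] = still_in[n] ? (int)((id[n] >> bit) & 1u) : 1;
        int bus = driven[0] & driven[1];      /* wired-AND: 0 dominates */
        for (int n = 0; n < 2; n++)
            if (still_in[n] && driven[n] == 1 && bus == 0)
                still_in[n] = 0;              /* lost arbitration */
    }
    return still_in[0] ? 0 : 1;
}

int main(void)
{
    unsigned id[2] = { 0x123, 0x101 };   /* node 1 has the lower ID */
    printf("node %d wins arbitration\n", arbitrate(id));
    return 0;
}
```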

Packet transmission passes entire packets around, where the definition of a packet depends on the network type. In Ethernet, packets can be up to 65kB large, while serial lines usually transmit single bytes in each “packet”. It is the network simulation equivalent of transaction-level modeling, as discussed below for computer systems.

The network simulation has no knowledge of the meaning of the packets. It just passes opaque blobs of bits around. The software on the simulated system interacts with some kind of virtual network interface, programming it just like a real network device.

This level is quite scalable in terms of simulation size, and is also an appropriate level at which to connect real and simulated networks. Common PC virtualization software like VMware operates at this level, as do embedded-systems virtualization tools from Virtutech, Virtio, and VaST. A deeper discussion can be found here.
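The C sketch below suggests what the packet level looks like, assuming a made-up network model that simply hands opaque byte blobs from one simulated node to another; the structure and function names are invented for the example.

```c
/* Minimal sketch of packet-level network simulation: the network model
 * delivers opaque blobs of bytes and has no idea what they mean. */
#include <stdio.h>
#include <string.h>

#define MAX_NODES 4
#define MAX_LEN   1518   /* classic Ethernet frame limit, for illustration */

struct packet {
    int           dst;             /* index of destination node */
    size_t        len;
    unsigned char data[MAX_LEN];   /* opaque payload             */
};

/* Per-node receive hook, standing in for a device model raising an
 * interrupt and handing the frame to the driver. */
static void node_receive(int node, const struct packet *p)
{
    printf("node %d received %zu bytes: %.*s\n",
           node, p->len, (int)p->len, (const char *)p->data);
}

/* "The network": delivers a packet to its destination, unmodified. */
static void network_deliver(const struct packet *p)
{
    if (p->dst >= 0 && p->dst < MAX_NODES)
        node_receive(p->dst, p);
}

int main(void)
{
    struct packet p = { .dst = 2 };
    const char *payload = "opaque bytes, meaningful only to the endpoints";
    p.len = strlen(payload);
    memcpy(p.data, payload, p.len);
    network_deliver(&p);        /* node 0 sends, node 2 receives */
    return 0;
}
```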

Network protocol simulation ignores the actual structure of packets on the network. Here, networks are simulated at the level of network protocols like TCP/IP. The simulated nodes use some socket-style API to send traffic into a simulated network rather than a real network.

Such a simulation becomes independent of the actual medium used, and can scale to very large networks. The network research tool NS2 operates at this level, for example. It is also a natural network model when using API-level simulation of the software, as discussed below.
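The C sketch below suggests what such a socket-style shim might look like; sim_send() and sim_recv() are invented names for illustration, and the one-slot in-memory queue stands in for a full simulated network.

```c
/* Minimal sketch of protocol-level simulation: the application uses a
 * socket-style API, but the calls feed an in-memory simulated network
 * instead of a real one. */
#include <stdio.h>
#include <string.h>

#define QUEUE_LEN 256

static unsigned char queue[QUEUE_LEN];  /* one-slot "network", for brevity */
static size_t        queued;

/* Send to a simulated destination node; addressing is ignored here. */
static int sim_send(int dst_node, const void *buf, size_t len)
{
    if (len > QUEUE_LEN)
        return -1;
    memcpy(queue, buf, len);
    queued = len;
    (void)dst_node;
    return (int)len;
}

/* Receive whatever the simulated network has delivered to us. */
static int sim_recv(void *buf, size_t maxlen)
{
    size_t n = queued < maxlen ? queued : maxlen;
    memcpy(buf, queue, n);
    queued = 0;
    return (int)n;
}

int main(void)
{
    char rx[64];
    sim_send(1, "GET /status", 11);        /* application-level request */
    int n = sim_recv(rx, sizeof rx - 1);
    rx[n] = '\0';
    printf("simulated node received: %s\n", n > 0 ? rx : "(nothing)");
    return 0;
}
```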

Application protocol simulation simulates the behavior of network services and other nodes. Tools at this level simulate both the network protocols used and the application protocols built on top of them. Such simulation tools embody significant knowledge of the function of real-world network nodes or network subsystems. They offer the ability to test individual network nodes in an intelligent interactive environment, a concept often known as rest-of-network simulation. Vector Informatik's CAN tools are a typical example of such tools.

Some high-level simulations of networked systems work at the level of application actions. In this context, we do not care about how network traffic is delivered, just about the activities it results in. This is a common mode when designing systems at the highest level, for example in UML models.

Figure 3

The level of abstraction to choose depends on your requirements, and it is often the case that several types of simulators are combined in a single simulation setup. The picture in Figure 3 above shows a complex setup that forms a superset of most real-world deployments and which includes:

* Some simulated nodes running the real software for the embedded system.

* A rest-of-network simulator providing the illusion of many more nodes on the network.

* A simple traffic generator that just injects packets according to some kind of randomized model (see the sketch after this list).

* An instrumentation module that peeks at traffic without being visible on the network, showing the advantage of simulation for inspection.

* A connection to the real-world network, on which some real systems are found, communicating with the simulated systems.

* Real-world network test machines, the type of specialized equipment used today to test physical network nodes. Thanks to a real-network bridge, they can also be used with simulated systems.
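As promised above, here is a rough C sketch of the simple traffic generator from the list, assuming an invented inject_packet() entry point into the simulated network; the length and timing distributions are arbitrary.

```c
/* Minimal sketch of a randomized traffic generator: it injects packets
 * with random lengths at random intervals, with no notion of meaning. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Stand-in for handing a packet to the simulated network. */
static void inject_packet(double sim_time, size_t len)
{
    printf("t=%8.3fs  inject packet of %zu bytes\n", sim_time, len);
}

int main(void)
{
    srand((unsigned)time(NULL));
    double sim_time = 0.0;
    for (int i = 0; i < 10; i++) {
        sim_time += (rand() % 1000) / 1000.0;             /* gap of 0-1 s   */
        size_t len = 64 + (size_t)(rand() % (1518 - 64)); /* random length  */
        inject_packet(sim_time, len);
    }
    return 0;
}
```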

To read Part 2, go to Bringing all the pieces together.
To read Part 3, go to Real-world examples of software simulation.

Jakob Engblom works as a business development manager at Virtutech. He has a PhD in Computer Systems from Uppsala University, and has worked with programming tools and simulation tools for embedded and real-time systems since 1997. You can contact him at jakob@virtutech.com. For other publications by Jakob Engblom, see www.engbloms.se/jakob.html.
