Simulating Embedded Systems for Software Development: Part 2

In Part 1 of this series, we covered how to simulate the environment, user interface, and network of an embedded system. In this second part, we add the final piece of the puzzle, the computer board, and discuss how all the pieces can be brought together.
Simulating the Computer System and Its Software
Now we get to the core of the system: the board containing one or more processors, and the software running on that board. Since the computer board hardware and its software are so closely related, we consider the simulation of them together.
The central goal is to use a PC or workstation to develop and test software for the target embedded system, without resorting to using physical hardware. Figure 4, below, illustrates the most common levels of simulation from a software perspective.
|Figure 4: Common levels of simulation from a software perspective|
Testing against the API
It sometimes makes sense to program and test embedded software against an API also found on the workstation. Java programs and programs using some specific database API or a standard middleware like CORBA can sometimes be tested in this manner.
The assumption is that the behavior of these high-level APIs will be sufficiently similar on the host and target systems. This is not necessarily true; for example, the behavior of a Java machine on a mobile phone is not likely to be the same as on a desktop, due to memory and processing power constraints, and different input and output devices.
It is also possible to simulate some aspects of user programs with no model of the target system at all. For example, control models as discussed above are interesting to study regardless of the precise details of the target system software stack.
Using UML and state charts to model pieces of software makes it possible to simulate the software on its own, without considering much of the target at all. Minimal assumptions are made about the properties of the underlying real-time operating system and target in these models.
A common and popular level of simulation is API-level simulation of a target operating system. This is also known as host-compiled simulation, since the embedded code is compiled to run on the host and not on the target system.
This type of simulation is popular since it is conceptually simple and easy to understand, and is something that an embedded developer can basically create on his or her own in incremental steps. It also allows developers to use popular programming and debug tools available in a desktop environment, like Visual Studio, Purify and Valgrind.
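The flavor of this approach can be sketched in a few lines. The snippet below re-implements a hypothetical RTOS message-queue API on top of host primitives, so that application code written against the RTOS-style API runs unchanged on the workstation. All names (`os_q_create` and friends) are invented for illustration and do not correspond to any particular operating system.

```python
# Sketch of API-level (host-compiled) simulation: a hypothetical RTOS
# message-queue API re-implemented on top of host primitives.
# All API names here are invented for illustration.
import queue

_queues = {}

def os_q_create(name, depth):
    _queues[name] = queue.Queue(maxsize=depth)

def os_q_send(name, msg):
    _queues[name].put(msg)          # host queue stands in for the RTOS queue

def os_q_receive(name):
    return _queues[name].get()

# "Application" code written against the RTOS-style API runs unchanged:
def sensor_task():
    os_q_send("readings", 42)

def logger_task():
    return os_q_receive("readings")

os_q_create("readings", 8)
sensor_task()
print(logger_task())  # -> 42
```

The application source is identical on host and target; only the implementation behind the API differs, which is precisely where the subtle behavioral differences discussed below creep in.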
It also has several drawbacks. First, it often ends up being quite expensive to maintain the API-level simulation and separate compilation setup required.
Second, the behavior of the simulated system will differ in subtle but important details from the real thing. Details like the precise scheduling of processes, size of memory, behavior of memory protection, availability of global variables, and target compiler peculiarities can make code that runs just fine in simulation break on the real target.
Third, it is impossible to make use of code only available as a target binary. Fourth, complex actions involving the hardware and low-level software, such as rebooting the system, are very hard to represent.
Most embedded operating-system vendors offer some form of host-compiled simulation. For example, WindRiver's VxSim and Enea's OSE Soft Kernel fit in this category. There are also many in-house implementations of API-level simulations, both for in-house and externally sourced operating systems.
The experience with such tools ranges from the very successful, where minimal debugging needs to be done after the switch-over to the real target, to utter failure, where some approximation in the API-level simulation did not hold and the code had to be extensively debugged on the physical target.
To resolve some of the issues with API-level simulation, the hardware-independent part of the embedded operating system is sometimes used together with a simulation of the hardware-dependent parts.

This is essentially para-virtualization: the device drivers and hardware-abstraction layer of the operating system are replaced with simplified code that interacts with the host operating system rather than actual hardware.
A paravirtual solution provides better insight into the behavior of the operating system kernel and services, but is still compiled to run on the host PC. It also requires access to the operating-system source code, and a programming effort corresponding to creating a new board-support package (BSP).
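A minimal sketch of the idea, with invented names: the kernel's hardware-independent code calls through a HAL interface, and in the paravirtual build that interface is implemented by host-backed code instead of a real device driver.

```python
# Sketch of the paravirtual idea: the OS's hardware-abstraction layer is
# an interface; in simulation, a host-backed implementation replaces the
# real UART driver. Class and method names are invented for illustration.
class ConsoleHal:
    def putc(self, ch):
        raise NotImplementedError

class HostConsoleHal(ConsoleHal):
    """Host-backed HAL: replaces the UART driver with host-side I/O."""
    def __init__(self):
        self.captured = []
    def putc(self, ch):
        self.captured.append(ch)    # a real BSP would poke device registers

def kernel_print(hal, text):
    # Hardware-independent kernel code: identical in simulation and on target.
    for ch in text:
        hal.putc(ch)

hal = HostConsoleHal()
kernel_print(hal, "boot ok")
print("".join(hal.captured))  # -> boot ok
```

Writing `HostConsoleHal` for every device in the system is what makes the effort comparable to a new board-support package.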
Attacking the hardware/software interface
Finally, the problem can be attacked at the hardware/software interface level. Here, a virtual target system is created which runs the same binary stack (from boot firmware to device drivers and operating system) as the physical target system.
This is achieved by simulating the instruction set of the actual target processor, along with the programming interface of the peripheral devices in the system and their behavior. The technical term for this is full-system simulation, since you are indeed simulating the full system and not just the processor.
The crucial and defining property of this type of simulation is that all software is the same as on the physical system, using the same device drivers and BSP code. It is common practice to develop the device drivers and BSP ports using virtual target systems in cases where the physical hardware is not yet available.
We also avoid the need to maintain a separate build setup and build chain for simulation, since it is the same as used for the physical target. Note that with virtual target systems, target properties like memory-management units, supervisor and hypervisor modes, endianness, word size, and floating-point precision are simulated and visible.
The main drawback of virtual targets is the initial work involved in creating the hardware model. However, practical experience shows that most projects can quite quickly recoup the initial investment. There are tools on the market that help make it easier, less expensive, and more effective to deploy virtual models.
Vendors like ARM, CoWare, Synopsys/Virtio, VaST, and Virtutech offer tools to create virtual systems appropriate for software development. For those who like open-source, the QEMU project offers a range of targets from which to start developing.
The role of Electronic Systems Level (ESL) design
Full-system simulation is a complex field in itself, with multiple levels of abstraction possible. At one end, there are tools coming from the EDA side aiming to provide some support for early software development for new chips.
This is often called ESL (Electronic Systems Level) design. Unfortunately, the simulation models are often quite slow since they attempt to model the precise cycle-by-cycle execution of the hardware. Execution speeds in the range of a few million instructions per second (MIPS) are the best you can hope for.
For large-scale software development and test, less detail and more speed is needed. To this end, transaction-level modeling and instruction-accurate processor simulators are used.
Such simulation systems provide a complete and correct implementation of the programming view of a system, but approximate the timing. The fastest systems use either just-in-time compilation from target to host, or hardware virtualization if the target and host are of the same architecture.

With such techniques, software execution speeds from a few hundred MIPS up to several thousand MIPS are possible, basically matching the speed of the physical hardware. This enables the use of virtualized software development, a software development methodology where most of the software work is done on the simulator rather than on physical hardware.
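The transaction-level style mentioned above can be sketched briefly: each bus access is a single function call carrying an address and data, with no pin-level or cycle-level activity modeled. The device, address map, and behavior below are invented for illustration.

```python
# Sketch of transaction-level modeling: one bus access == one function
# call, dispatched to a device model by address range. All names and
# addresses are invented for illustration.
class TimerDevice:
    """Purely functional timer model: advances once per read."""
    def __init__(self):
        self.count = 0
    def read(self, offset):
        self.count += 1
        return self.count

class Bus:
    def __init__(self):
        self.devmap = []
    def attach(self, base, size, dev):
        self.devmap.append((base, size, dev))
    def read(self, addr):            # the whole transaction in one call
        for base, size, dev in self.devmap:
            if base <= addr < base + size:
                return dev.read(addr - base)
        raise ValueError("bus error at 0x%x" % addr)

bus = Bus()
bus.attach(0x4000, 0x10, TimerDevice())
print(bus.read(0x4000), bus.read(0x4000))  # -> 1 2
```

Because nothing below the transaction is simulated, such models run orders of magnitude faster than cycle-accurate ones while still presenting a correct programming view.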
Where do instruction-set simulators fit in?
Standalone instruction-set simulators (ISS) are common and established in embedded compiler and debug toolsets. Such simulators usually simulate only the user-level instruction set of a processor, letting simple programs that do no real I/O run on the host.

Modeling of peripheral devices is typically limited to providing sequences of bytes to be read from certain memory locations. Operating systems cannot be run on such simulators, since the more complex devices they require, like timers and serial ports, are missing.
The simulators are also typically quite slow, since there is little value gained from making them faster. IDEs from vendors like IAR and Green Hills are typical examples of development tools including a basic ISS.
As a final note, as embedded systems start using multiple processors and multicore processors, virtualized software development is one of the best ways to get a grip on the complex debugging and diagnostics issues arising from the concurrent execution of multiple threads of control on multiple processors. Some recent articles on the topic include "Simulating and debugging multicore behavior," "Debugging realtime multiprocessor systems," and "Dearth of tools could stall multicore onslaught."
Putting the Pieces Together
So now we know how to simulate an embedded computer and run its software, and how to simulate the environment, user interface, and network in which it operates.
What remains is to tie the various pieces together so that the embedded software can indeed run on the virtual computer board, sense and control the virtual environment, obtain virtual user input, and communicate on the virtual network.
To achieve this, we have provided devices on the virtual computer board corresponding to the interfaces of the physical computer hardware. For the connection to the environment, this typically means modeling the processor-facing programming interface of the analog-to-digital and digital-to-analog converters and digital I/O lines.
The interface models also have an environment-facing side which is connected to the simulation of the environment, in order to read reasonable values and provide actuation data.
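A device model with these two faces can be sketched as follows. The processor-facing side exposes an ADC result register to the simulated software; the environment-facing side asks the environment simulation for the current sensor voltage. The register layout, reference voltage, and names are invented for illustration.

```python
# Two-sided device model sketch: a memory-mapped ADC whose processor-
# facing register read triggers an environment-facing sample. Register
# layout, names, and the 3.3 V / 12-bit conversion are illustrative.
class AdcModel:
    RESULT_REG = 0x0

    def __init__(self, sample_environment):
        # Environment-facing side: a callable into the environment model.
        self.sample_environment = sample_environment

    def mmio_read(self, offset):
        # Processor-facing side: the programming interface seen by software.
        if offset == self.RESULT_REG:
            volts = self.sample_environment()         # query the environment
            return int(volts / 3.3 * 4095)            # 12-bit conversion
        raise ValueError("unmapped ADC offset 0x%x" % offset)

# Environment model stub: a constant 1.65 V sensor reading.
adc = AdcModel(sample_environment=lambda: 1.65)
print(adc.mmio_read(0x0))  # -> 2047
```

In a real setup, the callable would be wired into the environment simulation from Part 1, so the control software reads values that evolve according to the simulated physics.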
In the same way, networks will be connected to the virtual computer boards using a model of some network interface device. The network simulation can connect the virtual system to other virtual systems, to a rest-of-network model, or some other simulated or real system as discussed above.
User interfaces will be represented in the virtual system by a model of the programming interface of a user-interface device like a serial port, LCD controller, keypad interface, LED, or other devices. That device model will in turn connect to a user interface simulator so that the programmer can interact with the system.
Figure 5, below, illustrates a typical setup with a serial port, network, and environment connected to a virtual system simulated in the full-system style.
|Figure 5: Simulation Environment|
Note that you cannot do all the work in simulation. However, practical experience indicates that you can resolve at least 80 percent, and usually 90 to 95 percent, of your software problems in the virtual world.
Even for real-time systems with hard deadlines, most programming problems are functional in nature and amenable to virtualized software development solutions.
Some published data from high-end server development indicates that several months of hardware debug time can be saved, compared to only starting the debug process once hardware arrives. In the next part of this series I will examine some simulation solutions and their benefits.
In the end, there is no getting around the fact that you have to test things on the physical hardware eventually, since that is the actual product that will be shipped to customers. The credo of "test what you fly and fly what you test" has to be followed to create a working system. Simulation helps you test what you fly, but it does not really let you fly what you test. A PC would look fairly silly in space.
However, we like to avoid a big-bang approach to hardware and software integration. The more gradual the move to physical hardware, the more effective your overall system and software development approach will be. This realization has led to the development of solutions that allow for the combination of simulated and physical system components.
Control software can be run on a PC in simulation while sending out control signals to the physical system. This requires that the PC has appropriate I/O abilities, usually realized using add-in cards. This creates "hardware-in-the-loop" simulators where the validity of software is tested against physical hardware.
In distributed systems containing many nodes connected on a network, a common approach is to add a new node as a computer simulation to a real test network. This simulation can be run on a PC or on special-purpose simulation hardware such as the dSpace Autobox (special hardware provides better real-time response times).
Inverting this scenario, the embedded software can be run on a real computer while the surrounding environment remains simulated. This allows real-time properties and the correct function of the software-computer combination to be tested, while an environment that cannot be replicated in the lab (such as deep space) is provided by simulation.
For networked systems, a physical network node can be connected to a simulation of the rest of the network. The simulations can be run on PCs, but often they require special simulation computer systems providing the compute power needed to respond in real time.
Examples of such a system are the NetHawk and PolyStar network simulators, which use racks of compute and interface boards to provide sufficient computing and I/O ability to feed a real-world telecom node with data at real-time rates.
For machines with complex hands-on user interfaces including steering wheels, multi-dimensional joysticks, buttons, dials, and embedded displays, it often makes sense to simulate the environment and computer system together with a physical expression of the controls.
An extreme example of this is the flight simulator used for training pilots, which creates a fairly good impression of real flying. Such control panels can be connected over computer buses like CAN or MIL-STD-1553, or by direct digital and analog connections.
Mixed physical and simulated systems are also very useful when updating an existing system. They make it possible to take an existing physical system, and then add some new network nodes or replace some computer unit in order to test new functionality in the real system context.
In the next and final installment of this series, we will discuss some concrete examples of how simulation has been used to improve embedded software development.
To read Part 1 in this series, go to "Simulating the world."
Next, in Part 3: Concrete examples of simulation solutions.
Jakob Engblom works as a business development manager at Virtutech. He has a PhD in Computer Systems from Uppsala University, and has worked with programming tools and simulation tools for embedded and real-time systems since 1997. You can contact him at firstname.lastname@example.org. For other publications by Jakob Engblom, see www.engbloms.se/jakob.html.