Simulating Embedded Systems for Software Development: Part 3

In this final part of our three-part series on simulating embedded systems, we will cover some examples from the real world, based on our experience with Virtutech Simics in a number of different projects and markets.
Clearly Cutting Time-to-Market
The development of server computers might seem a strange place to start a list of embedded application examples, but it provides many valuable insights into how software-intense systems can be developed.
While in the embedded world software has only recently come to dominate development costs and customer value perceptions, this has long been true in the server market.
Also, the difference between firmware on an embedded system and firmware in a server is quite minimal in practice. It is all about initializing hardware, running power-on self-test, and loading code onto a number of main processors to bootstrap the system.
Simics has been used to simulate new server generations for low-level software development and operating-system porting. A key goal was to reduce the time from first hardware availability to first successful power-on and boot. This time has traditionally been on the order of many months or even years, time spent ironing out hardware-software interface misunderstandings, as well as simple bugs in hardware and software.
With simulation, this time has been reduced by three to nine months, depending on the project. The key benefit brought by simulation is the ability to start testing low-level software and the hardware-software interface early on, using a virtual target built from the specification of the hardware design, rather than the physical hardware. The net effect is illustrated in Figure 6, below:
Simulation enables software and hardware development to proceed in parallel (which is a very common motivation for simulation solutions and virtualized software development in general). The simulator also makes it possible to decouple software and hardware development schedules, since the software does not need to wait for working physical hardware to be delivered. The simulator is a simpler deliverable.
What is also interesting is that we have seen actual software development time get shorter. The ability of the virtual environment to reproduce bugs and provide a more convenient debug setup saves development time.
Mundane tasks like downloading a new version of the boot code into target flash are much faster on a virtual target: it is just a fast copy from disk into the simulator. Less time is spent figuring out whether a bug is due to flaky hardware or to software problems.
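As a rough illustration, the contrast can be sketched in a few lines of Python. The SimulatedFlash class and its program method here are hypothetical stand-ins for a simulator's flash device model, not a real Simics API:

```python
# Minimal sketch (hypothetical API, not real Simics code): on a virtual
# target, "reflashing" the boot code is just a copy from a file on the
# host disk into the byte array backing the simulated flash device.

class SimulatedFlash:
    def __init__(self, size):
        self.size = size
        self.data = bytearray(size)  # backing store of the flash model

    def program(self, image, offset=0):
        """Load a new boot image instantly -- no probe, no erase cycles."""
        if offset + len(image) > self.size:
            raise ValueError("image does not fit in simulated flash")
        self.data[offset:offset + len(image)] = image

flash = SimulatedFlash(size=16 * 1024 * 1024)  # a 16 MB boot flash
boot_image = bytes([0xEA] * 4096)              # stand-in for a fresh build
flash.program(boot_image)                      # completes at disk-copy speed
```

On physical hardware, the same step means driving a JTAG probe or serial downloader and waiting through sector erase and programming cycles for every new build.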
In cases like this, where the hardware is being revised as the software is being developed, the simulator is also developed and delivered in incremental drops to the software team. Even a limited initial virtual target can be used for initial software work.
As time goes on, more parts of the target system are added to the simulation. Changes in the hardware design and specification are communicated in executable form by updates to the simulator. As a side effect, this improves communication between the hardware and software teams.
Technically, this was done using virtualized software development, targeting 64-bit RISC processors and the devices needed to build a server. Since the code depends on hypervisor and supervisor modes in the processors, and device drivers have to cope with endianness issues introduced by PCI and other interconnects, any other approach to simulation would be limited in value.
Maximum execution speed was essential, since booting an industrial-strength operating system like Solaris, AIX, Linux, or Windows takes many billions of instructions, even for a single-processor configuration. Simulator configurations range from a single backplane control processor to hundreds of main processors and many gigabytes of simulated memory.
Looking at our overview of simulation from the beginning of this series, the system is a fairly self-contained computer. There is no actual controlled environment, and the user interface is limited to serial consoles or telnet sessions for most work. The main stimulus comes from the software loaded into memory and on disks, and from the system configuration, rather than from external sources.
Networks of Networks: Telecom
Telecom nodes in the core cellular and fixed telephony networks of the world are among the most complex embedded systems in existence. A typical system is rack-based, and the basic building block is the board.
Each board contains one or more processors, usually some application-specific hardware, and a software stack based on a real-time operating system. Several boards are combined into a rack, and several racks form a single system.
Each rack has its own backplane network, and then extension boards connect each rack to the other racks in the system. The backplane network is usually Ethernet, ATM, or PCI/PCI Express. Since these systems are mission-critical, all boards and all networks are at least dual-redundant. We thus see a hierarchical system containing a large number of networks, with each level of hierarchy actually being a pair of redundant networks.
Simics has been used to virtualize several such systems, and they are, to our knowledge, the most complex embedded systems ever simulated. Virtual systems containing tens of processors are used daily, and configurations containing hundreds of target processors are not uncommon.
Figure 7, below, shows the virtual system setup for a generic telecom system containing two racks (we leave out the dual redundancy of the networks to make the picture easier to read):
To increase simulation speed, not all boards are fully virtualized and running the complete software stack. Some are stubs, simulated only at their interface to the backplane network.
The choice of full virtualization versus stub simulation depends on the usage scenarios for the virtual system. For example, digital signal processing boards need only provide some simple signals to the control software for most test cases. This makes it possible to collapse a large farm of powerful DSPs into a single very small and fast model.
In other cases, the DSP boards are fully virtualized to provide a test bed for the actual DSP software and its interaction with the control software component. The extension boards and backplane switch boards are also usually stubbed, since their functionality is really subsumed by a typical packet-level network simulation.
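The idea of a stub board can be sketched as follows. The DspFarmStub class and its packet format are invented for illustration; a real stub would implement the board's actual backplane protocol, but the principle is the same: answer control-plane traffic without simulating a single DSP instruction:

```python
# Illustrative sketch (hypothetical names and protocol): a stub board model
# that replaces a farm of fully simulated DSPs. It implements only the
# backplane-level interface seen by the control software, so the model is
# tiny and fast compared to instruction-level simulation of the DSPs.

class DspFarmStub:
    def __init__(self, num_channels):
        # Track per-channel state only; no DSP code runs in this model.
        self.channels = {ch: "idle" for ch in range(num_channels)}

    def handle_packet(self, packet):
        """React to a control-plane request arriving over the backplane."""
        cmd, ch = packet["cmd"], packet["channel"]
        if cmd == "allocate":
            self.channels[ch] = "active"
            return {"status": "ok", "channel": ch}
        if cmd == "release":
            self.channels[ch] = "idle"
            return {"status": "ok", "channel": ch}
        return {"status": "error", "reason": "unknown command"}
```

Swapping this stub for a fully virtualized DSP board model is then a configuration choice per test run, depending on whether the DSP software itself is under test.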
In order to provide data to process, the virtual system also has network connections to external data sources. Usually, virtual nodes are connected to physical test systems just like physical nodes, and these physical test systems provide both rest-of-network and packet generation-style network data.
The "environment" for such systems is thus the test equipment providing rest-of-network simulation of the phone network. An interesting aspect here is the timing between the virtual and physical systems. As the virtualized system provides features like total system pause and also exhibits a load-dependent execution speed, it will run at a varying pace compared to a physical counterpart.
Sometimes it will be slower, and sometimes faster (since the simulator efficiently compresses idle time, it is easy to run many times faster than the real world for a lightly loaded system). This means that the physical test system has to either listen to the time exported from the virtual system, or allow some slack in deadlines to avoid false negatives in tests.
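The two options can be sketched in a few lines. VirtualClock and check_deadline are hypothetical names; the point is that the tester judges latency in the simulator's virtual seconds, optionally with slack, rather than against the host wall clock:

```python
# Hedged sketch (hypothetical names): deadline checking against the virtual
# time exported by the simulator, so a target that runs slower or faster
# than real time does not produce false test failures.

class VirtualClock:
    """Stand-in for the time base exported by the virtual system; the
    simulator advances it with simulated execution, not host seconds."""
    def __init__(self):
        self.now = 0.0

    def advance(self, delta):
        self.now += delta

def check_deadline(clock, sent_at, deadline, slack=0.0):
    # Judge latency in virtual seconds; 'slack' covers residual drift when
    # the test equipment cannot track virtual time exactly.
    return (clock.now - sent_at) <= deadline + slack
```

A physical tester that cannot subscribe to the virtual time base would rely entirely on the slack term instead.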
The user interface of the system is just serial consoles. Theoretically, since each board provides two or more serial connections, there can be quite a few available at any one time. In a real system, most of these are not really physically accessible, but in the simulation, it is conveniently possible to talk to any serial console on any board.
Other benefits generated by the simulator are reduced setup and configuration times (writing a script is much faster than connecting physical boards and cables) and superior insight into how the system works. In the virtual environment, it is possible to inspect the complete state of all boards in the system at a single synchronized instant in time.
Simulating telecom systems has proven that virtualized software development technology can handle even the largest and most complex embedded computer systems.
One type of such system that we model in Simics contains models of more than ten different types of PowerPC, PowerQUICC, and DSP processors, and more than thirty different types of boards. Some of the devices have memory maps containing thousands of registers.
There are standard ASSPs like Freescale PowerQUICC II and III chips, custom ASICs, discrete Ethernet controllers, and FPGAs. Operating systems like Linux, VxWorks, OSE, and several in-house operating systems have all been run on the virtual hardware, including mixed simulations with different operating systems running on different boards.
Military Systems: Focus on Mechanics
A very different example from the above two comes from the field of military systems. In this case, the system developers already had a large model of the mechanics of the target system in place, developed in MATLAB/Simulink and other tools. Some software was being developed using this simulator by interfacing code directly to the mechanical system model (which is a standard feature of MATLAB).
No real operating system or other component of the software stack was involved, just the user code algorithms. This limited the validation value of the code tests, since the code was not compiled with the right compilers or run on the actual type of processor found in the target system.
In order to take testing in simulation further and reduce the dependency on physical development boards and physical prototypes, full virtual nodes were integrated into the existing simulation. This was achieved using a special-purpose simulation middleware package that eased the integration of simulators of various makes and styles. Using such a middleware has some impact on performance and simulation system complexity, but it was already in place as the integration hub of the system.
The middleware made it possible to run various parts of the simulation on separate host computers to increase overall simulation speed. Note that the simulation speed-up from such a distributed simulation solution is limited by the amount of synchronization necessary to maintain a coherent view of the simulated system state.
Inside the virtual target computer boards, device models for analog/digital converters connect to the middleware to obtain environment data and provide actuator data. Fully virtual target machines are used to test the actual code being used in the physical target. Among other uses, this makes it possible to collect code coverage statistics without using instrumented code. The simulated solution provides the insight needed to collect traces of execution, without any change to the target code.
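The bridge between the device model and the middleware can be sketched as follows. The AdcModel class, register layout, and voltage range are hypothetical; the structure, a register read that pulls a value from the environment model and quantizes it, is the general pattern:

```python
# Illustrative sketch (hypothetical device, not a real A/D converter model):
# when target software reads the converter's data register, the model asks
# the simulation middleware for the current physical value from the
# mechanical model and returns a quantized sample.

class AdcModel:
    DATA_REG = 0x04  # offset of the data register in the device's memory map

    def __init__(self, read_environment):
        # read_environment(): callback into the middleware that returns a
        # physical quantity (here, volts) from the mechanical simulation.
        self.read_environment = read_environment

    def read_register(self, offset):
        if offset == self.DATA_REG:
            volts = self.read_environment()
            # Quantize to a 12-bit sample over an assumed 0..5 V input range.
            return max(0, min(4095, int(volts / 5.0 * 4095)))
        return 0  # unmodeled registers read as zero in this sketch
```

The actuator path runs the same way in reverse: a register write from the target software becomes a value pushed through the middleware into the mechanical model.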
The most notable features of this solution are the varying level of simulation of the compute nodes, and the large investment already in place in the mechanical model.
The user interface and networking aspects are basically nonexistent, since the computer boards are used to control the mechanical system and not to interact with users. Simulation was an established methodology in the project, and the simulation system was extended to also cover the computer part of the system.
Custom and Standard Computer Parts
It is worth noting that simulation of the computer part of an embedded system is useful regardless of the degree of custom hardware used. While much tool attention is focused on designers and users of custom system-on-chip devices, simulation solutions and virtual hardware platforms are just as useful for systems built using standard parts.
The value of a virtual computer board and simulated system to the software developer does not really depend on whether the processor chip is designed in-house; it comes from the use of simulation and virtual systems as a methodology to make software development faster and better.
Even when hardware is readily available or even in legacy state, simulation solutions bring benefits for debugging and testing that physical hardware cannot match.
Simulation is a very powerful technique for engineering embedded systems in general, and developing the software component in particular. Depending on where the risk and expense lies in system development, it typically makes sense to build simulators for one or more system parts.
Experience indicates that using simulation makes it possible to develop more robust embedded systems faster, letting hardware and software development schedules overlap and providing better debug and testing facilities for software.
There are many robust software packages available in the market today that help you construct system simulations, and with powerful PCs available very cheaply, there is no good reason not to investigate simulation for your current or next project.

Jakob Engblom works as a business development manager at Virtutech. He has a PhD in Computer Systems from Uppsala University, and has worked with programming tools and simulation tools for embedded and real-time systems since 1997. You can contact him at firstname.lastname@example.org. For other publications by Jakob Engblom, see www.engbloms.se/jakob.html.