Simulation – better than the real thing?


With a complex embedded system, work needs to start on the software long before the hardware is available. Indeed, it may be necessary to begin even before the hardware design is finalized. Because software engineers need to test their code as they go along, they need an execution environment from day 1. Numerous options are reviewed in this article, with a particular focus on simulation technologies.

There has always been tension between hardware and software developers. In an embedded design, if something goes wrong both parties tend to assume that the other is at fault. Worse still, if a hardware design flaw is located late in the development process, it may be too late to fix it economically, so the only option is to address the problem in software. As a software engineer, my, does that rankle!

A result of this tension is an attitude among some embedded software developers that hardware is a necessary evil that exists solely to execute the software. So, any means to eliminate the hardware from the software development process is attractive, which leads to the conclusion that simulation is a Good Thing.

What is simulation?
Broadly, a simulator for embedded software development is an environment that enables software to be run without having the final hardware available. There are a number of ways in which this may be accomplished:

  • Logic simulation – The hardware logic is simulated at the lowest level – the logic gates themselves. This is ideal for developing the hardware design, as detailed behavior of the system can be verified, including complex timing relationships, which can lead to hard-to-find bugs. Modeling a complete embedded system and executing code, though theoretically possible, would be painfully slow. It is also quite likely that a suitable model of the CPU will not be available.
  • Hardware/Software Co-simulation – Using an instruction set simulator (ISS – see below), linked to a logic simulator (see above), a compromise in performance may be achieved. This makes complete sense as, typically, the CPU design is fully proven, so a gate-accurate model of it is overkill and the much greater performance of an ISS is very welcome. Of course, such an environment does offer slightly less timing precision.
  • Instruction Set Simulation – An ISS reads the code, instruction by instruction, and simulates its execution on the chosen CPU. It is somewhat slower than real time, but very precise and can give useful execution-time estimates. Typically, the CPU simulator is surrounded by functional models of peripheral devices. This can be an excellent approach for functional debugging of low-level code such as drivers. An ISS provides a non-intrusive, transparent way to execute and debug code.
  • Host (Native) Code Execution – Running the code on the host (desktop) computer delivers the maximum performance – often exceeding that of the real target hardware. For it to be effective, the environment must offer functional models of peripherals and relevant integration with the host computer’s peripherals (like networking, serial I/O, USB, etc.). For larger applications, this approach enables considerable progress to be made prior to hardware availability and offers an economic solution for larger, distributed teams. The downside of this approach is that executing code on the (normally) x86 CPU architecture of a desktop computer has different timing characteristics compared with the target embedded CPU, which may be ARM, PowerPC, ColdFire, etc.

Comparing simulation technologies
Although there are a variety of approaches to simulation from which an embedded software developer may choose, they are not competing technologies. Each one has its strengths and weaknesses, benefits and downsides. Broadly speaking, there is a tradeoff between speed of simulated code execution and the degree of timing accuracy that the simulation delivers (Figure 1).

There is a reasonably good straight-line correlation when these parameters are compared in this way.

Breaking the Rules
An obvious question is whether there is a technology that falls outside of this plot. Is there a way to get higher precision in the time domain, while also simulating code execution at a reasonable speed? The answer is yes, but it comes at a cost. A number of vendors sell technology to perform logic simulation at much higher speeds. This is done by using dedicated hardware that can be programmed to behave like the logic to be simulated. Such emulators do not come cheap, but they do offer a flexible means to debug code in a time-accurate fashion before real hardware is available or even has its design finalized.

Because every embedded system’s hardware is different, getting code to run reliably on such a new platform is challenging. Although it may seem desirable to get hold of target hardware as soon as possible, that may be a problem. Simulation technologies allow progress to be made with software development earlier, before the hardware is available, and they can also be useful after hardware has been completed, as the execution environment is uniquely controllable and transparent.

Colin Walls has over thirty years’ experience in the electronics industry, largely dedicated to embedded software. A frequent presenter at conferences and seminars and author of numerous technical articles and two books on embedded software, Colin is an embedded software technologist with Mentor Embedded (the Mentor Graphics Embedded Software Division), and is based in the UK.
