
Simulation – better than the real thing?

July 06, 2013

Colin Walls

With a complex embedded system, work needs to start on the software long before the hardware is available. Indeed, it may be necessary to begin even before the hardware design is finalized. Because software engineers need to test their code as they go along, they need an execution environment from day 1. Numerous options are reviewed in this article, with a particular focus on simulation technologies.

There has always been tension between hardware and software developers. In an embedded design, if something goes wrong both parties tend to assume that the other is at fault. Worse still, if a hardware design flaw is located late in the development process, it may be too late to fix it economically, so the only option is to address the problem in software. As a software engineer, my, does that rankle!

A result of this tension is an attitude among some embedded software developers that hardware is a necessary evil that exists solely to execute the software. So, any means to eliminate the hardware from the software development process is attractive, which leads to the conclusion that simulation is a Good Thing.

What is simulation?
Broadly, a simulator for embedded software development is an environment that enables software to be run without the final hardware being available. There are a number of ways in which this may be accomplished:

  • Logic Simulation – The hardware logic is simulated at the lowest level – the logic gates themselves. This is ideal for developing the hardware design, as the detailed behavior of the system can be verified, including the complex timing relationships that can lead to hard-to-find bugs. Modeling a complete embedded system and executing code on it, though theoretically possible, would be painfully slow, and it is quite likely that a suitable model of the CPU would not be available anyway.
  • Hardware/Software Co-simulation – Linking an instruction set simulator (ISS – see below) to a logic simulator (see above) achieves a compromise between speed and precision. This makes sense because the CPU design is typically already fully proven, so a gate-accurate model of it is overkill, and the much greater performance of an ISS is very welcome. Of course, such an environment does offer slightly less timing precision.
  • Instruction Set Simulation – An ISS reads the code, instruction by instruction, and simulates its execution on the chosen CPU. It is somewhat slower than real time, but very precise, and it can give useful execution-time estimates. Typically, the CPU simulator is surrounded by functional models of the peripheral devices (a sketch of such a model appears after this list). Because an ISS provides a non-intrusive yet transparent way to execute and debug code, it can be an excellent approach for functional debugging of low-level code such as drivers.
  • Host (Native) Code Execution – Running the code on the host (desktop) computer delivers the maximum performance – often exceeding that of the real target hardware. For it to be effective, the environment must offer functional models of peripherals and relevant integration with the host computer's own peripherals (networking, serial I/O, USB, etc.); the second sketch after this list shows the usual technique. For larger applications, this approach enables considerable progress to be made before hardware is available, and it is an economical solution for large, distributed teams. The downside is that code executing on the (normally) x86 CPU architecture of a desktop computer has different timing characteristics from the target embedded CPU, which may be ARM, PowerPC, ColdFire, etc.
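
To make the ISS approach concrete, here is a minimal sketch in C of the kind of functional peripheral model that might surround the CPU simulator. Everything in it is illustrative: the register layout, the callback names, and the commented-out raise_irq() hook are assumptions for the sake of the example, not the API of any real simulator.

```c
#include <stdint.h>
#include <stdio.h>

/* Functional model of a hypothetical count-down timer peripheral. */
typedef struct {
    uint32_t count;    /* current count          */
    uint32_t reload;   /* value loaded on expiry */
    int      enabled;  /* control register bit 0 */
} timer_model_t;

/* An ISS core would call this when the simulated CPU writes a word
 * inside the timer's address window. */
void timer_write(timer_model_t *t, uint32_t offset, uint32_t value)
{
    switch (offset) {
    case 0x0: t->reload = value; t->count = value; break; /* reload reg  */
    case 0x4: t->enabled = (int)(value & 1u);      break; /* control reg */
    }
}

/* An ISS core would call this once per simulated instruction (or per
 * batch of cycles) so the model advances in step with the CPU model. */
void timer_tick(timer_model_t *t)
{
    if (!t->enabled)
        return;
    if (t->count == 0) {
        t->count = t->reload;
        /* raise_irq(TIMER_IRQ); - hypothetical hook to assert the
           simulated interrupt line */
    } else {
        t->count--;
    }
}

/* Stand-alone demonstration: program the timer and watch it count. */
int main(void)
{
    timer_model_t t = { 0, 0, 0 };
    timer_write(&t, 0x0, 3u);   /* reload register = 3 */
    timer_write(&t, 0x4, 1u);   /* enable              */
    for (int i = 0; i < 8; i++) {
        timer_tick(&t);
        printf("tick %d: count = %u\n", i, (unsigned)t.count);
    }
    return 0;
}
```

The point of the design is that the model is purely functional: it responds to register accesses and advances with simulated time, with no attempt at gate-level accuracy – which is exactly why an ISS runs so much faster than a logic simulation.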
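
For host (native) execution, the usual technique is to hide hardware access behind a small abstraction so that identical application code links against either the real registers or a functional model. Below is a minimal sketch, again in C; the TARGET_BUILD macro, the uart_putc() name, and the register address are all hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

#ifdef TARGET_BUILD
/* Target build: write directly to the (hypothetical) UART data
 * register; the address is illustrative, not from any real part. */
#define UART_TX (*(volatile uint8_t *)0x4000A000u)

void uart_putc(char c)
{
    UART_TX = (uint8_t)c;
}
#else
/* Host build: a functional model of the UART, routed to stdout -
 * the desktop stands in for the serial port. */
void uart_putc(char c)
{
    putchar(c);
    fflush(stdout);
}
#endif

/* The application code is identical in both environments. */
int main(void)
{
    const char *msg = "boot OK\r\n";
    while (*msg)
        uart_putc(*msg++);
    return 0;
}
```

Built on the host (without TARGET_BUILD defined), the "UART" is simply stdout – exactly the kind of integration with the host computer's own peripherals described above.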
