How to make virtual prototyping better than designing with hardware: Part 1

Engineers embrace model-based design in many different disciplines associated with product development, for example, finite element analysis in mechanical engineering and circuit simulation for electrical engineering.

Modeling enables development before physical prototypes are available, and it enables development that is difficult or impossible with the actual product. Virtual prototyping of embedded hardware brings the model-based design paradigm to embedded system development.

The use of virtual prototypes prior to hardware delivery has well-documented benefits for architectural exploration, early software development, golden reference specifications, reduced silicon turns, and software/hardware co-verification [5]. This article focuses on the benefits of the virtual prototype after physical prototype availability. The Google Android Emulator is a well-known example of how a VP delivers value even after silicon is available [3].

A successful virtual prototype (VP) of Electronic Control Units (ECU) has five key characteristics:

1) Provides for simultaneous verification of hardware and software (co-verification)

2) Consists of behavioral models of the CPU, peripherals and Application Specific Integrated Circuits (ASICs) that provide bench look and feel for the software programmer

3) Loads and executes the same executable image as the physical ECU

4) Executes target firmware no more than one order of magnitude slower than the physical hardware (20-200 million instructions per second), fast enough for software development

5) May be aggregated into larger super-simulations of multiple ECUs and include sensor and actuator plant models

The increasing complexities of embedded software development, such as multiple cores and the integration of external IP, conspire against the best software designers and push requirements for better verification tools in the implementation phases of software design and coding.

In addition to the increased functional complexity, automotive ECUs are now multi-sensor/actuator systems with network protocols that encompass entire vehicles or even multiple vehicles. Systems of systems, decreased package size, and hermetically sealed packages all reduce (or even eliminate) the visibility of the firmware.

In automotive, and many other industries, feature sets are now growing beyond one dedicated controller or subsystem and into distributed subsystems. VPs, when combined with models of devices under control (often called plant models), open the door for developers and verification engineers to work with the product in the context of the “full” system.

While there will always be the requirement to verify the performance of an actual system with the physical product, a virtual environment enables testing that may be very difficult or impossible to do in a repeatable manner.

Take for example someone who is developing adaptive cruise control for a vehicle fleet. While the base target firmware can likely be brought up in isolation, complete testing of the controller requires interaction with many vehicle mechanical and electrical systems.

Testing such systems with only a physical prototype can lead to very costly, and often dangerous, test procedures. Virtual environments of full vehicles allow developers to conduct tests and experiments that would be costly and dangerous on the real vehicle even after the hardware ECU is available.

A common misconception is that once the physical hardware is available all software development should switch to bench development and no longer use the VP. A more efficient and economical method uses the VP benefits of visibility, controllability, availability, repeatability and testability even when suitable bench hardware is available.

At the management and user level, we also discuss the acceptability of VPs for software, systems, and verification engineering development environments. This article makes the case that the VP is much more than a pre-hardware tool: the VP increases productivity and provides essential debug and test capabilities that complement hardware benches.

Giving developers more visibility
VPs provide many levels of visibility to the engineer. VPs can record signal changes, or even internal RAM modifications in a VCD (Value Change Dump) dump format, or other file output. VCD is an ASCII-based format commonly used by EDA logic simulation tools and may be viewed by various programs including GtkWave and SimVision.

Figure 1 below shows an example output of the viewer and illustrates a communication sequence between the Microcontroller Unit (MCU) and RF receiver IC in an automotive body controller.

Figure 1. An example output of the waveform viewer illustrating the communication sequence between the automotive body controller microcontroller unit and radio frequency receiver integrated circuit.

The protocol consists of reset markers, an opcode transmission of 0x0D7A (twice), followed by a data transmission of 0x01C8 (also twice). Figure 2 below shows a detailed view of the entire communication sequence with a zoom-in on its five sections.

Similar to the use of a storage oscilloscope, the VCD waveform allows easy comparison of the entire communication sequence. Because the simulation is deterministic, repeated captures are identical to previous ones allowing easy comparison of a small virtual hardware or algorithm change.

As shown in Figure 2 below, the critical timing parameters can be confirmed and annotations added. VPs, even after hardware is available, open up this level of debugging without complicated and costly bench tools.

Figure 2. Detailed view of the radio frequency receiver to microcontroller communication sequence with a zoom-in on the five sections of the communication sequence.


A large benefit of VCD traces is that simulation visibility also extends to the inner workings of the MCU, for example, the measurement of interrupt latency.

On the bench, it is often difficult, if not impossible, to measure the time from interrupt request assertion to the beginning of the actual software interrupt service routine. Each ASIC (or SoC or FPGA) model is also created with special registers that provide visibility into its inner functionality.

A special register may be read-only (allowing visibility) or read/write (allowing visibility and control). For example, an accelerometer model may provide a writable register called “acceleration” that represents the applied force in units of gravity (Gs).

The test bench, via a GUI or automated scripts, modifies the acceleration register during simulation to represent vehicle deceleration. In another example, a power supply model uses special registers to display the contents of internal registers and Computer Operating Properly (COP) status, as well as visual indicators of the output voltages of the ASIC.
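A minimal sketch of the special-register idea, using the accelerometer example above: the test bench writes the register to inject a stimulus, and the firmware-facing side reads a status register that the model updates in response. The register names, threshold, and status encoding are assumptions for illustration.

```python
# Sketch of a peripheral model exposing "special registers" for visibility and
# control. Register names, threshold, and status bits are illustrative.

class AccelerometerModel:
    def __init__(self):
        self.regs = {"acceleration": 0.0}   # writable: test bench sets applied Gs
        self.status = 0x00                  # read-only: firmware-visible status

    def write_special(self, name, value):
        """Test-bench side: inject a stimulus such as vehicle deceleration."""
        self.regs[name] = value
        # Model reacts: set an over-threshold status bit the firmware can read.
        self.status = 0x01 if abs(value) > 2.0 else 0x00

    def read_status(self):
        """MCU/firmware side: read the memory-mapped status register."""
        return self.status

accel = AccelerometerModel()
accel.write_special("acceleration", -3.5)   # simulate hard braking
```

A GUI checkbox or slider in the virtual test bench would simply call `write_special` during simulation.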

While tracing of external data is possible with various test tools on the bench, the VP allows the software engineer access to the internal states of the hardware as well. The VP allows the engineer to quickly and easily access traces of internal signals and software events that are not visible on the bench and focus debug effort directly on the problem.

Communication messages are often abstracted to a transaction level that sends the entire message as a packet between models, primarily to increase simulation speed. Common automotive message formats such as Controller Area Network (CAN), Local Interconnect Network (LIN), Inter-Integrated Circuit (I2C) and Serial Peripheral Interface (SPI) protocols can all be expressed as transactions.

The transaction protocol retains transmission timing information. The higher level of abstraction increases simulation speed and avoids implementing shift registers and transmitting unnecessary information. It also gives the software engineer an easier view of the data.

For example, instead of trying to decipher individual bits of a CAN message the software developer views the entire message as one piece of data. The underlying models of the MCU CAN controller correctly arbitrate the messages and emulate the bit by bit physical hardware layer.
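The transaction abstraction can be sketched as a simple data object plus an arbitration step; in real CAN, the frame with the numerically lowest identifier wins arbitration. The class and field names here are illustrative, not from any particular modeling library.

```python
# Sketch of transaction-level message passing: a whole CAN frame travels between
# models as one object instead of bit by bit. Field names are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class CanTransaction:
    identifier: int       # 11-bit standard ID, also used for arbitration
    data: bytes           # 0-8 payload bytes, delivered as one unit
    timestamp_ns: int     # transmission timing kept with the transaction

def arbitrate(pending):
    """CAN arbitration: the numerically lowest identifier dominates the bus."""
    return min(pending, key=lambda t: t.identifier)

frames = [CanTransaction(0x2A0, bytes([0x01, 0xC8]), 1000),
          CanTransaction(0x110, bytes([0xFF]), 1000)]
winner = arbitrate(frames)   # 0x110 dominates 0x2A0
```

The software developer sees `winner.data` as one piece of data rather than deciphering individual bits.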

As part of the abstraction, the VP may be commanded to display the communication messages between the MCU and the various ASICs, or to display the signal communications between the ECU and the user interface.

VPs include bus monitor instrumentation to record the SPI transfers when a chip select is active. For example, at the trailing edge of a chip select, the bus monitor buffer is output with a simple print statement resulting in a display such as:

MCU SPI to EE: 0x03 0x00 0x3C 0x00 0x00
EE SPI to MCU: Hi-Z Hi-Z Hi-Z 0x9F 0xFF
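A sketch of bus-monitor instrumentation that could produce a display like the one above: bytes are buffered while the chip select is active and formatted at its trailing edge, with `None` standing in for tri-stated (Hi-Z) bytes. The class and method names are hypothetical.

```python
# Sketch of a SPI bus monitor: buffer bytes while chip select is active, then
# format the buffer at the trailing edge. "Hi-Z" marks tri-stated bytes.

class SpiBusMonitor:
    def __init__(self, label):
        self.label, self.buffer = label, []

    def on_byte(self, value):
        # value is an int, or None when the line is tri-stated (Hi-Z)
        self.buffer.append("Hi-Z" if value is None else f"0x{value:02X}")

    def on_cs_trailing_edge(self):
        """Emit one display line and clear the buffer for the next transfer."""
        text = f"{self.label}: " + " ".join(self.buffer)
        self.buffer = []
        return text

mon = SpiBusMonitor("MCU SPI to EE")
for b in (0x03, 0x00, 0x3C, 0x00, 0x00):
    mon.on_byte(b)
line = mon.on_cs_trailing_edge()   # matches the MCU-to-EEPROM line above

slv = SpiBusMonitor("EE SPI to MCU")
for b in (None, None, None, 0x9F, 0xFF):
    slv.on_byte(b)
reply = slv.on_cs_trailing_edge()  # matches the EEPROM-to-MCU line above
```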

Intra-model checking requires advance planning. As described previously, the communication interface for SPI is transaction based and is designed to communicate the phase and polarity from the master to the slave device.

Phase and polarity describe if the synchronous clock idles high or low, and also which edge shifts/samples the data. Improper phase/polarity SPI communications may work on the bench due to bus capacitance and hold times, but may fail when exposed to temperature, or part-to-part differences.

Because of the checking built into the slave ASIC devices, the virtual prototype has detected (and prevented release of) improper phase and polarity hardware/software configurations.
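The phase/polarity check can be sketched as follows: the slave model compares the master's clock polarity (CPOL) and clock phase (CPHA) settings against its own configuration and records a mismatch instead of silently mis-sampling data. The class names and error-handling policy are assumptions for illustration.

```python
# Sketch of a checking SPI slave model: flag phase/polarity mismatches that a
# physical bench might mask. Names and error policy are illustrative.

class SpiSlaveModel:
    def __init__(self, cpol, cpha):
        self.cpol, self.cpha = cpol, cpha
        self.errors = []

    def receive(self, master_cpol, master_cpha, data):
        if (master_cpol, master_cpha) != (self.cpol, self.cpha):
            self.errors.append(
                f"SPI mode mismatch: master CPOL={master_cpol}/CPHA={master_cpha}, "
                f"slave expects CPOL={self.cpol}/CPHA={self.cpha}")
            return None     # refuse the transfer; a real bench might "work" anyway
        return data

slave = SpiSlaveModel(cpol=0, cpha=1)
result = slave.receive(master_cpol=0, master_cpha=0, data=0x0D7A)
# result is None and slave.errors records the misconfiguration
```

A regression run that checks `slave.errors` after every simulation would catch the improper configuration before release.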

Debugging and Software Analysis
In any control-based embedded system, one debugging difficulty is coordinating the stepping of the firmware with the physical device under test.

Consider an engine controller and engine (or a dynamic bench simulator that models the engine signals). During bench debugging, a breakpoint in an interrupt routine forces one-shot debugging because the rest of the hardware bench cannot pause.

The internal combustion engine continues rotating at 6000 RPM; hence, the user gets only one meaningful opportunity to look at the data structures in the interrupt routine. The interrupt routine then causes several pending interrupts to be executed in priority order, but not necessarily in the sequence of the real-time events of the engine.

In a VP, one can synchronously pause and restart the engine controller model and engine plant model together. The stimuli, waveforms, and sensor models are constructed to freeze when a debugger pause is issued and just as easily resume when the simulation continues.

The example extends to almost every automotive ECU product area. In air bag systems, the “crash waveform” can be paused and resumed. This useful VP debug environment does not go away once the physical hardware is available; in fact, its usefulness increases as the VP provides a powerful environment to work through issues that are found on the physical bench.

Multi-core debugging further accentuates the need for virtual prototypes because the parallel machines can be stopped in synchronization and all cores are visible [2]. Gathering of data on the bench typically occurs only if there is a specific need, while gathering of data in a VP environment can occur for every event on every simulation run.

The number of physical prototypes, scope probes, triggers, and breakpoints limits bench development. ICE (In-Circuit Emulation) and BDM (Background Debug Mode) debugging is typically limited to a small number (2-4) of simultaneous hardware breakpoints. In contrast, the VP has an essentially unlimited number of simulation breakpoints and is restricted only by disk space and simulation time.

Figure 3 below illustrates the data collection available in a VP using a streaming trace interface of the simulation and a post-analysis program to gather statistics of the interrupt request durations. In this example plot we learn that the average interrupt latency is 10.523 microseconds.

Figure 3. Interrupt service routine duration can be gathered from the virtual prototype using a simulation streaming trace interface and a post-analysis program.


Figure 4 below shows a similar graph, but displays data on the MCU interrupt lock duration. Real time periods where the interrupts are inhibited are critical to understand and this figure shows that the maximum lock duration for this ECU is 78.5 milliseconds.

Further analysis shows that this rather long time occurs at ECU startup; excluding startup, the maximum lock duration is 218 μs and the average lock duration is 3 μs.
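A sketch of this kind of post-analysis, assuming a simple (time, lock/unlock) event stream from the streaming trace interface; the event format is an assumption, and the numbers mirror the durations quoted in the text.

```python
# Sketch of post-analysis of a streaming trace: pair lock/unlock events and
# compute interrupt-lock durations. The (time_us, kind) format is an assumption.

def lock_durations(events):
    """events: list of (time_us, kind) with kind "lock" or "unlock".
    Returns the list of lock-window durations in microseconds."""
    durations, start = [], None
    for time_us, kind in sorted(events):
        if kind == "lock" and start is None:
            start = time_us                    # lock window opens
        elif kind == "unlock" and start is not None:
            durations.append(time_us - start)  # lock window closes
            start = None
    return durations

trace = [(0, "lock"), (78500, "unlock"),       # long startup lock window
         (100000, "lock"), (100003, "unlock"),
         (200000, "lock"), (200218, "unlock")]
d = lock_durations(trace)
worst_excluding_startup = max(d[1:])           # 218 µs, as quoted in the text
```

The same pairing approach works for interrupt-request durations, feeding a plotting tool such as gnuplot.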

Figure 4. Microcontroller interrupt service lock duration can also be gathered from the virtual prototype using a streaming trace interface and gnuplot.


In several projects the authors find that software engineers prefer VP use in late stages of the development cycle because it makes debugging the software much easier. By bypassing lengthy bench re-flash procedures, the software edit-test cycle can be much faster.

When bench problems appear undiagnosable, the software engineers return to the VP for deep investigation into the internal signals of the MCU. The visibility of signals, software analysis capabilities, and convenience of the tools result in software with fewer defects.

Controlling simulation stimuli
The careful control of simulation stimuli can expose a faulty implementation of a system-level requirement. For example, consider the following requirement (Figure 5 below) of a state machine for de-bouncing a switch input.

Figure 5. The originally specified state machine requirement was inadequate

The switch input requirement for this simple, timer-supervised implementation is to recognize the switch after it is in the on position for more than 78 μs. Any zero sample of the switch reverts the state machine to the off state.

Adding a Perl script to the simulation allowed the switch input to be delayed by 0.5 μs in each simulation run. Testing quickly exposed two state machine implementation issues, as illustrated by the modified state machine in Figure 6 below.

Figure 6. Testing revealed that the state machine requirement did not properly account for the switch remaining on, or that the timer needed to be reset.

Introducing the switch change at or near the time of the timer expiration uncovered firmware timing race conditions. This type of precisely controlled stimulus is often difficult to accomplish on the hardware bench because the test stimuli are fed asynchronously to the debugger, which may be paused or single-stepping.
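One plausible reading of the corrected state machine, sketched in code: every off sample fully resets the qualification timer (the reset the original specification missed), and a switch that simply remains on is recognized once the 78 μs requirement is met. The class and sampling scheme are illustrative, not the authors' actual implementation.

```python
# Sketch of the corrected de-bounce state machine: the timer resets on every
# off sample, and a switch that stays on is eventually recognized.
# Class name and sampling interface are assumptions.

DEBOUNCE_US = 78   # qualification time from the requirement

class DebouncedSwitch:
    def __init__(self):
        self.state = "OFF"
        self.on_time_us = 0

    def sample(self, raw_on, dt_us):
        if not raw_on:
            self.state, self.on_time_us = "OFF", 0   # zero sample: full reset
        else:
            self.on_time_us += dt_us
            if self.on_time_us > DEBOUNCE_US:
                self.state = "ON"                    # held on long enough
        return self.state

sw = DebouncedSwitch()
for _ in range(200):           # switch held on; sampled every 1 µs
    state = sw.sample(True, 1)
# state is "ON" once the 78 µs qualification time has elapsed
```

A stimulus script can then sweep the switch edge in 0.5 μs steps across the timer expiration, as the Perl script above did, to hunt for race conditions.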

A scriptable Graphical User Interface (GUI) made available to all software engineers is easy to modify and encourages controlled testing of the ECU at all stages of the design cycle.

The aspect of controllability is a model design requirement. A technique known as “fault injection” allows the end-user to induce conditions that would normally require custom hardware variants. For example, an output driver ASIC device may have physical circuits to detect fault conditions such as “short to ground”, “open output” or “thermal overload”.

On a physical bench, creating the “short to ground” and “open output” faults requires extensive but feasible switchable loads attached to the device. The “thermal overload” condition is difficult to implement and test on the physical bench. Contrast the required instrumentation for the physical device with the models:

In simulation, all of the fault conditions are modeled with simple GUI checkbox parameters. The checkbox input causes the model to respond with an appropriate output register or port change emulating the induced fault. An MCU read of the register or port causes the firmware to act as if the fault had occurred and allows full checking of the diagnostic routines of the ECU.

The controllability of the VP is superior to the bench because of the direct tie between the ECU and plant models, and because the models are built with additional “fault injection” registers that can direct the model state.
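The fault-injection mechanism might be sketched like this: the test bench (or a GUI checkbox) writes a fault-injection register, and the model mirrors the fault into the diagnostic status register that the firmware reads. The bit assignments and class names are illustrative assumptions.

```python
# Sketch of fault injection via model registers: a GUI checkbox writes the
# fault-injection register; the firmware-visible status register mirrors it.
# Bit assignments are illustrative.

SHORT_TO_GROUND  = 0x01
OPEN_OUTPUT      = 0x02
THERMAL_OVERLOAD = 0x04

class OutputDriverModel:
    def __init__(self):
        self.fault_inject = 0x00    # written by the test bench / GUI checkboxes

    def inject(self, fault_bit, enable=True):
        """Test-bench side: set or clear an induced fault condition."""
        if enable:
            self.fault_inject |= fault_bit
        else:
            self.fault_inject &= ~fault_bit

    def read_status(self):
        """Firmware-visible diagnostic register mirrors the injected faults."""
        return self.fault_inject

drv = OutputDriverModel()
drv.inject(THERMAL_OVERLOAD)    # a fault that is impractical on a physical bench
fault = drv.read_status() & THERMAL_OVERLOAD
```

An MCU read of the status register then drives the firmware's diagnostic routines exactly as an induced physical fault would.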

Availability of the development environment
The virtual nature of the virtual prototype allows for greater availability of the development environment, whether the engineering team is global or local. Multiple geographical locations for a project team are very common in most major corporations.

The geographic dispersion is problematic for embedded development because of the complexity and size of each hardware development bench. Embedded development simply requires more setup and care than traditional server/desktop software.

Physical bench embedded software development requires a hardware board (only a few may exist), power supplies, oscilloscopes, voltage and current meters, connections for debuggers, and static and dynamic test panels to provide stimuli. The hardware bench setup is often very costly and not portable as illustrated by Figure 7 below .

Figure 7. This physical bench shows a hardware board, power supplies, debugger, measurement equipment and dynamic test panels that together are expensive and not portable.

Project costs can escalate to produce and ship additional test panels to the global development sites. A typical physical development bench cost ranges from $30,000 to $150,000. Likewise, the limited nature of prototypes makes it impractical to send the prototypes to many different locations in the early development stages of a project.

As a result of equipment, shipping costs, and customs delays, access to the hardware development bench in some global development sites is often very limited which in turn limits the amount of development that a software engineer can accomplish on the actual hardware.

In contrast, VPs make the entire test bench just another piece of software. This allows worldwide development teams to quickly begin creating target firmware rather than trying to replicate, or share, a physical bench system.

For example, consider the virtual test bench shown in Figure 8 below . There is little cost incurred, with the exception of tool licensing, in replicating and deploying a virtual test bench to software developers in any global location once the initial development is completed.

Even after the hardware test panel has been produced, capturing the system and test bench in a virtual environment enables higher productivity and a better use of engineering resources.

Figure 8. The virtual test bench is easily deployed to anywhere in the world, and is easily modified as requirements change.


Changes to a VP test bench are also easier than changes to a hardware bench. Consider again the pictures from the test panel (Figure 7 earlier) and the VP test bench (Figure 8 above). As a product evolves there may be requirements for additional features or capabilities.

New features or capabilities will need new test inputs or output indicators. Changes to the physical system require time and material cost for each system, while virtual bench changes are made once and deployed as a software update to all users.

Availability of the VP is better than physical hardware due to the reduced costs of replication, ability to distribute to a worldwide team, and the flexibility to change.

Repeating design simulations
Repeatability allows the simulation to react in the same manner each run; the determinism allows the “change under test” to be easily seen. While most good embedded software projects use unit and regression testing, running the tests is often a fairly labor-intensive process. Tight resource levels (people and equipment) result in sporadic testing late in the development process, when errors are most costly.

In a physical bench environment, achieving repeatability requires elaborate tool interconnections to power on/off the system, program test panel switches, control pulse width modulation (PWM) signals, monitor analog outputs, and provide run-time control of the hardware unit. The product's analog output and PWM signals must also be instrumented in a way to allow for data capture and logging.

Typical interconnections require tools like LabView, various real-time cards, multiple computers, and software debuggers and analyzers. Closed loop control of the physical prototype often adds tools like Matlab for plant models tied in with the real-time driver and capture cards. For each product, and possibly for each test, unique harness connections may be required.

Product flash memory may need to be re-programmed, which is often a time-consuming step. The complex setup for automation of each physical bench often leads to possessiveness of the bench: only one person is allowed per automated test bench, for fear that a moved wire or piece of equipment renders the entire bench inoperable.

Next in Part 2: The importance of testability in virtual prototyping.

References
1. Chandrashekar, M.S.; B.C., Manjunth; Lumpkin, Everett; Winters, Frank, “Adaptation of a Virtual Prototype for Systems and Verification Engineering Development,” SAE Convergence 2008, paper 2008-21-0043.

2. Engblom, Jakob, “Debugging Multiprocessor Code,” EE Times, 07/21/2008.

3. Garg, Amit, “Fast Virtual Prototyping for Early Software Design & Verification,” IP-based SoC Design Conference, Dec. 2006.

4. Ecker, Wolfgang and Rainer Dömer, Hardware-dependent Software: Principles and Practice, Springer, 2009, p. 28.

5. Schirrmeister, Frank and Filip Thoen, “Hardware Virtualization for Pre-Silicon Software Development in Automotive Electronics,” SAE 2008, paper 09AE-0314.

6. “System Prototypes: Virtual, Hardware or Hybrid?” Summary of a DAC 2009 panel discussion on the CoWare-sponsored blog site.

7. Serughetti, Marc, “Virtual Platforms for Software Development — Adapting to the Changing Face of Software Development,” presented Nov. 2005.

8. Alford, Casey, “Virtual prototyping benefits in safety-critical automotive systems,” Hanser Automotive, March/April 2006.

Everett Lumpkin is Senior Function Design Methodology and Automation Engineer with Delphi Corp. He has 20 years experience in microcontroller development, microcontroller simulation, embedded software and independent test/verification. Since 2002 he has been the technical team lead providing virtual prototypes for automotive safety systems, powertrain, power electronics, and body computers. Everett holds a BS in Computer and Electrical Engineering from Purdue University.

Casey Alford is the Director, Field Engineering & Technical Services with Embedded Systems Technology. Casey has been creating virtual prototypes for 5 years (currently at Embedded Systems Technology and previously at VaST Systems Technology). Prior experiences include embedded software engineering, serial network drivers, and network protocol tools used widely in the automotive industry. Casey holds a BSE from the University of Michigan in Computer Engineering.

The authors wish to thank the following people for their significant content contributions: Graham Hellestrand, Frank Winters, Patricia Hughes and Jakub Mazur.
