Virtual prototypes are much more conducive to design automation because the environmental stimulus is just another component of the simulation. Careful control of the stimuli results in deterministic behavior of the product simulation.
Sometimes tool integrations are still required (for example a debugger, Matlab or LabView), but complexity is reduced to the exchange of information via control APIs and configuration files rather than cables, networks, and manual setup of test panels.
Users of virtual prototypes can automate the testing process with software scripts, eliminating the need for custom equipment and cables. Flash memory is simply loaded with a program image rather than re-flashed with complicated tools on the real hardware. Finally, the virtual prototype is truly time-shareable: each user can have a custom configuration without fear of breaking another's test setup.
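The kind of scripted regression run described above can be sketched as follows; the `VirtualECU` class, its methods, the firmware image name, and the test names are all hypothetical stand-ins for a real VP control API:

```python
# Hypothetical sketch of a scripted nightly regression on a virtual ECU.
# The VirtualECU class and its methods are illustrative, not a real API.

class VirtualECU:
    """Minimal stand-in for a virtual prototype control API."""
    def __init__(self, config):
        self.config = config
        self.flash = None

    def load_flash(self, image):
        # Loading flash is just assigning the image -- no re-flash tools.
        self.flash = image

    def run_test(self, script):
        # A real VP would execute the stimulus script deterministically;
        # here we simply report success for demonstration.
        return (script, "PASS")

def nightly_regression(configs, image, scripts):
    """Run every test script against every ECU configuration."""
    results = []
    for config in configs:
        ecu = VirtualECU(config)
        ecu.load_flash(image)
        for script in scripts:
            results.append((config,) + ecu.run_test(script))
    return results

results = nightly_regression(
    configs=["ECU_variant_A", "ECU_variant_B"],    # two ECU configurations
    image="airbag_fw_v1.2.bin",                    # hypothetical firmware image
    scripts=[f"test_{n:02d}" for n in range(14)],  # 14 nightly scripts
)
print(len(results))  # 2 configurations x 14 scripts = 28 runs
```

Because the stimulus is part of the simulation, the same loop can run unattended every night with no cables, test panels, or lab bench time.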
Running these tests nightly provides much faster feedback on code changes. For one particular airbag controller project, the engineering team implemented a daily execution of 14 scripts on two different ECU configurations. Developers were required to supplement those tests, for each new or changed ECU function, with a self-test that exercised the functionality.
Test examples include accelerometer and input sensor data filtering, deployment loop configuration and activation, event recording, serial EEPROM read/write, and manufacturing diagnostics. The productivity of the team was accelerated because each team member could run tests prior to and after a firmware change and did not need to be an expert in all areas of the ECU.
The hardware and software aspects of the VP make this possible. Even though physical prototypes were available, there was no easy way to automate running test cases on them.
Assigning an engineer to run the physical tests each day would have pulled that person away from developing the product; running the VP test scripts nightly, however, required no additional resources beyond setting up the test environment.
Increasing product complexity demands continuous unit and regression testing. Virtual Prototypes facilitate this repeatability testing due to close integration of the environmental stimulus and deterministic behavior.
Virtual prototype testability
Testability refers to the capability of the virtual prototype to self-check the firmware and to allow testing where use of physical prototypes may be unsafe or inaccessible. Testing and development of a product requires putting the product through its possible failure modes.
For some products, such as airbag deployment, testing normal operations in full context may require destruction of the end product. Just as model-based design has been used in other disciplines to test systems that are unsafe, or cost prohibitive, VPs allow this type of testing to be applied to embedded control systems.
In these cases, VPs can be particularly valuable to engineers even if real hardware is available. Using models to test destructive or dangerous conditions is not new to engineering.
In vehicle crash tests, for example, it used to be very common to have to crash many prototypes of the same vehicle to see how it might perform. Today, engineers perform initial and repetitive crash tests virtually without the destruction of an actual vehicle by using injected acceleration waveforms.
VPs bring this usefulness to embedded design, both in the early design phases and also late in the development stage where hardware and software have to be put through various potential failure modes.
Allowing engineers and verification personnel to run experiments in the virtual environment before committing to the physical product saves money and increases safety. Developers can then be assured the design will work prior to committing to a real physical product.
Much more system insight can be gained by an engineer who is able to develop the target product in the context that it will be used. Take for example the graphic shown in Figure 9 below that shows the virtual world for an adaptive cruise control system. Access to the physical controller hardware doesn't make the development process easier.
Figure 9. The multi-vehicle adaptive cruise control system simulation allows validation in a multiple (virtual) vehicle context and avoids expensive design iterations.
Due to the size, cost, and complexity of the entire system (in this case several vehicles) most of the physical hardware firmware development takes place in isolation from the rest of the systems using a pre-generated set of test vectors. While the engineer knows if changes to the code will pass the pre-set test cases, it is difficult to assess the change on the whole system.
In the virtual world, this type of in-context development can be achieved far more cheaply and safely. Once the design is refined and tested, it can simply be validated in the multiple (virtual) vehicle context, avoiding expensive design iterations.
Virtual prototypes are the only cost-effective path forward for this level of development, and the engineering value of the VP remains high even after the hardware becomes available. Simply put, in this case development with physical prototypes is impractical, while a multiple-vehicle grid simulation of virtual prototypes is capable of validating the design.
The second aspect of testability is the use of extensive warnings which monitor the execution of the firmware running across the MCU and ASIC simulation models. For example, an internal EEPROM/Flash peripheral model knows when it is in the write state and thus an intervening read of the same EEPROM/Flash block returns invalid data.
Internal to physical EEPROM/Flash devices, high voltages supply the internal memory cells during write sequences. During the several millisecond write state, data reads are not electrically possible.
The physical part offers no indication of the illegal read during a write condition, and incorrectly written firmware may compute critical algorithms using invalid data. In the VP, the model of the EEPROM/Flash memory is constructed to output a message under this invalid condition:
WARNING: EEPROM1 data read from address 0xXXXX while in the write state.
This message alerts the software engineer to the specification violation, a system problem that could easily remain latent in normal product development. The failure mode caused by the invalid data read often manifests in an obscure way, which makes debugging the issue very difficult and time consuming.
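A peripheral model can implement this read-during-write check with a simple state flag. The sketch below is illustrative only; the class and method names and the 0xFF "invalid data" value are assumptions, not vendor model code:

```python
import warnings

class EepromModel:
    """Illustrative EEPROM model that flags reads during a write cycle."""
    def __init__(self, name="EEPROM1", size=0x1000):
        self.name = name
        self.mem = bytearray(size)
        self.in_write_state = False  # set for the multi-ms write window

    def begin_write(self, addr, value):
        self.mem[addr] = value
        self.in_write_state = True   # cleared when the write completes

    def complete_write(self):
        self.in_write_state = False

    def read(self, addr):
        if self.in_write_state:
            # Real silicon would silently return invalid data here;
            # the model makes the violation visible instead.
            warnings.warn(
                f"{self.name} data read from address {addr:#06x} "
                "while in the write state.")
            return 0xFF  # model "invalid data"
        return self.mem[addr]

ee = EepromModel()
ee.begin_write(0x0010, 0x42)
ee.read(0x0010)       # warns: read during the write state, returns 0xFF
ee.complete_write()
ee.read(0x0010)       # returns 0x42 normally
```

The physical part gives no such indication, which is exactly why the model's warning is valuable.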
The number of warnings provided in a peripheral or ASIC simulation varies with the complexity of the model. Typically 2 to 20 warning messages are included in each microcontroller peripheral or external ASIC model.
Many of these warnings monitor subtle requirements and caution statements in the specification. The software developer thus benefits as if an omniscient field application engineer (FAE) for the device were watching the code development.
The simulation essentially contains an "FAE in the box" that constantly checks the software execution for correct interaction with the hardware design. Without the aid of the warnings, the author may unknowingly violate requirements, potentially resulting in a quality or field-return issue.
ASIC self-checks have been known to detect a split SPI (serial transmission) device transfer, a discrete pin left in an improper state for device communication, and conflicts between two active drivers on the same schematic signal.
VPs frequently catch unique firmware programming errors that escape code reviews, static analysis tools or even bench debugging. This makes a virtual prototype a good complement to (and not a competitor with) existing verification techniques.
The errors found are often complex, subtle driver-related initialization order or MCU specification violations. Several studies on software dependability report fault densities of 2 to 75 bugs per 1000 lines of executable code. Drivers, which typically comprise 70% of the operating system code, have a reported error rate that is 3 to 7 times higher. 
Even basic software created by the MCU suppliers, compliant to Automotive Open System Architecture (AUTOSAR), has been found to improperly use the MCU hardware when exercised on the virtual prototype.
VPs detect hardware driver initialization-order issues such as: turning on a timer prior to proper initialization, powering off a UART or SPI while it is still receiving data, or switching a timer's clock input frequency while the timer is in active use. An example warning for the timer clock input frequency error reads:
WatchTimer: Clock frequency of the watch timer is modified while the WTM register bits 1 and 0 are set, resulting in possible loss of timer accuracy.
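Such a check can be attached directly to the model's clock-select write. In the following sketch, the WTM bit layout follows the quoted warning text, while everything else (class and method names, frequencies) is hypothetical:

```python
import warnings

class WatchTimerModel:
    """Illustrative watch-timer model. The WTM register layout here
    (bits 1 and 0 enable the timer) follows the warning text above;
    everything else is a hypothetical sketch."""

    def __init__(self):
        self.wtm = 0x00        # mode register
        self.clock_hz = 32768  # current clock input frequency

    def write_wtm(self, value):
        self.wtm = value & 0xFF

    def set_clock_frequency(self, hz):
        if self.wtm & 0x03:    # timer running while its clock changes
            warnings.warn(
                "WatchTimer: Clock frequency of the watch timer is modified "
                "while the WTM register bits 1 and 0 are set, resulting in "
                "possible loss of timer accuracy.")
        self.clock_hz = hz

# A correct driver disables the timer before touching the clock:
wt = WatchTimerModel()
wt.write_wtm(0x00)               # timer disabled
wt.set_clock_frequency(1000000)  # no warning
wt.write_wtm(0x03)               # timer enabled; changing the clock now warns
```

The check costs nothing at run time on real silicon because it exists only in the model.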
ASIC models also embed warning messages. Consider an output driver ASIC that provides for a load current measurement, but while the ASIC is in that measurement state, a die temperature rise is expected.
In this particular case, the thermal design of the package only allowed short-duration current measurements on a single output driver at a time. The model for this particular ASIC is built so that if the measurement duration is exceeded, or the number of measured channels is exceeded, a warning message is displayed to the software developer.
This eliminates assumptions about how the IC is utilized in the system, resulting in adherence to the specification throughout the development process.
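One way a model might enforce such limits is to track measurement start times against simulation time. The limits below (one channel at a time, a 2 ms maximum duration) are invented for the sketch and are not a real device specification:

```python
import warnings

class OutputDriverAsicModel:
    """Illustrative ASIC model enforcing current-measurement limits.

    The limits (one channel at a time, 2 ms maximum duration) are
    assumptions for this sketch, not a real device specification.
    """
    MAX_MEASURE_MS = 2.0
    MAX_CHANNELS = 1

    def __init__(self):
        self.measuring = {}  # channel -> simulation time measurement began

    def start_measurement(self, channel, now_ms):
        if len(self.measuring) >= self.MAX_CHANNELS:
            warnings.warn(
                f"ASIC: current measurement started on channel {channel} "
                "while another channel is already being measured; "
                "package thermal limits may be exceeded.")
        self.measuring[channel] = now_ms

    def tick(self, now_ms):
        # Called as simulation time advances.
        for channel, start in self.measuring.items():
            if now_ms - start > self.MAX_MEASURE_MS:
                warnings.warn(
                    f"ASIC: current measurement on channel {channel} has "
                    f"exceeded {self.MAX_MEASURE_MS} ms; die temperature "
                    "rise may exceed the package thermal design.")

    def stop_measurement(self, channel):
        self.measuring.pop(channel, None)
```

Because the simulator knows the exact simulation time, the duration check is deterministic rather than dependent on a thermocouple on the bench.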
Fine tuning of the warning and error messages in the VP is common. Just like static analysis tools, too many or too frequent messages cause the software developer to de-sensitize and miss underlying problems.
However, when these warning messages are available and carefully reviewed, it provides a level of checking that is not possible on the bench or with static checkers.
Warnings for environmental conditions (such as modeling a long delay time due to operation over a temperature range) may prevent a product from intermittent failures. The VP warnings and errors help remove this type of risk from the project.
The VP accurately models interrupts and interrupt priorities, and because of the deterministic nature of simulation, a symptom is often repeatable in the VP but may be elusive on actual hardware.
In one example, the addition of a new interrupt to the system resulted in a second undefined interrupt problem. Investigation revealed that the undefined interrupt occurred when 4 interrupts became pending and the software developer had incorrectly added the new interrupt to the lowest priority. The repeatability and accuracy of the VP exposed the problem that would have been very difficult to set-up and produce on the bench.
A failure to meet real-time deadlines is easy to induce in the VP. In one example, the parameters for the write/program operation time were set to the specification limits for the EEPROM device model in order to verify the corner cases. The firmware caused unanticipated retry loops which cascaded to COP (Computer Operating Properly) timeout and reset of the system.
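The arithmetic behind this corner case can be illustrated with a back-of-the-envelope check; all of the numbers (write times, retry count, COP window) are hypothetical:

```python
# Sketch: checking whether EEPROM retry loops can exceed a COP timeout when
# the model's write time is set to the specification limit. All numbers
# (5 ms typical, 10 ms max write time, 3 retries, 25 ms COP window) are
# hypothetical values for illustration.

def worst_case_write_ms(write_time_ms, retries):
    """Total blocking time if every attempt must be retried."""
    return write_time_ms * (1 + retries)

COP_TIMEOUT_MS = 25.0

typical = worst_case_write_ms(write_time_ms=5.0, retries=3)      # 20 ms
spec_limit = worst_case_write_ms(write_time_ms=10.0, retries=3)  # 40 ms

print(typical <= COP_TIMEOUT_MS)     # True  -- passes on typical silicon
print(spec_limit <= COP_TIMEOUT_MS)  # False -- resets at the spec corner
```

On the bench, only typical silicon is available; in the VP, the write time is a model parameter that can be set to the specification limit with one configuration change.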
Another example involved the incomplete motion of a window lifter. The physical bench used a spinning gear driven by a motor, and it appeared to be working correctly because it rotated for a length of time.
The virtual prototype of the same system, however, translated the spinning gear into a representation of window travel, matching the ratios of the actual system. The software engineers were surprised to realize that the window travel was incomplete and the window stopped at 74% open.
While the error was simply an incorrect constant in the firmware, the system engineers were happy to discover it prior to shipment to the OEM (Original Equipment Manufacturer) and installation of the faulty body controller in a vehicle.
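The translation the VP performed, from gear rotation to window travel, amounts to a ratio calculation. The gear ratio, travel figures, and revolution counts below are hypothetical, chosen so that an incorrect count constant produces roughly the 74% symptom described:

```python
# Sketch: translating motor/gear rotation into window travel, as the VP
# did for the window-lifter bench. The ratios and travel figures are
# hypothetical, chosen so the bug (an incorrect count constant) is visible.

GEAR_RATIO = 50.0              # motor revolutions per drum revolution
TRAVEL_PER_DRUM_REV_MM = 60.0  # window travel per drum revolution
FULL_TRAVEL_MM = 450.0         # fully closed -> fully open

def window_open_percent(motor_revs):
    travel_mm = (motor_revs / GEAR_RATIO) * TRAVEL_PER_DRUM_REV_MM
    return min(100.0, 100.0 * travel_mm / FULL_TRAVEL_MM)

CORRECT_REV_COUNT = 375  # revolutions needed for full travel
BUGGY_REV_COUNT = 277    # hypothetical incorrect firmware constant

print(round(window_open_percent(CORRECT_REV_COUNT)))  # 100
print(round(window_open_percent(BUGGY_REV_COUNT)))    # 74 -- incomplete travel
```

The bench showed only a rotating gear; the VP's translation to the physical quantity of interest made the shortfall obvious.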
Testing using a VP can avoid problems with undefined silicon behavior. In a somewhat subtle situation, the MCU manual cautioned the user to disable a timer when switching the compare mode from interval count to capture mode.
The MCU did not actually contain any type of interlock, and thus the behavior for such an improper access was officially undefined. The silicon (and user manual) expects the software developer to avoid a potential metastability issue in the silicon design, but does not define (nor can it define) what occurs in actual silicon if the caution statement is violated.
The original MCU model detected this improper mode change and provided a system warning. In this case the model found a problem with the application code, but it also deviated from silicon behavior while meeting the specification. The original model refused to modify the timer compare mode and output an error message:
"Cannot modify timer capture mode while the timer is active. Change is ignored."
The model deviation (refusing to change the mode) caused firmware that worked on the bench to be unable to command capture mode. Ultimately it was concluded that both the firmware and model must be changed, and the warning was modified to state:
"Timer capture mode should only be modified when the timer is disabled."
Reserved and illegal memory location errors/warnings catch a surprising number of firmware problems, sometimes serving as the canary-in-the-coal-mine detection of a complex DMA setup gone wrong. Some actual examples include:
1) Incorrect address for DMA,
2) Access to a 16th PWM channel when only 15 PWM channels were present in the specific hardware variant,
3) Null pointer writes in communication stacks,
4) Invalid peripheral register accesses in pulse mode output software, and,
5) Write to a CAN timestamp read-only register due to the asymmetrical mailbox layout of the hardware registers.
In nearly all of these cases the illegal access had escaped other forms of checking such as code review, static analysis (lint), and bench testing. Lint tools do not have built-in knowledge of the hardware memory map, and bench testing often did not reveal the issue because the symptoms were benign or hidden.
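A minimal sketch of such a memory-map check follows; the address regions are hypothetical, and a real VP would derive them from the MCU variant's actual register map:

```python
import warnings

class MemoryMapChecker:
    """Illustrative memory-map check for reserved/illegal accesses.

    The address regions are hypothetical; a real VP would derive them
    from the MCU variant's actual register map.
    """
    def __init__(self, valid_regions):
        self.valid_regions = valid_regions  # list of (start, end, name)

    def check(self, addr, access="write"):
        for start, end, name in self.valid_regions:
            if start <= addr <= end:
                return name
        warnings.warn(f"Illegal {access} to unmapped address {addr:#010x}")
        return None

mcu_map = MemoryMapChecker([
    (0x00000000, 0x0003FFFF, "flash"),
    (0x20000000, 0x2000FFFF, "ram"),
    (0x40000000, 0x40000EFF, "peripherals"),  # last valid PWM channel ends here
])

mcu_map.check(0x20000000)  # valid RAM access: no warning
mcu_map.check(0x40000F00)  # past the last PWM channel: warning issued
```

Unlike a lint tool, the checker fires on the actual address computed at run time, so it also catches accesses that are only wrong for a particular hardware variant.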
In summary, the testability of the VP far exceeds the physical prototype. It allows better setup of experiments that would be otherwise unsafe or destructive. Warnings built into the models are nearly as good as a dedicated device FAE assisting in the firmware development. The VPs can self-check for several classes of problems such as:
* Improper device initialization
* Improper communication transfers
* Device driver deadlines not met, and
* Reserved and illegal memory accesses
Virtual Prototype Acceptability
In the context of this article, acceptability is the notion that the virtual prototype must be generally accepted for non-technical as well as technical reasons. Software developer productivity increases due to visibility, controllability, and improved quality are often intangible to decision makers.
It is easy to see a sophisticated test rack that occupies a busy lab space. Virtual prototype software licenses are hidden on developer desktops and may be only visible to management during budget season.
It is important to address the reasons users and management dismiss virtual prototypes, such as: "it's different," "simulation is too slow," "it's not as accurate as hardware," "models are not available," and "it's not real hardware."
An important consideration for the systems, software, and verification engineers is to provide the same tools in the virtual environment as used on the development bench. “The development bench starts as a virtual test bench with an emphasis on providing an operator interface and a scriptable test environment that will initially be used with the virtual prototype, then later deployed to the physical ECU” [1. Chandrashekar].
An environment that includes the debuggers, logic analysis, scripting, and GUI environment that matches, or at least mimics, the development bench helps to address the "it's different" concerns of the users.
Creating models with an appropriate amount of detail, yet retaining significant simulation speed, is a difficult problem that the modeling team must address. Generally, software developers require at least 10% of the real product speed in the simulations.
To achieve 10% to 50% of real product execution speed, the choice of model abstraction level is of higher importance than the choice of modeling language. The abstraction (detail) level choice is unfortunately beyond the scope of this paper, but the modeling team should always consider simulation speed as a top goal while building the models.
Losing sight of the speed goal, even on part of a project, can cost the entire use of the VP. Even the best modeling practices cannot overcome a slow computer; thus the use of modern, off-the-shelf PCs (roughly $1000) is strongly encouraged.
Development benches are more accurate, by definition, than the virtual prototype. Nonetheless, they are still subject to measurement and emulation errors induced by intrusive debug, in-circuit emulators and instrumentation cable impedances.
Virtual prototypes eliminate these errors, but are subject to accuracy issues where the model does not match the silicon behavior. This is often the result of creating the model from the specification, rather than a detailed gate or register transfer level.
Frequently, the issue of accuracy is really one of ambiguity where the silicon designer and the model designer simply interpreted the written specification differently. In most situations, investigation of the problem leads to a model correction to ensure compliance with the silicon. Frequently clarification of the specification is also required.
It is usually not acceptable to tweak the firmware to work on both the model and the real hardware. In the long term, the authors feel that executable specification methodologies applied to hardware design will improve model inaccuracies, but are beyond the scope of this paper.
An example testimony by one of our end users illustrates specification non-compliance in purchased, well exercised firmware:
“We found that the vendor-supplied CAN software was accessing a CAN peripheral register in 8-bit mode when 16-bit mode is the only officially recommended MCU access method for that register.”
This "discovery" spawned conversations between the MCU supplier and the communication stack supplier, and should fix (or prevent) an issue that could have been very difficult to find empirically with real silicon, absent the console warning message.
While there is no known silicon issue with the current micro, there's no guarantee that there wouldn't be an issue with future MCU silicon had the same access method been used.
Thus, having an accurate model of the microcontroller (CPU and peripherals) and ASICs, particularly one with a rich set of assert/constraint checks on access, usage, and so on, is an extremely valuable tool for achieving zero defects in software.
This example shows how virtual prototypes offer an alternative development environment that complements the bench, instead of competing with the bench.
A final acceptability argument is that the models are not commonly available. Business arrangements, company alliances, and model-simulator incompatibilities are often a function of industry vertical/horizontal integration, and the authors acknowledge that model availability remains a significant problem impeding virtual prototype usage. In [3] the author details representative modeling abstractions and creation times for a System on a Chip (SoC) platform.
Unfortunately, model development is often not planned or requested until a silicon error impedes firmware development. At this point the project is already hopelessly off course.
End-user mindset often aggravates the situation because schedules are built around hardware deliveries without software group involvement and risk assessment.
Industry focus on only the pre-silicon benefits of the virtual prototype also causes a lack of planning for model development because it perpetuates the myth that the virtual prototype has little value after hardware is available.
The authors hope that this discussion on the long-term benefits of virtual prototypes will encourage acceptance and help justify the business case for future model developments.
With respect to the “it's not real hardware” argument, the first virtual prototype was easier to accept when management could touch and feel a real product that had been developed in the environment.
The creation and verification of the firmware for an automotive rollover detection module was first accomplished using a virtual prototype. This first virtual prototype uncovered ten major software errors.
Firmware was then integrated into the physical controller in the last week of the schedule. Executing the firmware on the actual hardware uncovered only one additional error, which was traced to an error in a hardware model.
The key demonstration, however, was an interactive GUI that could induce a car rollover event, much as a scaled car model containing the same firmware would in an actual product. The pilot project's success overcame the inhibition that "virtual hardware" was "pretend hardware."
In summary, VPs are much more than a bring up platform. They provide:
(1) Improved visibility of internal signals, inter-chip communication, and task/interrupt and operating system behavior
(2) Synchronous pause and restart of the simulation
(3) Scriptable control and fault injection
(4) Rapid deployment at reduced development bench expense
(5) Superior testability because it is a deterministic platform readily adapted for unit and regression testing
(6) Increased safety and access to multi-ECU system interactions
(7) Extensive warnings that monitor execution of the firmware and continuously compare to hardware design limitations
(8) Development and debugging that are easily deployable to global resources
By using VPs through the entire development process, higher quality products can be created in less time and the VP development cost is recaptured more effectively than if the VP is only used as a pre-hardware development vehicle.
An end-user summarized the use case when developing CAN drivers:
"I was able to create different test cases which were difficult to do on the bench, and the test cases were repeatable. Due to the warning messages, debugging became easy; development happened faster, and confidence in the code increased. Only after completing development in simulation was the final testing moved to the real bench."
To read Part 1, go to "The use cases for virtual prototyping."
1. Chandrashekar, M.S.; Manjunth, B.C.; Lumpkin, Everett; Winters, Frank, "Adaptation of a Virtual Prototype for Systems and Verification Engineering Development," SAE Convergence 2008, paper 2008-21-0043.
2. Engblom, Jakob, "Debugging Multiprocessor Code," EE Times, 07/21/2008.
3. Garg, Amit, "Fast Virtual Prototyping for Early Software Design & Verification," IP-based SoC Design Conference, Dec. 2006.
4. Ecker, Wolfgang and Rainer Dömer, Hardware-dependent Software: Principles and Practice, Springer, 2009, p. 28.
5. Schirrmeister, Frank and Filip Thoen, "Hardware Virtualization for Pre-Silicon Software Development in Automotive Electronics," SAE 2008, paper 09AE-0314.
7. Serughetti, Marc, "Virtual Platforms for Software Development — Adapting to the Changing Face of Software Development," presented Nov. 2005.
8. Alford, Casey, "Virtual prototyping benefits in safety-critical automotive systems," Hanser Automotive, March/April 2006.
Everett Lumpkin is Senior Function Design Methodology and Automation Engineer with Delphi Corp. He has 20 years' experience in microcontroller development, microcontroller simulation, embedded software, and independent test/verification. Since 2002 he has been the technical team lead providing virtual prototypes for automotive safety systems, powertrain, power electronics, and body computers. Everett holds a BS in Computer and Electrical Engineering from Purdue University.
Casey Alford is the Director, Field Engineering & Technical Services with Embedded Systems Technology. Casey has been creating virtual prototypes for 5 years (currently at Embedded Systems Technology and previously at VaST Systems Technology). Prior experiences include embedded software engineering, serial network drivers, and network protocol tools used widely in the automotive industry. Casey holds a BSE from the University of Michigan in Computer Engineering.
The authors wish to thank the following people for their significant content contributions: Graham Hellestrand, Frank Winters, Patricia Hughes and Jakub Mazur.