
Leveraging virtual hardware platforms for embedded software validation

Bill Neifert, Carbon Design Systems

June 25, 2008


The increasing pressure on software-development schedules for embedded systems has driven many companies to adopt system prototyping strategies. Typically, these prototypes are built from real hardware either as a number of field programmable gate arrays (FPGAs) on a custom-built board or a pre-built solution such as a hardware emulator.

These hardware-based solutions suffer from a number of limitations, however. High cost, low debuggability, and difficult-to-replicate corner cases all combine to limit the overall value of a physical prototype.

A new generation of prototypes is arriving to address these limitations and give software designers even earlier access to a development platform. Virtual hardware prototypes help pull software design earlier in the system schedule and cost less than their hardware equivalents. This article, the first in a two-part series, will discuss the merits of various virtual prototyping approaches. The follow-up article will include a case study that walks you through a virtual prototype's construction and use. Part 2 will appear on Embedded.com in June.

Why create a virtual prototype?
A virtual prototype is exactly what its name describes: a prototype of a system that only exists virtually, or in this case, in software. A properly constructed virtual prototype can perform most of the same tasks as an actual system. It can boot an operating system and run the driver code and application software. Some virtual prototypes feature mockups of the physical device itself and allow design teams to press all of its virtual buttons. A good virtual prototype is an invaluable tool to explore all of the facets of the system long before it's built.

The virtual world is easier to control than the real world. Corner cases that can take hours to set up with physical hardware often can be constructed in minutes in a configurable software environment. Once the test scenario is established, it can be repeated to allow for the multiple debugging iterations necessary to identify and fix system issues. If the designer finds that the problem lies in another area or with another designer's code, the virtual prototype can be sent to the appropriate engineer as an e-mail attachment.

Virtual prototypes are more easily debugged than their physical counterparts. When a problem is detected, it's a simple task to simultaneously halt the execution of both the hardware and software. Halting execution is not nearly as simple with real hardware and is sometimes even impossible. Once the system is stopped, debugging the virtual prototype is simpler than debugging the real hardware as well. Unlike physical hardware where visibility into the model is confined to the visible pins and the few registers that are made available via JTAG, virtual prototypes have visibility into all components of the system. In fact, virtual prototypes often find and help debug problems when the actual hardware has already been built.

How are virtual prototypes created?
A variety of options are available for creating virtual prototypes. SystemC is an IEEE standardized language targeted at system-level modeling. Armed with the language reference manual (LRM) and a freely downloadable proof-of-concept simulator, many engineers have created full system prototypes. Complete virtual-prototyping environments are also available from a number of companies. Environments from ARM, CoWare, Synopsys, and Virtutech tend to be the most common. These companies will supply models and services to help complete a virtual-system prototype.
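
To give a flavor of the language, here is a minimal sketch of a SystemC model, in this case a clocked 32-bit register with a write enable. The module name, ports, and reset value are invented for illustration and aren't drawn from any particular IP:

    // Minimal SystemC sketch: a clocked 32-bit register with write enable.
    // Module name, port list, and reset value are illustrative only.
    #include <systemc.h>

    SC_MODULE(StatusReg) {
        sc_in<bool>          clk;
        sc_in<bool>          write_en;
        sc_in<sc_uint<32> >  data_in;
        sc_out<sc_uint<32> > data_out;

        void update() {
            if (write_en.read())
                value = data_in.read();
            data_out.write(value);
        }

        SC_CTOR(StatusReg) : value(0) {
            SC_METHOD(update);          // evaluated on each rising clock edge
            sensitive << clk.pos();
        }

      private:
        sc_uint<32> value;
    };

A testbench instantiates modules such as this one, binds their ports to signals, and calls sc_start() to run the simulation.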

Regardless of which virtual-prototype environment is chosen, however, all environments share one fundamental need: They all need models to represent the functionality of the various system components. Models are needed for each portion of the design touched by the software. While instruction-set models for processors are readily available from the processor vendor, obtaining models for the rest of the components in the system can be less straightforward.

Filling in the missing pieces
There are three options for obtaining models for the remaining components of a virtual prototype:

1. Designers can obtain them from an intellectual property (IP) or prototype vendor;

2. They can write the models themselves; or,

3. They can generate them automatically from the hardware implementation.

Many pieces of IP, especially commodity IP such as memories, interrupt controllers, and other simple logic, already have virtual models readily available. Care must be taken to be sure that the IP model is compatible with the chosen virtual-prototype environment. Standards in this area have been slow to emerge and are not universally followed.

Manually creating the models for a virtual prototype helps ensure that the model will fit in with the virtual-prototype environment. Hand-written models execute quite quickly, and if no model is available from the IP vendor or in any other implementable form, writing one by hand may be the only path to obtaining a virtual model.

Unfortunately, manually created models require a large effort to create and validate. A great deal of time and work may need to be invested to learn the requirements for the model, code it, and then verify that it properly executes the required functionality when integrated with the rest of the system. When the design requirements change, as they inevitably seem to do, the model must be changed and then revalidated.

Automatically created virtual models enable the designer to reuse components from previous designs. Model compilers compile the hardware description language used to implement the component directly into a high-speed, implementation-accurate virtual model that can plug directly into multiple prototyping environments. Automatically generated models have the benefit of complete implementation accuracy, which means that the only step necessary to account for design changes is a model recompilation after the register transfer level (RTL) is modified.

In many ways, a model compiler is just like a traditional software compiler. Its input is a language description that must be parsed and its output is an object or library file. As you may expect, though, many of the steps required to compile a hardware description are quite different from those taken when compiling software. The underlying theories, however, apply to hardware and software compilers alike.

As Figure 1 shows, a model compiler starts by parsing the RTL description of the hardware module.

This parsed description is then subjected to a number of different optimizations. These are initially performed on the individual logical blocks inside of the hardware module. The optimizations range from simple algorithms, such as dead-logic removal and constant propagation, to more complicated approaches, such as duplicate-logic detection and the replacement of certain logical operations with more efficient equivalents.
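
As a rough illustration of what one of the simpler passes does, the following sketch folds constants through a toy gate-level netlist node; a real model compiler applies the same idea, among many others, across full RTL. The Node representation is invented for this example:

    // Hedged sketch of constant propagation on a toy netlist node.
    // An AND gate with a constant-0 input folds to constant 0, which in
    // turn leaves its other fan-in as dead logic to be removed.
    #include <memory>

    enum class Op { Const, And, Input };

    struct Node {
        Op op;
        bool value;                    // valid when op == Op::Const
        std::shared_ptr<Node> a, b;    // fan-in for Op::And
    };

    std::shared_ptr<Node> constant(bool v) {
        return std::make_shared<Node>(Node{Op::Const, v, nullptr, nullptr});
    }

    std::shared_ptr<Node> fold_and(const std::shared_ptr<Node>& n) {
        if ((n->a->op == Op::Const && !n->a->value) ||
            (n->b->op == Op::Const && !n->b->value))
            return constant(false);                 // 0 AND x == 0
        if (n->a->op == Op::Const && n->a->value)
            return n->b;                            // 1 AND x == x
        if (n->b->op == Op::Const && n->b->value)
            return n->a;
        return n;                                   // nothing to fold
    }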

Once each of the blocks has been subjected to local optimizations, the compiler logically elaborates the entire module. This elaboration phase exposes numerous additional optimization opportunities as the same algorithms that were applied to the individual logic blocks are now applied to the module as a whole. After the logic has been completely optimized, it's scheduled into a set of execution sequences depending upon when it's needed by the hardware system.

For example, if a block of logic is only ever accessed by logic that runs at 50 MHz, there is no need to constantly recalculate that logic based on the 250-MHz logic that may feed into it. These execution sequences are finally mapped into code streams, which can be compiled into a single linkable object file.
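
The sketch below shows the shape of that multi-rate scheduling for the 50-MHz/250-MHz case above; eval_fast() and eval_slow() are hypothetical names standing in for the compiled code streams:

    // Hedged sketch of multi-rate scheduling: the 50-MHz code stream runs
    // once for every five 250-MHz cycles instead of being recalculated on
    // every fast tick.
    static void eval_fast() { /* compiled 250-MHz logic runs here */ }
    static void eval_slow() { /* compiled 50-MHz logic runs here */ }

    void run(int fast_cycles) {
        for (int t = 0; t < fast_cycles; ++t) {
            eval_fast();          // every 250-MHz cycle
            if (t % 5 == 0)
                eval_slow();      // every fifth cycle: 250 MHz / 5 = 50 MHz
        }
    }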

The hardware object itself is really only part of what is needed to create a useful virtual model. The object typically requires an integration layer to map its RTL-derived interface into the one required by the virtual platform. The level of abstraction represented by the virtual platform typically dictates what type of interface is required.

A virtual prototype composed of cycle-accurate components will likely need only a simple adapter to map the pins of the hardware object into the data types used by the platform. Virtual prototypes written at higher levels of abstraction will typically require a transactor to map system-level transactions into the multiple clock cycles of pin transitions necessary to model this behavior in hardware.
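
A transactor in this style might look like the sketch below, which expands a single system-level write call into the cycle-by-cycle pin protocol an RTL-derived object expects; the two-cycle valid/address/data protocol here is invented for illustration:

    // Hedged sketch of a write transactor. write() must be called from a
    // SystemC thread context, since it waits on clock edges.
    #include <systemc.h>

    SC_MODULE(WriteTransactor) {
        sc_in<bool>          clk;
        sc_out<bool>         valid;
        sc_out<sc_uint<32> > addr;
        sc_out<sc_uint<32> > data;

        // One abstract transaction becomes two cycles of pin activity.
        void write(sc_uint<32> a, sc_uint<32> d) {
            wait(clk.posedge_event());
            addr.write(a);
            data.write(d);
            valid.write(true);       // cycle 1: present address and data
            wait(clk.posedge_event());
            valid.write(false);      // cycle 2: deassert
        }

        SC_CTOR(WriteTransactor) {}
    };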

A model compiler can often take advantage of its knowledge of the logic contained in the design to automatically generate the wrapper code and transactors necessary to link the hardware object into the overall system and create a complete virtual model. This approach can be leveraged to target the same hardware object into multiple virtual prototypes at varying levels of abstraction depending upon the needs of the user.

How fast will it run?
Once the virtual platform is assembled, the ultimate question is how fast it will run. The answer depends upon the level of accuracy of the models that make up the platform.

A platform consisting entirely of abstract models will execute quickly, sometimes faster than the real system, and allow most high-level applications to be rapidly developed and debugged. Unfortunately, abstract models don't have sufficient hardware detail to allow lower-level software such as firmware and drivers to be fully developed and debugged.

On the other end of the spectrum is a virtual platform constructed entirely from the hardware implementation model. The entire hardware description is simulated in conjunction with the system software. This has the implementation-level detail necessary to develop all of the software, but it will execute far too slowly to be of use to all but the most patient software engineers.

An alternate approach combines the best aspects of both: a hybrid virtual prototype pairs the speed and versatility of the completely abstract model with the hardware accuracy of the implementation-level model.

In a hybrid virtual prototype, the processing elements are typically represented using an instruction-set simulator (ISS). An ISS will typically execute the same binary executable as the real processor but will do so in a completely virtual manner. The remaining components in the hybrid virtual prototype are modeled at the abstraction level that best meets the goals of the software developer.
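
At its core, an ISS is a fetch-decode-execute loop over the target's binary. The toy two-instruction sketch below shows only that shape; a real ISS models a full architecture, its exceptions, and its memory system:

    // Hedged sketch of an ISS core loop over an invented 2-instruction ISA.
    #include <cstdint>
    #include <vector>

    struct Cpu {
        uint32_t pc = 0;
        uint32_t reg[8] = {0};
    };

    enum : uint8_t { OP_ADDI = 0, OP_HALT = 1 };

    void run(Cpu& cpu, const std::vector<uint32_t>& mem) {
        for (;;) {
            uint32_t insn = mem[cpu.pc++];          // fetch
            uint8_t  op   = insn >> 24;             // decode
            uint8_t  rd   = (insn >> 16) & 0x7;
            uint8_t  imm  = insn & 0xff;
            switch (op) {                           // execute
            case OP_ADDI: cpu.reg[rd] += imm; break;
            case OP_HALT: return;
            }
        }
    }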

In Figure 2, the memory subsystem is modeled in an abstract manner that enables the overall system to execute as quickly as possible but at the expense of cycle-level accuracy. The remaining components in the system were generated automatically from the RTL description.


This combination maximizes the execution speed of the overall system but still delivers hardware accuracy for the components being addressed by the software.

A hybrid virtual prototype can be constructed and reconfigured to meet the needs of the various constituent players in the validation lifecycle. Early in the design cycle of a system, the architect will typically require cycle-accurate components to make design decisions and perform design tradeoffs.

At this stage, the virtual prototype may use a cycle-accurate ISS for the processor and an automatically generated model of the memory controller in order to ensure that the latency and throughput requirements for memory accesses are being met. As the design firms up and software engineers begin using the virtual prototype for software development, the cycle-accurate ISS and memory controller may be swapped out for less accurate but higher-speed implementations.

Other components in the system may be modeled at a high level of abstraction to deliver the fastest execution speeds or be represented using automatically generated models to deliver the greatest accuracy.

In many cases, abstract models and automatically generated models for the same component can be interchanged, enabling a virtual prototype to favor either speed or accuracy for each component depending upon the needs of the individual programmer.

A driver developer may require a hardware-accurate model for certain components and only require abstract representations for the others. A virtual platform with both hardware-accurate and abstract models for various components allows for the straightforward tradeoff of speed versus accuracy as the software development needs evolve.
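
One common way to make that tradeoff straightforward is to hide both models behind a shared interface and choose between them when the platform is assembled, along the lines of this hypothetical sketch (class names invented):

    // Hedged sketch: a fast abstract model and a hardware-accurate model
    // interchanged behind one interface, invisibly to the software.
    #include <cstdint>
    #include <memory>

    struct MemoryModel {
        virtual uint32_t read(uint32_t addr) = 0;
        virtual void     write(uint32_t addr, uint32_t data) = 0;
        virtual ~MemoryModel() = default;
    };

    struct FastMemory : MemoryModel {      // abstract: zero-delay array access
        uint32_t read(uint32_t a) override { return store[a & 0xffff]; }
        void write(uint32_t a, uint32_t d) override { store[a & 0xffff] = d; }
        uint32_t store[0x10000] = {0};
    };

    struct AccurateMemory : MemoryModel {  // would wrap the RTL-compiled object
        uint32_t read(uint32_t a) override { /* drive compiled model */ return 0; }
        void write(uint32_t a, uint32_t d) override { /* drive compiled model */ }
    };

    // The platform picks a model at construction time; software sees no change.
    std::unique_ptr<MemoryModel> make_memory(bool accurate) {
        if (accurate) return std::make_unique<AccurateMemory>();
        return std::make_unique<FastMemory>();
    }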

The actual platform execution speed in this hybrid approach typically varies depending upon the accuracy of the overall platform but can reach well into the MIPS range. More important than the execution speed, however, is the debug throughput that can be achieved. The versatility, visibility, and wider deployability of a virtual prototype enable software bugs to be found and fixed substantially faster than is possible with a physical prototype.

The value of virtual prototypes is not confined to early software development. There are multiple places where they can add value to the system design lifecycle. An accurate virtual platform running real system software is an invaluable tool for the system architects who were some of the methodology's earliest adopters.

The virtual environment enables architects to play out various scenarios and configurations to easily profile execution and identify problem areas. Armed with this data, they can confidently make important architectural tradeoffs in areas such as bus widths, memory latency, arbitration schemes, and a host of other configurable areas.
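
Concretely, these tradeoffs often reduce to a handful of platform parameters the architect sweeps across candidate values, along the lines of this hypothetical configuration sketch (field names invented):

    // Hedged sketch: platform parameters an architect might sweep in a
    // virtual prototype. Names and defaults are invented for illustration.
    struct PlatformConfig {
        unsigned bus_width_bits  = 64;    // candidates: 32, 64, 128
        unsigned mem_latency_cyc = 12;    // memory access latency in cycles
        bool     round_robin_arb = true;  // vs. fixed-priority arbitration
    };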

Power estimation
Power usage is increasingly at the forefront when design decisions are made in modern systems, and virtual prototypes are playing an important part here as well. Virtual models automatically generated from a hardware description can generate waveforms. These waveforms are then utilized by power-estimation tools to accurately profile the power consumption of system components while running software.

This flow is demonstrated in Figure 3.

A virtual prototype is executing system software. As this software interacts with the compiled components in the system, they generate waveforms that represent the true hardware behavior of the system. These waveforms can then be passed into power-estimation tools to generate a power profile of the system while executing software.
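
In SystemC-based platforms, the waveform capture itself can use the language's standard trace API, as in this minimal sketch (signal names are illustrative):

    // Hedged sketch: dumping signal activity as a VCD waveform that a
    // downstream power-estimation tool can consume.
    #include <systemc.h>

    int sc_main(int argc, char* argv[]) {
        sc_clock clk("clk", 10, SC_NS);
        sc_signal<sc_uint<32> > bus_data;

        sc_trace_file* tf = sc_create_vcd_trace_file("power_activity");
        sc_trace(tf, clk, "clk");
        sc_trace(tf, bus_data, "bus_data");

        sc_start(1, SC_US);             // run the platform; activity is recorded

        sc_close_vcd_trace_file(tf);    // VCD handed to the power-estimation tool
        return 0;
    }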

Power estimates generated while running software are more accurate since the system resource usage matches what will occur in real silicon. This same technique can be used on chips that have already been manufactured to profile power usage while running software. This enables software-driven power optimizations without the need to modify the hardware.

Many things, many people
The flexibility and adaptability of software-based virtual prototypes allow them to be many things to many people. The system architect can have a cycle-accurate platform to make early design decisions. The driver developer has a platform containing accuracy where it is needed for driver development but speed where it is needed for debuggability. Finally, the high-level software engineer has a functional platform for developing and debugging application software.

We've examined a number of different approaches to virtual prototyping in this article. A follow-up article, available online at Embedded.com, will present a real-world application of virtual-prototype creation and use for software development.

Bill Neifert is a cofounder of Carbon Design Systems. He has 13 years of experience in electronics engineering, with over 10 years in EDA. He has designed high-performance verification and system-integration processes for many companies and developed an architecture and coding style for high-performance RTL simulation in C/C++. Bill has a BS and MS in computer engineering from Boston University. He may be reached at bill@carbondesignsystems.com.
