Virtual prototyping gives embedded developers flexibility in a changing consumer environment


Today's typical advanced wireless-enabled embedded product — whether it is a mobile phone, a connected consumer device such as a gaming platform, or a Zigbee/RFID-enabled industrial controller — needs a wide variety of intellectual-property (IP) blocks to provide the functionality demanded by the market.

High-end embedded devices already contain many microprocessors (MPUs) and digital signal processors (DSPs) to provide advanced (2.75G and 3G) modem and application processing, as well as Wi-Fi, GPS, and Bluetooth functionality (see Figure 1, below). The IP blocks involved include processor cores, DSPs, complete subsystems, modems, and multimedia blocks, along with numerous interfaces ranging from USB to Bluetooth and Wi-Fi. On the software side, standard operating systems must be extended to support the new hardware capabilities, and new applications must be developed and validated earlier in the development cycle than ever before.

Figure 1

An ideal solution for such a project must permit the development team to perform the software tasks before the hardware or a physical prototype is available. The software development team needs a software-based prototype, called a virtual prototype, on which to perform the various software-development tasks without a real hardware-based prototype. This virtual prototype must be a fully functional software model of the complete target system, and it must be available many months before a hardware prototype. Ideally, the prototype should be available even before the architecture has been fully frozen, and it should be kept in sync as the design evolves over time.

Virtual prototype built on a software model
Building a virtual prototype entails building a software model of the target product and its corresponding high-level testbench. Typically, high-level functional blocks corresponding to the various IP blocks are used, along with models of their interconnect fabric. The interconnect structure is critical in the hardware because multiple data streams (for example, a video stream and a voice stream) need to be handled concurrently, possibly in conjunction with other communication activities. In the virtual prototype, the structure of the interconnect models is critical to simulation performance, which in turn drives the productivity of the software development team.

From a software perspective, real-time requirements need to be merged with application needs under the control of a real-time operating system (RTOS). The virtual prototype must model the detailed timing behavior of the interconnect fabric (typically on a cycle-by-cycle basis) to answer critical questions about the performance of the architecture. Very fast, functionally correct models may strip away too much of the timing behavior to provide such answers. On the other hand, accurate cycle-by-cycle modeling of the interconnect fabric may severely impact overall simulation performance and, with it, the software development team's productivity.

Fundamentally, designers need a solution that lets them choose the level of detail that will be used to model the interconnect fabric. The required details will depend on the answers that they hope to uncover by simulation. If there are questions about the interaction between the application processor and the modem subsystem, for example, that part of the chip needs to be modeled at a detailed cycle-accurate level. Yet other parts can remain at a functional level.
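The idea of dialing the level of detail per block can be pictured as a simple configuration table. The sketch below is illustrative only (the type and function names are invented for this example, not part of any tool's API): each block in the virtual prototype is tagged with the abstraction level at which it will be simulated, so only the blocks under study pay the speed cost of cycle accuracy.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical per-block abstraction setting; only the blocks under study
// pay the simulation-speed cost of cycle accuracy.
enum class Abstraction { Functional, CycleAccurate };

// Configuration table: block name -> modeling level (names are illustrative).
using PrototypeConfig = std::map<std::string, Abstraction>;

// Example: studying the interaction between the application processor and
// the modem subsystem, so only those two are modeled cycle accurately.
PrototypeConfig make_config_for_modem_study() {
    return {
        {"app_processor",   Abstraction::CycleAccurate},
        {"modem_subsystem", Abstraction::CycleAccurate},
        {"usb",             Abstraction::Functional},
        {"display",         Abstraction::Functional},
    };
}
```

A different measurement (say, display bandwidth) would use a different table against the same prototype, which is exactly the configurability the article calls for.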

Using a mixed-level modeling methodology
A mixed-level modeling methodology and supporting tools allow designers to build a virtual prototype of the target design. They can connect it to the stimulus/response environment (testbench), load a set of software executables (one for each microprocessor and DSP), and simulate at very high speeds. The tools should allow the virtual prototype to be configured easily for different types of measurements and development needs.

One of the keys to mixed-level modeling is that the same stimulus environment (testbench) and software can be used to simulate the design, regardless of the configuration of the virtual prototype and the abstraction levels that are used to model the target system.

Figure 2

In the solution shown in Figure 2, above, the individual blocks of the virtual prototype would typically be modeled at an abstract functional level in C or C++ and connected through SystemC (see Figure 3, below). The external interfaces are modeled at an abstract functional level to communicate with the testbench.
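To keep the example self-contained (and buildable without the SystemC library), the sketch below mimics this structure in plain C++: functional blocks are bound to a purely functional interconnect model that routes transactions by address. In a real virtual prototype, these roles would be played by SystemC modules and channels; all names here are illustrative.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <iterator>
#include <map>
#include <vector>

// A transaction as a functional block would see it: no bus cycles, just data.
struct Transaction {
    uint64_t addr;
    std::vector<uint8_t> data;
    bool write;
};

// A functional block is reduced to a callback that consumes a transaction.
using Block = std::function<void(Transaction&)>;

// Purely functional interconnect model: routes by base address and models
// no timing at all (the price paid for very high simulation speed).
class Interconnect {
    std::map<uint64_t, Block> targets;  // base address -> block
public:
    void bind(uint64_t base, Block b) { targets[base] = std::move(b); }
    void transport(Transaction& t) {
        auto it = targets.upper_bound(t.addr);   // first base above addr
        if (it != targets.begin())
            std::prev(it)->second(t);            // deliver to owning block
    }
};
```

Because the blocks interact only through `transport()`, any one of them can later be swapped for a cycle-accurate model without touching the testbench.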

Figure 3

Designers aren’t interested in the details of individual IP blocks at the system level, which is why this modeling style works for most designs. These blocks are typically reused from previous projects or acquired from an IP provider. Instead, designers are interested in creating functionally accurate data streams in the testbench. They want to explore how these data streams are handled by the virtual prototype while looking at what impact they have on a common bus or other resources. For example, the details of the USB block aren’t critical. But getting pictures downloaded through the USB interface and measuring performance is important.

Modeling the external interfaces such as USB at a high level of abstraction ensures the speed required by software developers and allows them to incorporate real-world data streams. Designers can actually make use of the host workstation hardware, such as the USB interface, and connect it to the USB port in the virtual prototype. Real-world data sources and sinks, such as displays, can then be connected to the virtual prototype. All the data sources and data streams can be used during software development or architectural analysis.

Functional model is at the heart of mixed-level approach
At the heart of this mixed-level modeling technology is the connection of functional models to cycle-accurate models of the interconnect fabric. For instance, Synopsys and Virtio have developed a proprietary wrapping technology that allows a high-level function call to be annotated with cycle-count information. That function call can then be mapped to a series of calls to the SystemC application programming interface (API) of the cycle-accurate interconnect model. Simulations using IP blocks modeled in this manner are cycle approximate rather than cycle accurate. Yet, when connected to a cycle-accurate bus, they yield sufficiently specific information to support architectural decisions.
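The wrapping principle can be shown with a minimal sketch. This is not the proprietary Synopsys/Virtio technology; it only illustrates the idea of annotating a functional call with an estimated cycle cost, which a cycle-accurate interconnect model could then consume. All names are invented for this example.

```cpp
#include <cassert>
#include <cstdint>

// Running cycle count for one initiator; a cycle-accurate interconnect
// model could consume this to schedule bus traffic.
struct CycleBudget {
    uint64_t cycles = 0;
};

// Wrap a high-level functional call with an estimated cycle cost. The
// behavior executes atomically; only the cost is accounted for, which is
// what makes the result cycle approximate rather than cycle accurate.
template <typename Fn>
void annotated_call(CycleBudget& budget, uint64_t est_cycles, Fn&& fn) {
    fn();
    budget.cycles += est_cycles;
}
```

The functional model stays fast because it never simulates individual bus cycles; the annotation alone carries the timing estimate to the interconnect.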

Another key requirement for software development is the ability to interface and work with common software debuggers, which is addressed by such solutions. Because software developers have different debugger preferences, a mixed-level verification environment has to provide interfaces to multiple software debuggers. In addition, these interfaces have to operate together with the debuggers in the hardware and verification domains.

For software developers, a virtual prototype is especially advantageous for multi-core debugging: JTAG scan chains can stop only one processor at a time, but with a virtual prototype all cores can be stopped when either a software or hardware breakpoint is reached. This form of debugging allows unsurpassed visibility into the complete system and greatly increases the software developer’s productivity.
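The stop-all-cores property is easy to picture in a few lines. The simulator class below is a toy stand-in (not a real tool API) showing the behavior a virtual prototype can offer and a JTAG probe cannot: one breakpoint freezes every core in the same simulated instant.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Core {
    bool halted = false;
};

// Toy simulator: when any core reaches a breakpoint, every core halts in
// the same simulated instant, preserving a consistent system state for
// inspection.
class Simulator {
    std::vector<Core> cores;
public:
    explicit Simulator(std::size_t n) : cores(n) {}
    void breakpoint_hit(std::size_t /*core_id*/) {
        for (auto& c : cores) c.halted = true;
    }
    bool all_halted() const {
        for (const auto& c : cores)
            if (!c.halted) return false;
        return true;
    }
};
```

Because nothing advances between the breakpoint and the halt of the other cores, the debugger sees a coherent snapshot of memory and all register files at once.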

Once a virtual prototype has been defined and handed off to a software-development team, it is of key importance that the actual hardware being developed remains consistent with this virtual prototype. Clearly, a lot of software work could be wasted if the hardware diverges from the prototype.

Together, the software and the testbench form a superior verification environment to test user-definable configurations of the target system’s mixed-level model. This model then gets refined and gradually replaced by RTL representations of the various blocks. At this point, the same verification environment can, and should, be used to ensure consistency between the high-level model and the RTL model of the target system.

For performance reasons, it’s likely that not all software can be run on the RTL model. However, the verification team can carefully craft a set of tests covering all of the functionality that should run and pass on the following: a high-level functional model, a more detailed architectural model, and the design’s RTL model. Such a set of tests constitutes a regression suite. This suite will spot any divergence between the RTL model and the virtual prototype used by the software developers to validate and develop their applications.
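One way to picture such a regression suite is to abstract every representation of the design behind the same stimulus/response interface, so the identical tests run against each one. The sketch below is illustrative only: a real suite would drive the virtual prototype and the RTL simulator rather than a plain function, and all names here are invented.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// A "model" abstracted as stimulus -> response; in practice this would wrap
// the functional model, the architectural model, or the RTL simulation.
using Model = std::function<int(int)>;

struct TestCase {
    int stimulus;
    int expected;
};

// Run the same suite against any representation of the design; a divergence
// between models shows up as a failing test.
bool run_regression(const Model& model, const std::vector<TestCase>& suite) {
    for (const auto& t : suite)
        if (model(t.stimulus) != t.expected) return false;
    return true;
}
```

Running the one suite at every refinement step is what catches the hardware drifting away from the virtual prototype the software team is coding against.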

This approach allows software development and hardware development to truly happen concurrently. The process ensures that by the time a hardware prototype can be created, the software is available to be downloaded and tested for integration. All of the critical components have been analyzed up front and most of the software has been developed. In addition, the consistency between the hardware and software has been monitored throughout the process. As a result, the final integration phase becomes merely a validation step with a very high probability of first-time success.

Rindert Schutten is Director of Marketing, System-Level Solutions, and Tom Anderson is Director of Technical Marketing, at Synopsys Inc. and Filip Thoen is Chief Technical Officer at Virtio Corp.
