Adopting virtual prototyping

Virtual prototyping platforms provide fast, fully functional software models that enable software engineers to develop production-quality code long before hardware arrives in the lab. In theory, such platforms should facilitate efficient communication between the hardware and software teams, resulting in faster time-to-market and higher-quality products. In practice, however, teams often fail to realize the expected gains in communication and quality.

In most cases, teams adopt virtual prototyping in an ad-hoc manner, achieving sub-optimal productivity gains that often lead to abandonment of the approach. Ad-hoc approaches can cause models to quickly fall out of sync with their RTL implementations, leading to significant effort lost in debugging. In this paper, we present a case study involving the development and debugging of firmware and application code on a virtual system prototype for a USB controller, and we share recommendations for a development and testing flow that proved effective in this case.

One of the challenges of effective virtual prototyping is ensuring that developers create models appropriate for their intended applications. For example, a loosely-timed model is suitable for software validation, while an approximately-timed model is more suitable for architectural validation. Model developers should understand the device architecture and functionality in detail so that the device behavior can be correctly abstracted. When loosely-timed models are used as reference models for verification, their limited timing accuracy means the model-predicted behavior often differs from the corresponding RTL implementation; this requires planning how to synchronize the model with the implementation without compromising prediction quality. The model developer must also take care to avoid memory leaks and unnecessary context switching.

Another challenge is planning for effective debugging and testing of the models. One must test the model both as a standalone object and as a component of a virtual SoC. Parameterized debug messages should be available, and one should be able to enable or disable them at any stage of model execution. Teams should perform both code and functional coverage analysis of the models to gain greater confidence in their robustness. There is also a recommended order of priority for testing features. For example, for USB device models, one should start with register access, basic data transfers, OS device detection, and drive formatting. Only when these simpler tests succeed should one attempt complex application-level tests such as speed tests and firmware upgrade behavior.

In summary, the case study presented here captures our experience and general recommendations for developing models and adopting a virtual-platform based development flow. Using this approach, we can significantly improve the quality and efficiency of the hardware/software integration phase of product development.

I. Introduction

Virtual Prototyping (also known as ESL, Electronic System Level design) is becoming increasingly popular for three key reasons: 1) Increasing system complexity involving both hardware and software, 2) Shorter schedules for hardware and software co-verification, to which the traditional cycle-accurate system approach does not scale, and 3) Increasing risk in terms of development costs and time-to-market.

Virtual Prototyping helps the system designer explore potential system architectures early in the design cycle. More significantly, it helps the software team start developing applications and drivers in parallel with hardware design and verification. In the traditional approach, software is tested only after the hardware is available in the lab. The ESL approach, on the other hand, enables much earlier testing of production software against models of sufficient accuracy. This helps software engineers catch bugs early, and hardware designers can furthermore use the ESL models as an executable specification, increasing overall team productivity.

II.  SystemC+TLM2.0 Modeling

Models are developed using SystemC+TLM2.0 [1], which provides several key benefits:

a) SystemC is an IEEE standard (1666-2011) and is open source, allowing a vendor- and platform-independent approach.

b) SystemC models time similarly to hardware description languages such as VHDL and SystemVerilog, and can model complex timing interactions within the system.

c) The TLM-based approach offers significantly higher execution speed than other event-based simulation approaches.

d) The SystemC/TLM-based ESL approach is implemented in C/C++, which facilitates software integration and interoperability among multiple vendors (a minimal model sketch follows this list).
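To make these points concrete, the following is a minimal sketch of a loosely-timed TLM-2.0 target, assuming the standard Accellera SystemC/TLM-2.0 libraries. The module and its register layout are illustrative placeholders, not the USB model described in this paper.

```cpp
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>
#include <cstdint>
#include <cstring>

// Illustrative loosely-timed TLM-2.0 target: a small register file
// accessed through the blocking transport interface.
struct SimpleDevice : sc_core::sc_module
{
    tlm_utils::simple_target_socket<SimpleDevice> socket;
    uint32_t regs[4] = {0, 0, 0, 0};  // hypothetical CTRL, STATUS, DATA, IRQ

    SC_CTOR(SimpleDevice) : socket("socket")
    {
        socket.register_b_transport(this, &SimpleDevice::b_transport);
    }

    // Loosely-timed style: the entire transaction completes in one call,
    // with timing folded into the 'delay' annotation.
    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay)
    {
        const uint64_t idx = trans.get_address() >> 2;  // word-aligned registers
        if (idx >= 4 || trans.get_data_length() != 4) {
            trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
            return;
        }
        if (trans.is_read())
            std::memcpy(trans.get_data_ptr(), &regs[idx], 4);
        else
            std::memcpy(&regs[idx], trans.get_data_ptr(), 4);

        delay += sc_core::sc_time(10, sc_core::SC_NS);  // nominal access time
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};
```

Because the initiator receives the full transaction result from a single b_transport call, such a model runs orders of magnitude faster than a cycle-by-cycle simulation while remaining functionally faithful.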

For ESL applications, SystemC+TLM2.0 models fall into two main categories: (1) loosely-timed and (2) approximately-timed. The correct modeling category is selected based on the model's use case, as shown in Table I.

Table I. Model coding styles

Use Case                           Coding Style
---------------------------------  -------------------------------------
Software application development   Loosely-timed
Software performance analysis      Loosely-timed
Hardware architecture analyses     Loosely-timed or Approximately-timed
Hardware performance verification  Approximately-timed or cycle-accurate
Hardware functional verification   Loosely-timed or Approximately-timed

In this paper, we share our experiences and recommendations based on a loosely-timed USB2.0 device model development and testing flow applied to SoC application driver development. We also discuss the challenges faced, with examples and recommendations at each step of model development and testing.

III.  Model Development & Testing Flow

The model development & testing flow is a three-step process (Figure 1):

A.  “What & How to model”: Specification understanding and feature extraction

This is the first and most important step in model development. Since the developed model is abstract, we initially assumed that the application would require only the higher protocol layers, not a physical layer implementation, and we therefore implemented the model without any PHY features. Though it seems rather obvious in hindsight, when we ran the application, it failed because some physical layer registers and basic PHY functionality were missing from the model. To address this issue, we developed a separate spreadsheet recording the implementation status of each register in the model. This spreadsheet was reviewed by the application engineers, and based on their suggestions we implemented the physical layer with limited logic and register functionality. After this experience, we added a step to our model development flow to avoid such erroneous implementations: we recommend a detailed documentation step that clearly captures the model requirements and a top-level development architecture.

B.  Model development

After gaining clarity on the specifications and features, we began model development based on the defined architecture, using recommended design techniques. Later in this paper, we describe some of these design techniques in detail.

C.  Model testing

Model testing included: 1) standalone model testing, and 2) virtual platform testing. As in traditional verification, code coverage and functional coverage help build confidence in the model. For quick debugging, we adopted various smart debugging techniques, including configurable message verbosity levels and effective use of debugger breakpoints.
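As an example of configurable message verbosity, the sketch below uses the standard SystemC report handler defined by IEEE 1666-2011; the message type and message texts are illustrative, not taken from our model.

```cpp
#include <systemc>

int sc_main(int, char*[])
{
    // Select how much debug output the model emits; in practice this could
    // be driven from a command-line option or a platform configuration file.
    sc_core::sc_report_handler::set_verbosity_level(sc_core::SC_MEDIUM);

    // Messages whose verbosity exceeds the current level are filtered out,
    // so instrumentation can stay in the model at no cost when disabled.
    SC_REPORT_INFO_VERB("usb_model", "register access trace", sc_core::SC_HIGH);   // suppressed
    SC_REPORT_INFO_VERB("usb_model", "endpoint 1 configured", sc_core::SC_MEDIUM); // printed

    return 0;
}
```

Because the verbosity level can be changed at any point during simulation, debug tracing can be enabled only around the failing phase of a test rather than for the whole run.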

Figure 1. Model Development and Testing Flow

IV.  Specification understanding and feature extraction

Along with a deep understanding of the specifications, the developers must explicitly list all the features that need to be modeled. Initially, we started modeling without a separate feature extraction step. The lack of this step led to missing feature implementations and general unpredictability in estimating the effort required for modeling and testing. To address this, we created a separate spreadsheet to capture the extracted features and track the development and testing effort. For example, for the USB2.0 device model, the features to be implemented include support for the transfer types (Bulk/Interrupt/ISOC/Control), FIFO depth (maximum packet size), number of endpoints, device test mode, etc. Along with the features, the required register details must be captured in this document.

Once the features are identified, the developers must explicitly mark the features that need to be modeled. For example, if the end user does not want to use ISOC transfer support (ISOC endpoint), then the ISOC features need to be marked as “Not required.”

As this is abstract modeling, physical layer (power, speed) related features should be excluded unless necessary. If, however, an application requires implementation of PHY-related bits (e.g., device detection status), then the corresponding register bits should be marked as “Required.” In other words, even though physical layer features are generally not required for modeling, in some cases the corresponding PHY status-related behavior must still be modeled for specific applications to work.

Finally, the developer needs to create a functional requirements document containing details of the required and not-required model features and the model architecture. At the end of this step, all stakeholders (client, model developer, model test engineer) should have complete clarity about the model design, which helps avoid confusion and queries during the model development and testing cycle.

V.  Model development

Model development can be divided into three major tasks:

A.  Register development

Register implementation consumes a significant amount of time in model development; for a USB-like model, it accounts for almost 50% of the development time. Each register field requires separate handling based on its attribute, which can lead to mistakes. Instead of creating a separate register database for each model, we recommend creating a register database/library class that includes a register class along with functions to read, write, and reset registers according to their attributes (RW, RO, RW1C, etc.). Smarter techniques for register development, such as .CSV and Perl script based register model generation, save development time.
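As an illustration of such a register library class, the minimal sketch below centralizes attribute-specific read/write/reset behavior so that each model need not re-implement it. The class and attribute names are hypothetical, not the actual library used in this work.

```cpp
#include <cstdint>
#include <string>
#include <utility>

// Hypothetical register attributes: plain read/write, read-only,
// and write-1-to-clear, as named in the text above.
enum class RegAttr { RW, RO, RW1C };

class Register {
public:
    Register(std::string name, RegAttr attr, uint32_t reset_val)
        : name_(std::move(name)), attr_(attr),
          reset_val_(reset_val), value_(reset_val) {}

    uint32_t read() const { return value_; }

    // Bus write: behavior is selected by the field attribute, so the
    // device model itself never special-cases individual registers.
    void write(uint32_t data)
    {
        switch (attr_) {
        case RegAttr::RW:   value_ = data;    break;  // plain read/write
        case RegAttr::RO:   /* write ignored */ break; // read-only
        case RegAttr::RW1C: value_ &= ~data;  break;  // write-1-to-clear
        }
    }

    void reset() { value_ = reset_val_; }

private:
    std::string name_;
    RegAttr     attr_;
    uint32_t    reset_val_;
    uint32_t    value_;
};
```

A generator script can then emit one such Register declaration per row of the .CSV register description, keeping the spreadsheet and the model permanently in sync.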
