Shrinking silicon geometries enable larger SoC-type designs in terms of raw gate size, and many of today's applications take advantage of this trend. An important point that is often missed is the accompanying growth in verification complexity.
Indeed, the verification task for a design that is twice as big is actually more than doubled. The verification team has to deal with a bigger state space, and the application, which is what the verification environment attempts to mimic, gets much “bigger”.
Simply building faster tools like simulators will not solve this problem. Rather, it requires capabilities and associated methodologies that make it easier to set up complex verification environments - environments that in the end ensure that the application on the chip works as expected.
Each of the three key aspects of SystemVerilog has a significant role. The synthesizable design constructs that have been added to SystemVerilog make it possible for designers to code at a higher level of abstraction, often mapping more accurately to the function they are designing and the way they think about it.
The new assertions capability allows users to very concisely describe a behavior that needs to be checked. But it is the verification aspect that provides the biggest bang for the buck, as evidenced by its rapid adoption.
The verification component of SystemVerilog brings high-level programming capability to design and verification teams. In the past, many teams used a C/C++ testbench, native or interfaced to the simulator. SystemVerilog brings structure to this process by providing a standard object-oriented language with which to do the same. Tools can now be developed to support a more standard, structured process in a way that is not intimidating to the engineers who previously coded in C/C++.
The relatively simple notion of constrained randomization allows engineers to develop sophisticated test scenarios with very few lines of code. It is also a natural progression for the object-oriented model to spur standard class libraries and related methodologies, both of which enable engineers to create modular, reusable verification environments in which components communicate with each other via standard transaction-level modeling interfaces.
It also enables intra- and inter-company reuse through a common methodology and classes for virtual sequences and block-to-system reuse. This reuse can be extended to off-the-shelf verification IP components that can be used to verify specific functionality such as bus protocols like USB.
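The constrained-random style described above takes only a few lines of SystemVerilog. The sketch below is illustrative only: the class, field names, and address-window constraint are invented for this example and are not drawn from any particular methodology library.

```systemverilog
// Hypothetical bus transaction used to illustrate constrained randomization.
class bus_txn;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  rand enum {READ, WRITE} kind;

  // Constrain the scenario: word-aligned addresses within a device window
  // (the window bounds here are arbitrary assumptions).
  constraint legal_c {
    addr inside {[32'h1000_0000 : 32'h1000_FFFF]};
    addr[1:0] == 2'b00;
  }
endclass

module demo;
  initial begin
    bus_txn t = new();
    repeat (10) begin
      if (!t.randomize()) $display("randomization failed");
      $display("txn: kind=%s addr=%h data=%h", t.kind.name(), t.addr, t.data);
    end
  end
endmodule
```

Each call to `randomize()` produces a new legal scenario, which is why a handful of constraints can replace pages of directed stimulus code.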
The first thing that comes to mind when engineers examine or debug the dynamic behavior of their designs is the waveform. Some debug tools have taken behavior analysis to a significantly advanced level by allowing engineers to examine dynamic activity within the context of the source code itself and to trace a specific behavior back in time with the push of a button.
This analysis relies on the well-understood notion of easily recording (dumping) value-change data from simulation. The data is usually recorded in a highly optimized, dedicated database, such as the Fast Signal Database (FSDB).
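Recording value-change data into FSDB is typically enabled with SpringSoft's dump tasks. This is a minimal sketch; it assumes the simulator has been linked with the FSDB dumper libraries, and the module and file names are placeholders.

```systemverilog
// Minimal value-change recording into an FSDB database.
// $fsdbDumpfile/$fsdbDumpvars are SpringSoft (Verdi) system tasks and are
// only available when the simulator is linked with the FSDB dumper.
module tb_top;
  reg clk = 0;
  always #5 clk = ~clk;          // some activity to record

  initial begin
    $fsdbDumpfile("sim.fsdb");   // destination database (name is arbitrary)
    $fsdbDumpvars(0, tb_top);    // depth 0 = entire hierarchy below tb_top
    #100 $finish;
  end
endmodule
```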
Once the simulation data has been recorded, tools accessing this database can provide specialized views and engines that automate and make more efficient the process of evaluating and debugging design behaviors. When debug tools also have access to the design source code, they can put two and two together to automatically trace to the root cause of problem behaviors. This state of the art in design debug and analysis is well accepted today and continually evolving.
Unfortunately, this process is not applicable to testbenches. To start, there is really no concept of waveforms or value-changes in programmatic testbench code.
Instead, SystemVerilog testbenches have classes that can be created at any point in time, with functions that are called to perform a particular task (such as driving a random transaction into the design). Most of these functions execute in zero time. Hence the notion of value-changes and the representation of these changes using traditional waveforms do not apply, at least not directly.
The SystemVerilog verification component is, for all intents and purposes, a software language. Designers and verification engineers alike rely on debug tools to understand how the design and verification environment is set up. Traditional hardware description languages (HDLs) are highly structured, and as such can be easily represented hierarchically in schematics or state diagrams.
Not only are these views contextually appropriate for the task at hand, but they present information in a way that makes it possible for engineers to comprehend it more easily.
By contrast, software programs like SystemVerilog and C++ have classes that are created, instantiated, and extended everywhere. For engineers, especially those who come from the hardware domain, it is no easy feat to make sense of it all. Thus, the burden now falls to debug tools, which are tasked with inferring data and creating static views that are both useful and intuitive.
The obvious next question is: how are these verification challenges being addressed today? Studies show that SystemVerilog is becoming a widely adopted element of verification (testbench) methodologies. Today, there are two primary strategies employed to help engineers comprehend, analyze, and debug SystemVerilog testbench environments.
One approach uses the built-in support in the language for logging information. The two constructs employed are $display and printf, whether used directly or through a pre-packaged class library such as OVM or VMM.
Both allow engineers to log information to text files. The whole idea is to record some history into these log files, which can be analyzed after simulation to get a sense of what the testbench was doing through time. Remember, the design data can be recorded into the debug database for visualization and analysis in a debug tool.
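A bare-bones version of this text-file logging might look like the following sketch; the file name, severity labels, and message format are arbitrary choices, not part of any standard.

```systemverilog
// Minimal text logging of testbench activity to a log file.
module log_demo;
  integer logfd;
  initial begin
    logfd = $fopen("tb.log", "w");
    // Timestamped messages give a rough history that must later be
    // correlated manually with the recorded design data.
    $fdisplay(logfd, "[%0t] INFO  starting stimulus", $time);
    #100;
    $fdisplay(logfd, "[%0t] ERROR unexpected response", $time);
    $fclose(logfd);
  end
endmodule
```

The manual correlation step is exactly the pain point: the timestamps in `tb.log` and the waveform's time axis live in two disconnected tools.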
However, for the testbench data, engineers must revert to the low-level text file logs, and then manually (and painfully) correlate them to what the design is doing on the time axis. The result is a disparate flow that relies on low-tech, text-based recording of testbench activity, as illustrated in Figure 1 below.
|Figure 1: To help engineers comprehend, analyze, and debug SystemVerilog testbench environments, built-in support for logging information is useful. However, the result can become a disparate flow as shown.|
Another strategy often employed by engineers is to use the simulator's interactive capability in a GDB debugger-like fashion. As with GDB debuggers for C/C++, engineers can set breakpoints as well as inspect variable values and stack traces at a particular time.
However, there are several drawbacks with this approach. First, the engineer has to know when and where to set breakpoints, so that the simulator stops at the simulation time and/or condition targeted for further probing. Often, this involves guesswork and requires several iterations to get to the exact point.
Moreover, to get to the breakpoint, the simulator run could take hours, or even worse, days, because it has to simulate the whole environment up to that point. It is often not practical to consume valuable engineering resources waiting for the simulator to reach the desired breakpoint.
Borrowing from software
Now, let's discuss how strategies employed in the software domain can help engineers meet head-on the challenges of testbench verification and debug. It is clear that simply extending the traditional hardware debug techniques to testbench debug is not sufficient or even feasible.
Gaining insight into what is going on in the testbench during simulation requires a new approach that builds upon the logging and interactive concepts previously discussed. The key is to make the logging process much more sophisticated and automated so that most of the debug and analysis of testbench activity can be done at that level.
The goal is to use an advanced logging mechanism to pinpoint the location of a problem. If the problem is identified to be on the testbench side and more details are needed, engineers would then go into a tightly integrated interactive mode.
Logging has been widely used in systems and software. For example, operating systems log information all the time for later analysis and debug if needed. Similarly, most software systems log information. So it is no surprise that logging is a key pillar of SystemVerilog testbench debug and analysis.
The dominant SystemVerilog methodologies in use today provide some basic libraries that enable users to log information from their testbench. However, the problem has been in visualizing the information, whether instrumented using raw SVTB syntax like $display and printf or specialized base classes. All logging done through these mechanisms typically ends up in text files.
|Figure 2: Shown is a flow based on logging user-instrumented information into an FSDB database accessed by SpringSoft's Verdi Automated Debug and Siloti Visibility Automation systems.|
In order to make debug of the design and testbench together a practical, efficient process, the logging mechanism must be flexible in terms of usage, and the resulting output automatically captured in the same debug database as the design results (such as the de facto standard FSDB format). This is fundamental to enabling advanced visualization, debug, and analysis functionality. The proposed flow and usage are shown in Figure 2 above.
The task to log information – for example, $fsdbLog – needs to be highly flexible, allowing engineers to insert it anywhere in their code, including existing base class libraries that are intended for logging.
The logging task must not only capture messages, but also severities, variable states, etc. as properties or attributes of the message. In addition, the call stack must be automatically captured to leverage in further debug automation.
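As a rough sketch of what such instrumentation could look like: $fsdbLog is the task named above, but its actual argument list is not documented in this article, so the severity-plus-message form below is purely an assumption for illustration, as are the task and variable names.

```systemverilog
// Hypothetical instrumentation wrapper around the logging task.
// The $fsdbLog argument list shown here (severity label, formatted
// message) is an assumed form, not a documented signature; per the
// text, the call stack would be captured automatically at each call.
task automatic report_bad_response(input bit [31:0] addr);
  $fsdbLog("ERROR", $sformatf("unexpected response at address=%0h", addr));
endtask
```

Because the message carries a severity label and an attribute-like `address` value, a downstream viewer can filter or colorize on exactly those properties.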
The upside of this approach is that since all the data goes into the same debug database as the one used for HDL recording, visualization support can be added to the debug system to analyze this logged information alongside other data, such as HDL value-changes and assertion states. The net result is a unified system that enables engineers to observe what is going on in the entire environment.
|Figure 3: Logged data can be visualized in waveforms as well as in a spreadsheet view.|
As shown in Figure 3 above, the data is visualized in standard waveforms as well as via specialized applications such as a time-synchronized table view which, like a spreadsheet, can be filtered, configured, etc. Special-purpose features can be added to these views to help engineers easily identify messages of interest among the logged data.
For example, advanced filtering and highlighting can be used to filter or colorize specific messages based on some condition (e.g., highlight in red any messages that have “ERROR” as their label and “address=5”). Logged message viewing applications could also enable engineers to quickly search and locate messages that match user-specified search criteria.
Logging to debug
The automatic capture of the call stack during logging provides unique opportunities for further automating debug. For example, a logged message can be synchronized with the source code using drag-and-drop from the waveform to the source code, which could then jump to where the message originated.
In addition to the obvious comprehension advantages of this capability, it can also be used to quickly set breakpoints at the right place to drive interactive simulation from the debugger. Despite the drawbacks discussed earlier, interactive simulation is often the only mechanism available for delving into the details of testbench code.
While logging can provide a coarse, high-level view of testbench activity, interactive simulation of testbenches can provide the GDB-like data that is required to understand their behavior, such as the values of variables at a specified point in time and detailed thread information.
|Figure 4: The use of a unified and full-featured debug system to drive interactive testbench simulation allows for more user-friendly setup, visualization, and analysis of results.|
Most simulators, when invoked in interactive mode, typically have access to all this information, albeit in a more primitive manner. By bridging the ability to log messages with a unified design and testbench debug system, engineers can effectively use logging at the outset to determine the testbench code (location and time) that needs to be analyzed in more detail.
With the flow shown in Figure 4 above, a logged message can be dragged and dropped into the source code view so engineers can set a breakpoint, and then invoke interactive simulation in the background with the source-code view of the debugger serving as the master cockpit.
In this way, engineers can drive the simulator to a specific time or breakpoint, so that values, call stacks, and thread information can be inspected (automatically or user-driven). This mode of operation is very similar to the GDB use model deployed by C/C++ programmers.
There are several compelling advantages to using the debugger to drive the simulator and display its results. Engineers can use the same environment to debug and analyze the behavior of the design as well as the testbench message logs.
Additionally, debug environments provide a more user-friendly and familiar environment in which to drive, view, and analyze the testbench itself. For example, as shown in Figure 4, having variable watch and stack views alongside the source code can greatly enhance the user experience when debugging testbench code.
As discussed, by leveraging the testbench capabilities of SystemVerilog, engineers can create more sophisticated scenarios to test designs, while at the same time increasing coverage. But, on the flip side, the task of understanding the structure and function of such complex testbenches can be daunting.
Debuggers have always excelled at providing a platform for comprehending HDL source code. Commonly used features, such as design browsing with an instance-based hierarchical representation and tracing of loads and drivers, are built upon a knowledge database that is automatically extracted from the source code. While some of this same functionality can be extended to testbench code, the more exciting opportunity lies in building on this knowledge-driven foundation to take testbench comprehension even further.
Again, many of the ideas proposed here take advantage of practices that have already proven to be successful in the software domain. For example, we've discussed the drag-and-drop of messages captured during simulation to the source code and the automatic identification of the code from where the message originated.
These help to close the loop between the source code and simulation. Design code is typically built hierarchically, with lower-level modules instantiated at higher levels and some modules instantiated multiple times. Conceptually, this can be represented in a tree-like fashion from the top-level module all the way down to the lower-level modules.
Testbench code, however, like C++ and other object-oriented languages, is primarily made up of declarations of classes, functions, and variables. During testbench debug and analysis, engineers want a quick way to navigate to a class, function, variable, or the newer SystemVerilog constraint and coverage code.
|Figure 5: An instance-based hierarchy representation and UML-like class inheritance and relationship view are critical to SystemVerilog testbench code comprehension.|
Debug and analysis tools have to be able to import this type of code and display a meaningful representation that takes into account the declaration-centric nature of testbench code (Figure 5 above). This hierarchical representation must also be linked to the actual source code so that when a class, function, or other entity is selected, the corresponding source code is also displayed.
Given the object-oriented nature of SVTB code, engineers can easily reuse existing code and create reusable code themselves. Classes are often derived from existing base or parent classes. This inheritance allows them to retain all the capabilities of the parent while at the same time allowing for variables or functions to be replaced with new ones, or entirely new ones to be added.
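That inheritance pattern is straightforward to show in SVTB; the classes below are hypothetical, invented only to make the retain/replace/add distinction concrete.

```systemverilog
// Illustrative only: a derived transaction keeps the parent's fields and
// constraints, replaces one function, and adds a new field.
class base_txn;
  rand bit [31:0] addr;
  virtual function void print();
    $display("base_txn: addr=%h", addr);
  endfunction
endclass

class burst_txn extends base_txn;
  rand bit [3:0] burst_len;        // entirely new capability added
  virtual function void print();   // parent behavior replaced (overridden)
    $display("burst_txn: addr=%h len=%0d", addr, burst_len);
  endfunction
endclass
```

A debug tool drawing a UML-like view would show `burst_txn` pointing at `base_txn`, with the overridden `print` and the added `burst_len` distinguished from the inherited members.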
While declaration-based views can be enhanced to show some class hierarchy, most classes have complex relationships with other classes, particularly as engineers understandably take advantage of SVTB object-orientedness (reusability) in its purest sense.
To represent this 'organic' nature of classes, the concept of a UML-like class inheritance and relationship view, as shown in Figure 5, is needed.
Chips are getting bigger, with over 100 million gates, and approaching one billion transistors. This creates an astronomical challenge for the engineers trying to comprehend the complex structure and behavior of these designs and the surrounding verification environment used to verify them.
Not surprisingly, testbench creation is becoming a vital part of the hardware verification flow and as complex as the chip designs themselves. As a result, engineers are turning to the SystemVerilog language to address the advanced requirements of designing and verifying designs of this scale.
The SVTB component provides a higher-level, software-like environment targeted specifically at verification, enabling engineers to increase testbench coverage within the same language and infrastructure. And, while its object-oriented nature provides powerful capabilities, SVTB debug requires software-like tools in order to comprehend the complex class inheritance relationships that users will ultimately develop to take full advantage of the language.
This convergence of larger, more complex designs and SystemVerilog-driven verification methodologies not only requires more EDA tool performance and capacity to scale with design size, but also advanced levels of automation to deal with the abstract and dynamic nature of testbench verification and debug.
Fortunately, the sophistication of existing HDL debug and analysis platforms provides the bridge for integrating new innovations that address the unique requirements of comprehending complex testbench behaviors.
Paramount in this scenario is the notion of message logging for testbench activity during simulation, coupled with flexible mechanisms for recording into specialized databases.
This process is fundamental to enabling advanced visualization and analysis techniques, on-demand calculation of design values, and seamless transition to interactive simulation for more detailed GDB-like analysis of testbench code.
Bindesh Patel is Technical Marketing Manager and Amanda Hsiao is Technical Manager at SpringSoft USA.