Dealing with SoC hardware/software design complexity with scalable verification methods

The problem with today's existing methodologies is that verification is subservient to design. Overturning that principle requires a paradigm shift, especially in the design of complex electronic systems.

Why?

Functional errors are the main cause of design respins. Functional verification, and the process of finding these errors, creates the biggest bottleneck in the design flow. In general, verification constitutes at least 50 percent of all design activity. Yet verification technology is falling behind design and fabrication capability, widening the verification gap and limiting designer productivity and design possibilities.

Figure 1. The verification gap limits design potential and possibilities.

To close this gap, verification must become an integral part of the overall design methodology. The whole design and verification flow must be structured based on what is good for both design engineers and verification engineers. This can impact design partitioning, block sizing and design rules that are taken for granted today.

Another challenge to successful system verification relates to testbenches. Note that an increase in design size exponentially increases verification complexity.

Simulation capability tracks with design size, but the complexity of the testbenches does not, partly due to the dramatic effect that size has on the observability and controllability of the design. This increases the number of required tests, which may take a long time to implement, and makes it more difficult to know why things go wrong.

Current methods
Over the years, design and verification methodologies have been developed to ease significant design bottlenecks. Most focus on handling complexity and on dividing the design process into manageable segments. To achieve this, most companies have adopted some variant of the classic V diagram (Figure 2, below). At the base of the V, an engineer or a small group of engineers implements the smaller blocks.

Figure 2. The classic V diagram represents a divide-and-conquer approach to design and verification.

On the right side, the verification strategy begins with the verification of individual blocks. After identifying specific problems, the engineers will integrate their assigned blocks with blocks from other designers to form larger subsystems. At this point, verification is performed and the process continues until all blocks have been integrated to form a complete system.

For many companies, this point of complete system integration is their first chance to verify the specification itself, in a process called validation. Until then, all verification is performed to ensure that the implementation matches the specification, rather than verifying the specification itself.

The shortcomings of today's verification severely strain the classic V methodology. They often lead to late-stage verification, which becomes the bottleneck in design for several reasons. First, late-stage verification requires huge amounts of re-verification because verifying integrated blocks requires the re-execution of each block's functionality. This becomes an issue when verifying the complete system.

Second, when bugs and specification problems are identified later in the design flow, it becomes harder to resolve and fix them. This means that each verification iteration becomes longer and costlier as we progress up the V.

Finding specification problems at the end of the design cycle spells a missed window in the product release cycle. Even in the best situations, it leads to scheduling problems and makes it difficult to determine when verification is adequate.

Verification crisis
The rising importance of functional verification stems from the increase in design size and complexity, including the increasing proportion of software and analog components in the design mix. Increased size refers to the enormous number of transistors and gates on an SoC. Today's SoCs already consist of tens of millions of gates, raising the potential for errors and complicating the verification task.

Increasing complexity means more variety and more of it on a single chip. The variety of components includes high-performance CPUs, multigigabit I/Os, embedded RAMs, system clock management, analog/mixed-signal, embedded software and dedicated DSPs.

Thus, the interfaces between these components have become increasingly significant to overall functionality and performance. The increased presence of on-chip software and analog devices contributes to system complexity, and also challenges traditional design methodologies.

Figure 3. Studies show that SoC interfaces have significantly contributed to chip failures.

Digital engineers must also confront unfamiliar analog issues. Many hardware designs require the firmware or low-level software to be present and functional to verify RTL functionality. This requires that the firmware designers play an important role in hardware design and account for the detailed interplay of hardware and software.

Studies by Collett International Research Inc. from 2001 and 2003 show the growing impact of these interfaces on SoC integration (Figure 3, above). In the 2001 study, 47 percent of all failures were related to logical or functional errors, with mixed-signal interfaces contributing a small number of additional errors.

In the 2003 data, the factors contributing to chip failures were logical and functional failures (67 percent), analog flaws (35 percent) and HW/SW interfaces (13 percent), with mixed-signal interface errors increasing from 4 percent to 21 percent.

This data also points to a problem with the integration and verification strategy in the V diagram, as more of the problems are turning up in the interfaces between the blocks.

To deal with some of these design complexity issues, many companies have turned to reuse and to the incorporation of third-party intellectual property (IP). Current research shows that this now accounts for over 50 percent of a design or testbench.

To address these issues, we must change the way we do things: we can either do it better, or do it differently. The first looks at the tooling or the efficiency of the existing process, while the second changes the methodology for greater effectiveness.

Neither approach is right or wrong, but it is more effective to combine the elements of both into verification methodologies when the timing is appropriate. To do it better, we must look at the tools and how they interact.

We need tools that span the verification domains of simulation, emulation, hardware, software, analog and digital. Moreover, they must support all standard and emerging design languages, including VHDL, Verilog, PSL, SystemC and SystemVerilog. This is, in part, what we call scalable verification.

Figure 4. A scalable solution consists of various methodologies and tools covering verification completeness and visibility.

To do it differently, we must examine the process and apply verification earlier in the process. This may involve the creation of system-level testbenches, transaction-level modeling and the examination of system interfaces as they are created. This requires tools that span the gaps between levels of abstraction and between each of the system domains, such as hardware and software.

Scalability across tools
The required solution must comprise a suite of tools that work together to form a complete path from HDL simulation to in-circuit emulation. This means that better simulators and emulators speed up verification at all levels of integration.

Scalability across tools is necessary because various types of verification provide different solutions at different performance ranges. Each solution involves a trade-off between different attributes such as iteration time, performance, capacity, debug visibility and cost. Even HDL execution engines require various solutions.

Some perform better at the block level, others at the chip or system level. For example, designers who want to verify architectural decisions regarding their system will not use an HDL software simulator, but an abstract model or a transaction-level HW/SW environment that can provide the necessary information.

Conversely, in-circuit emulation will be inappropriate for verifying relatively small sub-blocks of a chip design if an HDL software simulator can accomplish the task. Identifying which tools are optimal for the given verification task, and having them available, will enhance design productivity. The following are the available technologies for scalable verification:

1) Software simulation is suitable for block-level verification because of its fast turnaround time and debug capabilities.

2) HW/SW co-simulation enables embedded software to be brought into the verification process and helps accelerate the processor, memory and bus operations. It can also be used as a testbench for hardware verification.

3) Testbench acceleration overcomes the performance limitations of co-simulation by incrementally migrating verification to higher levels of performance. Support of increased reuse through transaction-based methods and high-level verification languages creates a more productive testbench methodology.

4) Emulation (in-circuit) provides high-capacity and high-performance verification within the real system. Emulation assures designers that their chip will function in an actual system.

5) Formal equivalence checking has the capacity and speed to ensure that modifications made late in the design flow do not change the intended chip behavior.

6) Analog/mixed-signal simulation enables a multitude of domains to be verified, ensuring high accuracy and performance. Moreover, note that high-performance, hardware-assisted or hardware-oriented solutions are critical to achieve verification completeness in system-level environments.

Aside from the ability to move between tools, it is important to maximize their productivity. This allows the verification process to stay within a single environment until there is an absolute need to move to another solution.

Scalability within tools can be demonstrated in different ways. For example, in regression testing, numerous tests might need to be run frequently.

Most companies want this done overnight so that problems are discovered and resolved in the morning, before other work begins. It is unlikely that a single simulator can provide enough performance to accomplish this large task in a reasonable time.

A simulation farm, which allows many jobs to be queued and executed on any available machine, makes regression testing both easier and more feasible. If very long runs are included in the regression suite, then conducting emulation may be necessary.

A single emulator is scalable in itself because its capacity can be adjusted to accommodate various design sizes, provided the gate count fits within the emulator family's maximum capacity limitations. If necessary, capacity can be extended by connecting more than one emulator together.

Another example is formal equivalence checking. Equivalence-checking tools reduce the time or frequency of a regression run. However, these tools must be constructed to be memory-efficient, and to enable full-chip verification and regression.

Relying on physical memory in a workstation to solve this is not an option. At the same time, equivalence checking must scale with the complexity of the designs, and when more processing power is required, multiple machines can be instructed to work together for quicker regression runtime.

Another aspect of scalability within tools is particularly important in emulation. An emulator's performance is fairly constant with design size. However, if a connection to a logic simulator is required, as in the case of behavioral testbenches, then performance will quickly degrade to more traditional simulator speeds.

To solve this, solutions that permit more of the design and/or testbench to be mapped onto the emulator must be constructed.

This range of solutions must also include high-speed, transaction-level interfaces to ensure efficient connection to the parts that must remain on a workstation. This requires advanced synthesis techniques that are not limited by the normal requirements of the design flow, but are built to provide good results for emulators.

Across levels of abstraction
Over time, it will be essential to move some aspects of functional verification to the initial phases of the design process. Doing so has several advantages. Models at this stage are faster to write, have higher throughput and can constructively influence design decisions.

Working at this higher level of abstraction also improves the reusability of the testbenches. With complex SoCs, it is too time-consuming and difficult to do everything at the RTL or gate level.

There comes a point where more abstract representations of the design become absolutely necessary. This is not just for the design but also for the testbench.

The creation of these high-level prototypes in languages such as C, C++ and SystemC allows immediate verification of the architectural or partitioning decisions being made. Even traditional HDLs such as Verilog, VHDL or SystemVerilog can be used effectively at levels above the RTL.
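To make this concrete, the fragment below is a minimal sketch in SystemVerilog of what such an above-RTL model might look like: a memory subsystem described purely in terms of transactions, with no pins or cycle timing. The class and member names are illustrative assumptions, not taken from any particular flow.

// Hypothetical transaction-level model of a simple memory subsystem.
// It captures intended behavior (reads return the last value written)
// without any pin-level or cycle-accurate detail.

class mem_txn;
  rand bit        is_write;
  rand bit [31:0] addr;
  rand bit [31:0] data;
endclass

class mem_model;
  // Sparse storage keyed by address: behavior only, no timing.
  bit [31:0] storage [bit [31:0]];

  // Execute one transaction: update storage on a write, return data on a read.
  function void execute(mem_txn t);
    if (t.is_write)
      storage[t.addr] = t.data;
    else
      t.data = storage.exists(t.addr) ? storage[t.addr] : '0;
  endfunction
endclass

A system-level test can push very large numbers of such transactions through a model like this long before a cycle-accurate memory controller exists, which is where architectural and partitioning decisions get checked.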

System-level tests can be created and used to verify these abstract models. As the system is divided into hardware and software blocks, or the design hierarchy is refined, verification tools can help with the interfaces between them.

This allows each of the blocks to progress through time without waiting for all the blocks to reach the same level of abstraction before conducting verification.

For a multilevel abstraction strategy to work, it must combine both technology and intellectual property (IP). The models that enable designers to both switch between levels of abstraction and tie the levels together are essential.

Hierarchical verification is achieved using a set of transactors for the key interfaces of a design. This allows for a mixing of design descriptions at various levels of abstraction.

The transactors are assembled as a testbench or an environment to check if an implementation matches a higher-level model. An advantage of this strategy is that it does not require all the models to exist at a single level of abstraction. This flexibility allows the team to mix and match whatever is available at a given time and provide the necessary level of resolution relative to execution time.
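As a hedged illustration of that checking style, the sketch below reuses the mem_txn and mem_model classes from the previous fragment as the higher-level reference and compares each transaction observed on the RTL interface against it. The monitor that turns pin activity into mem_txn objects is assumed and not shown.

// Hypothetical checker: the abstract model acts as the reference for
// transactions observed on an assumed RTL memory interface.

class mem_checker;
  mem_model ref_model = new();

  // Called by an assumed monitor each time a bus transaction completes.
  function void check(mem_txn observed);
    mem_txn expected = new();
    expected.is_write = observed.is_write;
    expected.addr     = observed.addr;
    expected.data     = observed.data;
    ref_model.execute(expected);   // updates storage on writes, fills data on reads
    if (!observed.is_write && expected.data !== observed.data)
      $error("Read mismatch at addr %h: RTL=%h reference=%h",
             observed.addr, observed.data, expected.data);
  endfunction
endclass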

Transaction-based interfaces can link abstract system models to the design, providing an ideal system-level testbench. For example, using transaction-based simulation, a team can define a system at a high level of abstraction.

They then take single levels, or single blocks, within that high-level system definition and, using the IP required for the transaction to work, substitute a more detailed implementation model in their place.

They can run the model on the system as an instant testbench. The team immediately gets real use out of the existing testbenches, with natural stimulus being provided to the block. The result is higher verification productivity and higher confidence in the design.

Figure 5. Engineers create stimuli that they feed into an execution engine so they can analyze the response produced.

To support a scalable verification solution, debug tools must be integrated, consistent across levels of abstraction and consistent across scalability tools.

The goal is to improve the speed at which bugs are identified, the cause tracked down and the problem fixed, thus minimizing feedback time and iteration loops. Today, over 50 percent of the time of both the design and verification teams is taken up with debug, and so improvements in this area promise a significant impact on time-to-market.

At the system level, debug is made more complex by mixed levels of abstraction and by the differing semantics that exist within a system. This becomes even more challenging within a heterogeneous environment, such as hardware and software or digital and analog.

Thus, information must be made available in the correct semantic context and at the required level of abstraction.

For example, when debugging software, all of the information about the software program execution is contained within the memory of the hardware, and yet none of it is readily available. Identifying the location of the variable is just the start of the solution.

The memory chip information and its relative address within the chip, assuming it is not in a cache or a register, must also be determined. Moreover, in many cases, the data is not in a logical order within the chip because of data or address interleaving.

New debug methodologies
To address some of these challenges, new debug methodologies are being introduced, such as assertions and checkers. Another area of consideration is coverage. Many engineers don't realize that satisfying code coverage metrics does not mean that the system has been adequately verified.

Additional metrics, such as functional or assertion coverage, must also be used to ensure that the design is fully verified. Most engineers today create stimuli that they feed into an execution engine so they can analyze the response produced (Figure 5, above).
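Functional coverage of this kind can be captured directly in SystemVerilog. The covergroup below is a small sketch for an assumed bus interface; the signal names and bins are hypothetical, but they show how a scenario that code coverage alone would never flag, such as a long write burst, becomes an explicit, measurable goal.

// Hypothetical functional coverage for a simple bus interface.
interface bus_if (input logic clk);
  logic       cmd_valid;
  logic       cmd_write;      // 0 = read, 1 = write
  logic [4:0] burst_len;      // 1..16 beats

  // Sample only when a command is actually presented.
  covergroup bus_cov @(posedge clk iff cmd_valid);
    cp_kind      : coverpoint cmd_write { bins read = {0}; bins write = {1}; }
    cp_burst     : coverpoint burst_len { bins single  = {1};
                                          bins burst[] = {[2:16]}; }
    kind_x_burst : cross cp_kind, cp_burst;   // every command/length pair
  endgroup

  bus_cov cov = new();   // instantiate so sampling actually happens
endinterface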

In many cases, they compare the waveforms of one implementation of the design against a golden model, looking for differences. This is a tedious, hit-and-miss way to debug and leads to many missed mistakes. It is too easy to concentrate on the given problem, missing the fact that something else went wrong or that the current testbench did not reveal the new problem.

Designers must get away from the tedious, repetitive, blind-alley nature of most current debugging methodologies. In the later stages of the design process, equivalence checking can be a very powerful tool. Equivalence checking tests implementations against a golden model using formal methods, rather than comparing two sets of waveforms through simulation.

New useful testbench components
Recently, some additional testbench components (Figure 6, below) have matured to the point of usefulness, such as generators, predictors and checkers. These allow test scenarios to be automatically generated and the corresponding results checked for legal behavior.

Figure 6. In traditional testbenches, the problem must be propagated to the output and must be detected.

Checkers are the most mature of these, and of course, checkers are assertions. Two types of assertions exist: test-dependent and test-independent. Test-independent assertions are easily inserted into an existing verification methodology without requiring additional tool support, while test-dependent assertions coupled with generators require additional tools and methodology changes.
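A test-independent assertion can be packaged, for example, as a small checker module and attached to an existing block with SystemVerilog's bind construct, so neither the testbench nor the design source needs to change. The sketch below assumes a simple request/acknowledge handshake; the module, port and instance names are hypothetical.

// Hypothetical test-independent checker for a request/acknowledge handshake.
module handshake_checker (input logic clk, rst_n, req, ack);
  // Once raised, the request must stay asserted until it is acknowledged.
  a_req_held : assert property (@(posedge clk) disable iff (!rst_n)
      req && !ack |=> req)
    else $error("req dropped before ack");

  // An acknowledge must arrive within 8 cycles of the request rising.
  a_ack_latency : assert property (@(posedge clk) disable iff (!rst_n)
      $rose(req) |-> ##[1:8] ack)
    else $error("ack not seen within 8 cycles of req");
endmodule

// Attach the checker to an existing design unit without editing its source.
// 'dma_engine' and its port names are hypothetical.
bind dma_engine handshake_checker u_hs_chk (.clk(clk), .rst_n(rst_n),
                                            .req(req), .ack(ack));

Because the checker travels with the design instance, it fires in every test that exercises the block, which is precisely what makes it test-independent.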

It does not end there, because several testbench components are not yet well-defined today, such as functional coverage, test plans and verification management.

Though the completion of this testbench transformation is still some years off, once completed, an executable specification will be realized. But it is not how the industry first predicted it to be. It will be used to automate not the design flow, but the verification flow.

Figure 7. Assertions bring the point of detection closer to the point of problem injection so that it is not necessary to propagate all effects to primary outputs.

Using assertions
A testbench is constrained by two independent factors: controllability and observability.

Controllability is the ability of a testbench to activate a problem in the design by the injection of a stimulus. This is closely related to code coverage metrics. Thus, care must be taken when using code coverage because it does not consider other aspects of the verification problem.

In terms of observability, once the problem has been exercised, two things must happen. First, an undesired effect of the problem must be propagated to a primary output. Then, the difference between the desired and undesired effects must be detected.

For most testbenches, the number of primary outputs being verified is very small, so many problems are never noticed. In addition, many undesired effects are masked from the primary outputs for much of the verification process. Ensuring their propagation to the primary outputs can excessively prolong test cases.

This explains why assertions (Figure 7, above) are so powerful. Assertions positively affect observability, providing several benefits. They can identify the primary causes of what went wrong, rather than secondary or tertiary causes, making debug much easier and faster.

This is because they can be scattered throughout the design, creating virtual primary outputs, which automatically check for good or bad behavior. As a result, the testbench does not have to propagate those fault effects all the way to the actual primary outputs, thus simplifying the development of testbenches. Moreover, huge amounts of data are verified.

Assertions also perform data checking, making testbenches more effective. Once an assertion has been designed and put into a design, it is always operating. In many cases, the assertions are checking things that are not the primary reason for the test, and thus they find unexpected problems.

For example, an assertion injected at the module verification stage will continue to perform its checks throughout the integration phase and into system-level verification, thus providing much better verification coverage.

Assertions also broaden the scope of each test. Engineers who use assertion-based verification techniques often find that their bug-detection rates early on are much higher than when not using assertions.

This offsets the overhead involved in writing and placing assertions: about a 3 percent time overhead and a 10 percent runtime overhead. Companies using assertions claim that a large percentage of their total bugs were found by the assertions and that their debug time was reduced by up to 80 percent.

Assertions can be built into the design, or they can be specified independently of the design and attached to various points in the design. Whether they are internal or external depends partly on who is creating the assertion, such as the designer or an independent verification engineer.

When embedded in the design, they primarily verify an implementation of a specification. When developed externally, they validate the interpretation of a specification, or sometimes the specification itself.

Because embedded assertions are in effect executable comments, they can be placed anywhere a comment may be placed. The advantage is that the comment is significantly more worthwhile now because it does something active.

This includes comments that describe intended behavior, assumptions that the designer has made, or constraints on its intended usage. This supports reuse by providing all kinds of information about the expected behavior of the design as well as the intentions of the original designer. All third-party IP should come with at least interface and usage assertions built in.

Simulating assertions
Currently, the primary interest in assertions is about how to simulate them, but this isn't all that can be done with assertions. Assertions are based on something more fundamental called properties.

Properties can be used for assertions, functional coverage metrics, formal checkers and constraint generators for pseudorandom stimulus generation. Properties can be used by both simulators and formal analysis tools, initiating the merger of static and dynamic verification techniques into a single methodology.
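As a brief sketch of that reuse, the fragment below defines one SystemVerilog property and applies it twice, once as an assertion and once as a coverage point, alongside a class constraint that pushes pseudorandom stimulus toward the same behavior. Signal and class names are assumptions for illustration.

// One property, reused twice: checked as an assertion and counted as coverage.
module arb_props (input logic clk, rst_n, req, grant);
  // A request must be granted within one to four cycles.
  property p_grant_follows_req;
    @(posedge clk) disable iff (!rst_n) req |-> ##[1:4] grant;
  endproperty

  a_grant : assert property (p_grant_follows_req)
    else $error("grant not seen within 4 cycles of req");
  c_grant : cover property (p_grant_follows_req);   // how often was the scenario exercised?
endmodule

// The same intent, expressed as a constraint, steers pseudorandom stimulus
// toward legal but varied grant latencies. Names are hypothetical.
class arb_cfg;
  rand int unsigned grant_latency;
  constraint c_latency { grant_latency inside {[1:4]}; }
endclass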

With the advent of standards in this area, a rapid growth in tools that use properties can be expected over the next few years.

Conclusion
Design teams must improve existing methodologies in two primary ways. They should adopt tools that scale across design complexity, and they should use multiple levels of abstraction.

A scalable solution enables engineers to do what they do today better, faster and more often, within the same time frame. It makes the verification tools more user-friendly and enables more vectors to be pushed through a design.

Any effective system verification strategy must begin with the premise that the system is the entire system, and includes things other than digital hardware. In other words, a meaningful solution must address analog and provide solutions for software, RTOS awareness and the environment in which these things must operate, tied together into a unified solution.

Moreover, design-for-verification techniques enable more effective reuse of the verification components created. They also enable early verification of the system architecture and of the interfaces between design blocks, as they are created, rather than at the end of the process.

This ensures that the block verification performed is not wasted, as specification interpretation is verified much earlier than before.

New testbench components are making this possible.
