Dealing with SoC hardware/software design complexity through scalable verification methods
The problem with today's methodologies is that verification is subservient to design. Changing this requires a paradigm shift, especially in the design of complex electronic systems. Why?
Functional errors are the main cause of design respins, and functional verification, the process of finding those errors, creates the biggest bottleneck in the design flow. In general, verification constitutes at least 50 percent of all design activity. Yet verification technology is falling behind design and fabrication capability, widening the verification gap and limiting designer productivity and design possibilities.
Figure 1. The verification gap limits design potential and possibilities.
To close this gap, verification must become an integral part of the overall design methodology. The whole design and verification flow must be structured based on what's good for design engineers and verification engineers. This can impact design partitioning, block sizing and design rules that are taken for granted today.
Another challenge to successful system verification relates to testbenches. Note that an increase in design size exponentially increases verification complexity.
Simulation capability tracks with design size, but the complexity of the testbenches does not, partly due to the dramatic effect that size has on the observability and controllability of the design. This increases the number of required tests, which can take a long time to implement, and makes it harder to determine why things go wrong.
Current methods
Over the years, design and verification
methodologies have been developed to ease significant design
bottlenecks. Most focus on handling complexity and on dividing the
design process into manageable segments. To achieve this, most
companies have adopted some variant of the classic V diagram (Figure
2 below). At the base of the V, an engineer or a small group of engineers implements the smaller blocks.
Figure 2. The classic V diagram represents a divide-and-conquer approach to design and verification.
On the right side, the verification strategy begins with the verification of individual blocks. Once the problems found at this level have been resolved, the engineers integrate their assigned blocks with blocks from other designers to form larger subsystems. Verification is performed again at this point, and the process continues until all blocks have been integrated to form a complete system.
For many companies, this point of complete system integration is their first chance to verify the specification itself, in a process called validation. Until then, all verification is performed to ensure that the implementation matches the specification, rather than verifying the specification itself.
The shortcomings of today's verification severely strain the classic V methodology. They often lead to late-stage verification, which becomes the bottleneck in design for several reasons. First, late-stage verification requires huge amounts of re-verification because verifying integrated blocks requires the re-execution of each block's functionality. This becomes an issue when verifying the complete system.
Second, when bugs and specification problems are identified later in
the design flow, it becomes harder to resolve and fix them. This means
that each verification iteration becomes longer and costlier as we
progress up the V.
Finding specification problems at the end of the design cycle can mean missing the product release window. Even in the best situations, it leads to scheduling problems and makes it difficult to determine when verification is adequate.
Verification crisis
The rising importance of functional verification stems from the increase in design size and complexity, including the increasing proportion of software and analog components in the design mix. Increased size refers to the enormous number of transistors and gates on an SoC. Today's SoCs already consist of tens of millions of gates, raising the potential for errors and complicating the verification task.
Increasing complexity means more variety and more of it on a single chip. The variety of components includes high-performance CPUs, multigigabit I/Os, embedded RAMs, system clock management, analog/mixed signal, embedded software and dedicated DSPs.
Thus, the interfaces between these components have become increasingly significant to overall functionality and performance. The increased presence of on-chip software and analog devices contributes to system complexity, and also challenges traditional design methodologies.
Figure 3. Studies show that SoC interfaces have significantly contributed to chip failures.
Digital engineers must also confront unfamiliar analog issues. Many hardware designs require the firmware or low-level software to be present and functional to verify RTL functionality. This requires that the firmware designers play an important role in hardware design and account for the detailed interplay of hardware and software.
Studies by Collett International Research Inc. from 2001 and 2003
show the growing impact of these interfaces on SoC integration (Figure
3, above). In a 2001 study, 47 percent of all failures were
related to logical
or functional errors, with mixed-signal interfaces contributing a small
number of additional errors.
In the 2003 study, the factors contributing to chip failures were logical and functional failures (67 percent), analog flaws (35 percent) and HW/SW interfaces (13 percent), with mixed-signal interface errors increasing from 4 percent to 21 percent.
This data also points to a problem with the integration and verification strategy in the V diagram, as more of the problems are turning up in the interfaces between the blocks.
To deal with some of these design complexity issues, many companies have turned to design reuse and third-party intellectual property (IP). Current research shows that reused and third-party content now accounts for over 50 percent of a design or testbench.
To address these issues, we must change the way we do things: we can either do it better, or do it differently. The first looks at the tooling or the efficiency of the existing process while the second changes the methodology for greater effectiveness.
Neither approach is right or wrong, but it is more effective to combine the elements of both into verification methodologies when the timing is appropriate. To do it better, we must look at the tools and how they interact.
We need tools that span the verification domains of simulation, emulation, hardware, software, analog and digital. Moreover, they must support all standard and emerging design languages, including VHDL, Verilog, PSL, C, SystemC and SystemVerilog. This is, in part, what we call scalable verification.
Figure 4. A scalable solution consists of various methodologies and tools covering verification completeness and visibility.
To do it differently, we must examine the process and apply verification earlier in the process. This may involve the creation of system-level testbenches, transaction-level modeling and the examination of system interfaces as they are created. This requires tools that span the gaps between levels of abstraction and between each of the system domains such as hardware and software.
Scalability across tools
The required solution must comprise a suite
of tools that work together to form a complete path from HDL simulation
to in-circuit emulation. This means that better simulators and
emulators speed up verification at all levels of integration.
Scalability across tools is necessary because various types of verification provide different solutions at different performance ranges. Each solution involves a trade-off between different attributes such as iteration time, performance, capacity, debug visibility and cost. Even HDL execution engines require various solutions.
Some perform better at the block level, others at the chip or system level. For example, designers who want to verify architectural decisions regarding their system will not use an HDL software simulator, but an abstract model or a transaction-level HW/SW environment that can provide the necessary information.
Conversely, in-circuit emulation will be inappropriate for verifying relatively small sub-blocks of a chip design if an HDL software simulator can accomplish the task. Identifying which tools are optimal for the given verification task and having them available will enhance design productivity. The following are the available technologies for scalable verification:
1) Software simulation is suitable for block-level verification because of its fast turnaround time and debug capabilities.
2) HW/SW co-simulation enables embedded software to be brought into the verification process and helps accelerate the processor, memory and bus operations. It can also be used as a testbench for hardware verification.
3) Testbench acceleration overcomes the performance limitations of co-simulation by incrementally migrating verification to higher levels of performance. Support of increased reuse through transaction-based methods and high-level verification languages creates a more productive testbench methodology.
4) Emulation (in-circuit) provides high-capacity and high-performance verification within the real system. Emulation assures designers that their chip will function in an actual system.
5) Formal equivalence checking has the capacity and speed to ensure that modifications made late in the design flow do not change the intended chip behavior.
6) Analog/mixed-signal simulation enables a multitude of domains to be verified, ensuring high accuracy and performance. Moreover, note that high-performance, hardware-assisted or hardware-oriented solutions are critical to achieve verification completeness in system-level environments.
Aside from the ability to move between tools, it is important to maximize their productivity. This allows the verification process to stay within a single environment until there is an absolute need to move to another solution.
Scalability within tools can be demonstrated in different ways. For example, in regression testing, numerous tests might need to be run frequently.
Most companies want this done overnight so that problems are discovered and resolved in the morning, before other work begins. It is unlikely that a single simulator can provide enough performance to accomplish this large task in a reasonable time.
A simulation farm, which allows many jobs to be queued and executed on any available machine, makes regression testing both easier and more feasible. If very long runs are included in the regression suite, then conducting emulation may be necessary.
A single emulator is scalable in itself because its capacity can be adjusted to accommodate various design sizes, provided the gate count fits within the emulator family's maximum capacity. If necessary, capacity can be extended by connecting more than one emulator together.
Another example is formal equivalence checking. Equivalence-checking tools reduce the time or frequency of a regression run. However, these tools must be constructed to be memory-efficient, and to enable full-chip verification and regression.
Relying on the physical memory of a single workstation to solve this is not an option. At the same time, equivalence checking must scale with the complexity of the designs, and when more processing power is required, multiple machines can be made to work together to shorten regression runtimes.
Another aspect of scalability within tools is particularly important in emulation. An emulator's performance is fairly constant with design size. However, if a connection to a logic simulator is required, as in the case of behavioral testbenches, then performance will quickly degrade to more traditional simulator speeds.
To solve this, solutions that permit more of the design and/or testbench to be mapped onto the emulator must be constructed.
This range of solutions must also include high-speed, transaction-level interfaces to ensure efficient connection to the parts that must remain on a workstation. This requires advanced synthesis techniques that are not limited by the normal requirements of the design flow but are built to provide good results for emulators.
Across levels of abstraction
Over time, it will be essential to move some aspects of functional verification to the initial phases of the design process. Doing so has several advantages. Models at this stage are faster to write, have higher throughput and can constructively influence design decisions.
Working at this higher level of abstraction also improves the reusability of the testbenches. With complex SoCs, it is too time-consuming and difficult to do everything at the RTL or gate level.
There comes a point where more abstract representations of the design become absolutely necessary. This is not just for the design but also for the testbench.
The creation of these high-level prototypes in languages such as C, C++ and SystemC allows immediate verification of the architectural or partitioning decisions being made. Even traditional HDLs such as Verilog, VHDL or SystemVerilog can be used effectively at levels above the RTL.
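As a loose illustration of what such a model might look like, the following sketch uses SystemVerilog above the RTL level; the names (pkt_t, router_tlm) and the four-port router are hypothetical, not taken from this article.

```systemverilog
// Hypothetical sketch: a transaction-level model of a simple four-port
// packet router, written in SystemVerilog with no clocks, pins or pipelines.
typedef struct {
  bit [7:0]  dest;     // destination port id
  bit [31:0] payload;  // packet payload
} pkt_t;

class router_tlm;
  // One mailbox per output port models the routing fabric abstractly.
  mailbox #(pkt_t) out_port[4];

  function new();
    foreach (out_port[i]) out_port[i] = new();
  endfunction

  // Route a packet purely by behavior; architectural questions such as
  // port balance can be explored before any RTL exists.
  task route(pkt_t p);
    out_port[p.dest % 4].put(p);
  endtask
endclass
```

Because the model carries no timing detail, it executes quickly, and it can later be replaced by an RTL implementation verified against it.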
System-level tests can be created and used to verify these abstract models. As the system is divided into hardware and software blocks, or the design hierarchy is refined, verification tools can help with the interfaces between them.
This allows each of the blocks to progress through time without waiting for all the blocks to reach the same level of abstraction before conducting verification.
For a multilevel abstraction strategy to work, it must combine both technology and intellectual property (IP). The models that enable designers to both switch between levels of abstraction and tie the levels together are essential.
Hierarchical verification is achieved using a set of transactors for the key interfaces of a design. This allows for a mixing of design descriptions at various levels of abstraction.
The transactors are assembled as a testbench or an environment to check if an implementation matches a higher-level model. An advantage of this strategy is that it does not require all the models to exist at a single level of abstraction. This flexibility allows the team to mix and match whatever is available at a given time and provide the necessary level of resolution relative to execution time.
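As a hedged sketch of such a transactor, the SystemVerilog fragment below drives an abstract packet onto a hypothetical pin-level bus; the interface and signal names are illustrative only.

```systemverilog
// Illustrative transactor (bus-functional model): it converts an abstract
// transaction into pin-level activity on a hypothetical bus, allowing a
// high-level model to talk to an RTL block.
interface simple_bus_if (input logic clk);
  logic        valid;
  logic [7:0]  dest;
  logic [31:0] data;
endinterface

class pkt_driver;
  virtual simple_bus_if bus;

  function new(virtual simple_bus_if bus);
    this.bus = bus;
  endfunction

  // Drive one transaction as a single-cycle pin-level transfer.
  task drive(bit [7:0] dest, bit [31:0] payload);
    @(posedge bus.clk);
    bus.valid <= 1'b1;
    bus.dest  <= dest;
    bus.data  <= payload;
    @(posedge bus.clk);
    bus.valid <= 1'b0;
  endtask
endclass
```

A matching monitor transactor on the other side turns pin activity back into transactions, so a higher-level model and an RTL implementation can be compared directly.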
Transaction-based interfaces can link abstract system models to the design, providing an ideal system-level testbench. For example, using transaction-based simulation, a team can define a system at a high level of abstraction.
They then take single levels, or single blocks, within that high-level system definition and - using the IP required for the transaction to work - substitute them into a more detailed implementation model.
They can then run the system model as an instant testbench for that block. The team immediately gets real use out of the existing testbenches, which provide natural stimulus to the block. The result is higher verification productivity and higher confidence in the design.
Figure 5. Engineers create stimuli that they feed into an execution engine so they can analyze the response produced.
To support a scalable verification solution, debug tools must be integrated, consistent across levels of abstraction and consistent across scalability tools.
The goal is to improve the speed at which bugs are identified, the cause tracked down and the problem fixed, thus minimizing feedback time and iteration loops. Today, over 50 percent of the time of both the design and verification teams is taken up with debug, and so improvements in this area promise a significant impact on time-to-market.
At the system level, debug is made more complex by mixed levels of abstraction and by the differing semantics that exist within a system. This becomes even more challenging within a heterogeneous environment, such as hardware and software or digital and analog.
Thus, information must be made available in the correct semantic context and at the required level of abstraction.
For example, when debugging software, all of the information about the software program execution is contained within the memory of the hardware, and yet none of it is readily available. Identifying the location of the variable is just the start of the solution.
The memory chip information and its relative address within the chip, assuming it is not in a cache or a register, must also be determined. Furthermore, in many cases the data is not in a logical order within the chip because of data or address interleaving.
New debug methodologies
To address some of these challenges, new debug methodologies are being
introduced such as assertions and checkers. Another area of
consideration is coverage. Many engineers don't realize that satisfying
code coverage metrics does not mean that the system has been adequately
verified.
Additional metrics, such as functional or assertion coverage, must also be used to ensure that the design is fully verified. Most engineers today create stimuli that they feed into an execution engine so they can analyze the response produced (Figure 5, above).
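As an illustration of the kind of functional coverage metric mentioned above, the following is a minimal SystemVerilog covergroup sketch; the packet fields and bin choices are purely hypothetical.

```systemverilog
// Minimal functional-coverage sketch: record which packet destinations and
// lengths the tests actually exercised -- information that line or code
// coverage alone cannot provide.
class pkt_coverage;
  bit [7:0] dest;
  bit [3:0] len;

  covergroup pkt_cg;
    cp_dest : coverpoint dest { bins port[4] = {[0:3]}; bins other = default; }
    cp_len  : coverpoint len;
    cross cp_dest, cp_len;   // which (destination, length) pairs were seen
  endgroup

  function new();
    pkt_cg = new();
  endfunction

  function void sample_pkt(bit [7:0] d, bit [3:0] l);
    dest = d;
    len  = l;
    pkt_cg.sample();
  endfunction
endclass
```

A report from such a covergroup shows which scenarios were never exercised, regardless of how many lines of code the tests happened to touch.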
In many cases, engineers compare the waveforms from one implementation of the design against those of a golden model, looking for differences. This is a tedious, hit-and-miss way to debug and leads to many missed mistakes. It is too easy to concentrate on the given problem, missing the fact that something else went wrong or that the current testbench did not reveal the new problem.
Designers must get away from the tedious, repetitive, blind-alley nature of most current debugging methodologies. In the later stages of the design process, equivalence checking can be a very powerful tool. Equivalence checking tests implementations against a golden model in a formal method, rather than comparing two sets of waveforms through simulation.
New useful testbench components
Recently, some additional testbench
components (Figure 6, below)
have matured to the point of usefulness, such as generators,
predictors and checkers. These allow test scenarios to be automatically
generated and the corresponding results checked for legal behavior.
Figure 6. In traditional testbenches, the problem must be propagated to the output and must be detected.
Checkers are the most mature of these, and of course, checkers are assertions. Two types of assertions exist: test-dependent and test-independent. Test-independent assertions are easily inserted into an existing verification methodology without requiring additional tool support while test-dependent assertions coupled with generators require additional tools and methodology changes.
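As a sketch of a test-independent assertion (the protocol, signal names and four-cycle bound below are assumptions for illustration, not from the article), an SVA checker might look like this:

```systemverilog
// Sketch of a test-independent checker: a property about the interface
// protocol itself (a request must be acknowledged within four cycles),
// so it holds for any stimulus the testbench happens to generate.
module req_ack_checker (input logic clk, rst_n, req, ack);
  property p_req_gets_ack;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] ack;
  endproperty

  a_req_gets_ack: assert property (p_req_gets_ack)
    else $error("req was not acknowledged within 4 cycles");
endmodule

// The checker can be attached to an existing design without modifying it,
// for example: bind some_dut req_ack_checker u_chk (.*);
```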
It does not end there because several testbench components are not yet well-defined today, such as functional coverage, test plans and verification management.
Though the completion of this testbench transformation is still some years off, once it is complete, an executable specification will be realized. But it will not be what the industry first predicted: it will be used to automate not the design flow, but the verification flow.
Figure 7. Assertions bring the point of detection closer to the point of problem injection so that it is not necessary to propagate all effects to primary outputs.
Using assertions
A testbench is constrained by two independent
factors: controllability and observability.
Controllability is the ability of a testbench to activate a problem in the design by injecting stimulus. This is closely related to code coverage metrics, so care must be taken when using code coverage, because it does not consider other aspects of the verification problem.
In terms of observability, once the problem has been exercised, two things must happen. First, an undesired effect of the problem must be propagated to a primary output. Then, the difference between the desired and undesired effects must be detected.
For most testbenches, the number of primary outputs being verified is very small, so many problems are never noticed. Moreover, many undesired effects are masked from the primary outputs for much of the verification process, and ensuring their propagation to the primary outputs can excessively prolong test cases.
This explains why assertions (Figure 7, above) are so powerful. Assertions positively affect observability, providing several benefits. They can identify the primary causes of what went wrong, rather than secondary or tertiary causes, making debug much easier and faster. This is because they can be scattered throughout the design, creating virtual primary outputs that automatically check for good or bad behavior. As a result, the testbench does not have to propagate fault effects all the way to the actual primary outputs, which simplifies testbench development. Moreover, huge amounts of data are verified.
Assertions also perform data checking, making testbenches more effective. Once an assertion has been designed and put into a design, it is always operating. In many cases, the assertions are checking things that are not the primary reason for the test, and thus they find unexpected problems.
For example, an assertion injected at the module verification stage will continue to perform its checks throughout the integration phase and into system-level verification, thus providing much better verification coverage.
Assertions also make the breadth of the test much broader. Engineers who use assertion-based verification techniques often find that their early bug-detection rates are much higher than when not using assertions.
This offsets the overhead involved in writing and placing assertions - about a 3 percent time overhead and a 10 percent runtime overhead. Companies using assertions claim that a large percentage of their total bugs were found by the assertions and that their debug time was reduced by up to 80 percent.
Assertions can be built into the design, or they can be specified independent of the design and attached to various points in the design. Whether they are internal or external is partly dependent on who is creating the assertion, such as the designer or an independent verification engineer.
When embedded in the design, they primarily verify an implementation of a specification. When developed externally, they validate the interpretation of a specification, or sometimes the specification itself.
Because embedded assertions are in effect executable comments, they can be placed anywhere a comment may be placed. The advantage is that the comment is significantly more worthwhile now because it does something active.
This includes comments that describe intended behavior, assumptions that the designer has made, or constraints on its intended usage. This supports reuse by providing all kinds of information about the expected behavior of the design as well as the intentions of the original designer. All third-party IP should come with at least interface and usage assertions built in.
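A minimal sketch of such embedded, comment-like assertions, assuming a hypothetical FIFO block (the signal names and the no-write-when-full and no-read-when-empty assumptions are illustrative):

```systemverilog
// Embedded assertions acting as executable comments inside a hypothetical
// FIFO: they document the designer's usage assumptions and check them on
// every cycle, for the block's author and for anyone who later reuses it.
module fifo8 #(parameter DEPTH = 8)
  (input  logic       clk, rst_n, wr_en, rd_en,
   input  logic [7:0] wdata,
   output logic [7:0] rdata,
   output logic       full, empty);

  // ... FIFO implementation omitted from this sketch ...

  // Usage assumption: the environment must never write a full FIFO.
  a_no_write_when_full: assert property
    (@(posedge clk) disable iff (!rst_n) full |-> !wr_en)
    else $error("write attempted while FIFO is full");

  // Usage assumption: the environment must never read an empty FIFO.
  a_no_read_when_empty: assert property
    (@(posedge clk) disable iff (!rst_n) empty |-> !rd_en)
    else $error("read attempted while FIFO is empty");
endmodule
```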
Simulating assertions
Currently, the primary interest in assertions is about how to simulate
them, but this isn't all that can be done with assertions. Assertions
are based on something more fundamental called properties.
Properties can be used for assertions, functional coverage metrics, formal checkers and constraint generators for pseudorandom stimulus generation. Properties can be used by both simulators and formal analysis tools, initiating the merger of both static and dynamic verification techniques into a single methodology.
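The sketch below shows one way a single property can serve several of these roles; the handshake protocol and names are assumed for illustration.

```systemverilog
// One underlying property (a req/gnt handshake) reused in several roles:
// checked by simulation or formal tools, measured as functional coverage,
// and optionally assumed as an environment constraint for a sub-block.
module handshake_props (input logic clk, rst_n, req, gnt);
  property p_handshake;
    @(posedge clk) disable iff (!rst_n) req |-> ##[1:3] gnt;
  endproperty

  // As an assertion: both dynamic and static tools can check it.
  a_handshake: assert property (p_handshake);

  // As functional coverage: confirm the behavior was actually exercised.
  c_handshake: cover property (p_handshake);

  // As a constraint: when verifying the grant-side block in isolation,
  // the same property could instead be assumed about its environment:
  // m_handshake: assume property (p_handshake);
endmodule
```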
With the advent of standards in this area, a rapid growth in tools that use properties can be expected over the next few years.
Conclusion
Design teams must improve existing methodologies in two primary ways.
They should adopt tools that scale across design complexity, and they
should use multiple levels of abstraction.
A scalable solution enables engineers to do what they do today better, faster and more often, within the same time frame. It makes the verification tools more user-friendly and enables more vectors to be pushed through a design.
Any effective system verification strategy must begin with the premise that the system is the entire system, and includes things other than digital hardware. In other words, a meaningful solution must address analog and provide solutions for software, RTOS awareness and the environment in which these things must operate, tied together into a unified solution.
Moreover, design-for-verification techniques enable more effective reuse of the verification components created. They also enable early verification of the system architecture and of the interfaces between design blocks as they are created, rather than at the end of the process.
This ensures that the block verification performed is not wasted, as specification interpretation is verified much earlier than before.
New testbench components are making their way into verification methodologies today, and the use of assertions can dramatically affect the quality and speed of verification.
Furthermore, a number of newer testbench components are emerging. All of these new components will be driven by properties. This is where the future lies, and that future is beginning to look very bright. This automated, properties-based verification approach will deliver the boost in performance necessary to narrow the verification gap.
Brian Bailey is currently an independent consultant (Brian Bailey Consulting). This article is an adaptation of a paper he wrote when he was chief technologist in the Design Verification and Test Division at Mentor Graphics Corp.