HW/SW co-verification basics: Part 4 - Co-verification metrics
Many metrics can be used to determine which of the co-verification methods discussed in Part 1, Part 2 and Part 3 is best for a particular project, including:

1) Performance (speed)
2) Accuracy
3) Synchronization
4) Type of software to be verified
5) Ability to do hardware debugging (visibility)
6) Ability to do performance analysis
7) Specific versus general-purpose solutions
8) Software-only (simulated hardware) versus hardware methods
9) Time to integrate software debug tools
10) Pre-silicon compared to post-silicon
11) Time to create and integrate models: bus interface, cache, peripherals, RTOS
Co-verification performance
It is common to see numbers thrown around for cycles/sec and instructions/sec related to co-verification. While some projects may indeed achieve very high performance using co-verification, it is difficult to predict the performance of a co-verification solution in advance.
Of course, every vendor will say that performance is "design dependent," but with a good understanding of co-verification methods it is possible to get a reasonable feel for what kind of performance can be achieved.
The general unpredictability is the result of two factors. First, many co-verification methods use a dual-process architecture to execute hardware and software, so overall speed is gated by how often the two processes must synchronize. Second, the size of the design, the level of detail of the simulation, and the performance of the hardware verification platform result in very different performance levels from project to project.
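To see why this makes prediction hard, a back-of-the-envelope calculation helps. The sketch below is purely illustrative: the ISS speed, logic-simulator speed and cycles-per-access figures are invented assumptions, not benchmarks. It assumes only memory-mapped hardware accesses are sent to the logic simulator, while everything else runs in a fast instruction-set simulator (ISS).

```cpp
#include <cstdio>
#include <initializer_list>

// Back-of-the-envelope model of dual-process co-verification throughput.
// All numbers are invented assumptions for illustration only.
int main() {
    const double iss_ips           = 10e6; // ISS speed: 10M instructions/sec (assumed)
    const double hw_sim_cps        = 1e3;  // logic simulator: 1k bus cycles/sec (assumed)
    const double cycles_per_access = 4.0;  // bus cycles per hardware access (assumed)

    // Sweep the fraction of instructions that actually reach the hardware side.
    for (double f : {1.0, 0.1, 0.01, 0.001}) {
        // Average time per instruction: (1 - f) of them stay in the ISS,
        // f of them cross over to the logic simulator.
        double t = (1.0 - f) / iss_ips + f * cycles_per_access / hw_sim_cps;
        std::printf("hw-bound fraction %.3f -> ~%.0f instructions/sec\n", f, 1.0 / t);
    }
    return 0;
}
```

With these assumed numbers, a workload that touches hardware on every instruction crawls at roughly the logic simulator's speed, while one access per thousand instructions runs about three orders of magnitude faster; the same tool can therefore look fast or slow depending entirely on the software being run.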
Co-verification accuracy
While performance issues are the number one objection to co-verification from software engineers, accuracy is the number one concern of hardware engineers. The key to successful hardware/software co-verification is the microprocessor model, so some common questions to think about when evaluating co-verification accuracy are listed below.
1) How is the model verified to guarantee it behaves identically to the device silicon?
Software models can be verified by using manufacturing test vectors from the microprocessor vendor or running a side-by-side comparison with the microprocessor RTL design database. Metrics such as code coverage can also provide information about software model testing.
Alternatively, not all co-verification techniques rely on separately developed models. Techniques based on RTL code for the CPU can eliminate this question altogether. Make sure the model comes with a documented verification plan. Anybody can make a model, but the effort required to make a good model should not be underestimated.
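As a concrete picture of the side-by-side comparison mentioned above, the sketch below runs two models in lock step and stops at the first divergence in architectural state. Everything here is hypothetical scaffolding: the CpuModel interface, the contents of CpuState and the ToyModel stand-in are invented for illustration and do not correspond to any vendor's API.

```cpp
#include <array>
#include <cstdint>
#include <cstdio>

// Hypothetical snapshot of architectural state; a real CPU has far more.
struct CpuState {
    uint32_t pc = 0;
    std::array<uint32_t, 16> regs{};
    bool operator==(const CpuState& o) const { return pc == o.pc && regs == o.regs; }
};

// Invented interface both sides implement; in practice one side is the
// software model and the other drives the RTL design through a simulator.
struct CpuModel {
    virtual void step() = 0;             // execute exactly one instruction
    virtual CpuState state() const = 0;  // architectural state after step()
    virtual ~CpuModel() = default;
};

// Toy stand-in so the sketch actually runs: pc advances, r0 accumulates.
struct ToyModel : CpuModel {
    CpuState s;
    void step() override { s.pc += 4; s.regs[0] += s.pc; }
    CpuState state() const override { return s; }
};

// Run both models in lock step and report the first divergence.
bool runLockStep(CpuModel& model, CpuModel& reference, long maxSteps) {
    for (long i = 0; i < maxSteps; ++i) {
        model.step();
        reference.step();
        if (!(model.state() == reference.state())) {
            std::printf("state mismatch after instruction %ld (pc=0x%08x)\n",
                        i, (unsigned)reference.state().pc);
            return false;
        }
    }
    return true;
}

int main() {
    ToyModel candidate, reference;
    std::printf("models agree: %s\n",
                runLockStep(candidate, reference, 1000) ? "yes" : "no");
}
```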
2) Does the model contain complete functionality including all peripherals?
Bus functional models were a feasible modeling approach before so many peripherals were integrated with the microprocessor. For chips with a high level of integration, it becomes very difficult to model all of the peripherals. Even if a device appears to have no integrated peripherals, look for things like cache controllers and write buffers.
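For reference, a bus functional model in its simplest form is just a translator from transaction-level read/write calls into pin-level bus activity, with no notion of the processor core or its peripherals. The sketch below uses an invented single-beat bus; the signal names and the one-cycle slave in main() are made up, and pipelining, bursting and other real-world features are deliberately omitted.

```cpp
#include <cstdint>
#include <cstdio>
#include <functional>
#include <map>

// Invented pin-level signals for a trivial single-beat bus (illustration only).
struct BusPins {
    bool     valid = false;  // master: command valid
    bool     write = false;  // master: 1 = write, 0 = read
    uint32_t addr  = 0;      // master: address
    uint32_t wdata = 0;      // master: write data
    bool     ready = false;  // slave: transfer accepted this cycle
    uint32_t rdata = 0;      // slave: read data
};

// Minimal bus functional model: converts transaction-level read()/write()
// calls into cycle-by-cycle pin activity. `tick` advances the simulated
// hardware one clock; in real co-verification it would step the logic simulator.
class SimpleBfm {
public:
    SimpleBfm(BusPins& pins, std::function<void()> tick)
        : pins_(pins), tick_(std::move(tick)) {}

    void write(uint32_t addr, uint32_t data) {
        pins_ = BusPins{true, true, addr, data, false, 0};
        do { tick_(); } while (!pins_.ready);  // hold the command until accepted
        pins_.valid = false;
    }

    uint32_t read(uint32_t addr) {
        pins_ = BusPins{true, false, addr, 0, false, 0};
        do { tick_(); } while (!pins_.ready);
        pins_.valid = false;
        return pins_.rdata;
    }

private:
    BusPins& pins_;
    std::function<void()> tick_;
};

int main() {
    BusPins pins;
    std::map<uint32_t, uint32_t> mem;  // stand-in for the simulated hardware
    SimpleBfm bfm(pins, [&] {          // fake one-cycle slave for the demo
        pins.ready = pins.valid;
        if (pins.valid && pins.write)  mem[pins.addr] = pins.wdata;
        if (pins.valid && !pins.write) pins.rdata = mem[pins.addr];
    });
    bfm.write(0x1000, 0xDEADBEEF);
    std::printf("read back: 0x%08X\n", (unsigned)bfm.read(0x1000));
}
```

Note how little of the processor this captures (no pipeline, no cache controller, no write buffer), which is exactly why highly integrated devices are so much harder to model fully.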
3) Is the model cycle accurate?
Do all parts of the model take into account the internal clock of the microprocessor? This includes things such as the microprocessor pipeline timing and the correlation of bus transaction times with instruction execution. This may or may not be necessary, depending on the goals of co-verification: a non-cycle-accurate model can run at higher speed and may be more suitable for software development.
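As a hedged illustration of the difference, the sketch below contrasts an instruction-accurate count with a cycle-accurate one that also charges for bus latency and pipeline stalls. The per-instruction costs, bus latency and stall penalty are invented numbers, not taken from any real core.

```cpp
#include <cstdio>

// Illustrative instruction record; all cycle figures are invented.
struct Instr {
    int  base_cycles;  // nominal issue cost in the pipeline
    bool mem_access;   // does the instruction reach the bus?
    bool stalls_pipe;  // e.g., branch flush or load-use hazard
};

// Instruction-accurate timing: every instruction counts as one step.
long instructionAccurate(int n) { return n; }

// Cycle-accurate timing: add assumed bus latency and pipeline stalls.
long cycleAccurate(const Instr* prog, int n) {
    const int bus_latency = 3;  // assumed bus cycles per memory access
    const int stall_cost  = 2;  // assumed cycles lost per pipeline stall
    long cycles = 0;
    for (int i = 0; i < n; ++i) {
        cycles += prog[i].base_cycles;
        if (prog[i].mem_access) cycles += bus_latency;
        if (prog[i].stalls_pipe) cycles += stall_cost;
    }
    return cycles;
}

int main() {
    const Instr prog[] = {
        {1, false, false},  // ALU operation
        {1, true,  true },  // load followed by a dependent use
        {1, false, true },  // mispredicted branch
        {1, true,  false},  // store absorbed by a write buffer
    };
    const int n = sizeof(prog) / sizeof(prog[0]);
    std::printf("instruction-accurate: %ld steps\n", instructionAccurate(n));
    std::printf("cycle-accurate:       %ld cycles\n", cycleAccurate(prog, n));
}
```

The instruction-accurate view reports 4 steps where the cycle-accurate view reports 14 cycles; the former is cheaper to evaluate and thus faster, but timing-sensitive software can behave differently under the two.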
4) Are all features of the bus protocol modeled?
Many microprocessors use complex bus protocols to improve performance. Techniques such as bus pipelining, bursting, out-of-order transaction completion, write posting and write reordering are usually a source of design errors. Simple read and write transfers are straightforward to model; it is these advanced protocol features that must be modeled accurately if co-verification is to catch the errors they cause.
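To make one of these features concrete, the sketch below models write posting: the processor's writes complete immediately into a buffer and drain to the bus later, so a correct model must forward pending write data to subsequent reads. The buffer design is invented for illustration and models no particular processor.

```cpp
#include <cstdint>
#include <cstdio>
#include <deque>
#include <map>

// Invented illustration of write posting: CPU writes are queued and drained
// to the bus later, so reads must snoop the buffer or risk returning stale data.
class PostedWriteBuffer {
public:
    void cpuWrite(uint32_t addr, uint32_t data) {
        queue_.push_back({addr, data});  // completes "instantly" for the CPU
    }

    uint32_t cpuRead(uint32_t addr) {
        // Forward the newest pending write to this address, if any.
        for (auto it = queue_.rbegin(); it != queue_.rend(); ++it)
            if (it->addr == addr) return it->data;
        return bus_[addr];               // otherwise read the actual hardware
    }

    void drainOne() {                    // one posted write retires per bus cycle
        if (queue_.empty()) return;
        bus_[queue_.front().addr] = queue_.front().data;
        queue_.pop_front();
    }

private:
    struct Entry { uint32_t addr, data; };
    std::deque<Entry> queue_;
    std::map<uint32_t, uint32_t> bus_;   // stand-in for the simulated hardware
};

int main() {
    PostedWriteBuffer b;
    b.cpuWrite(0x2000, 0x1234);
    // Read before the write drains: forwarding is what keeps this correct.
    std::printf("read 0x2000 -> 0x%04X (write still posted)\n",
                (unsigned)b.cpuRead(0x2000));
    b.drainOne();
    std::printf("read 0x2000 -> 0x%04X (write drained)\n",
                (unsigned)b.cpuRead(0x2000));
}
```

A model that omits the forwarding check in cpuRead() would still pass simple directed tests, which is precisely why these protocol features are such a common source of both design and model errors.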

