Hardware/Software Verification enters the "atomic age": Part 2
Low-Level Interface Semantics Compound Verification Efforts

SystemC, SystemVerilog, Verilog, and VHDL module interfaces don't have these properties. Concurrency management of shared resources, both within module behavior and across module interfaces, is manual and low-level.
Because of this, even if you focus substantial verification effort on a lower-level block, that validation work applies only under the conditions where the block is being used properly -- that is, where all of the written, ad hoc interface contracts have been correctly observed.
These conditions cannot be guaranteed for any future instantiation of the same block. Instead, only a fraction of a block's functionality can be guaranteed, because a block's behavior depends on its internal logic as well as on context-sensitive external logic.
The FIFO block is a great illustration of this, as it is a small, pervasive, and easily understood block. Many pre-designed and pre-validated FIFOs are available from existing libraries and IP vendors. But what are the real benefits of having this work completed?
Sure, the design work is done, but what about its functionality? Every time someone uses one of these FIFOs, they might misuse it, even though the IP has been pre-validated. For example, some FIFOs allow simultaneous enqueuing and dequeuing when the FIFO is full, but not when it is empty.
The only practical way for a designer using such a FIFO to learn usage details like this is by diligently reading the specification -- and then the designer has to design logic around the FIFO and get it right. With RTL semantics, proper functionality is not only about implementation but about use, which cannot be guaranteed even for a block that gets used over and over, like a FIFO.
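The contract described above can be sketched as a simple behavioral model. The Python below is a hypothetical illustration (the class name, method names, and error handling are assumptions, not any vendor's actual API): simultaneous enqueue and dequeue is legal when the FIFO is full, because the slot freed by the dequeue absorbs the enqueue, but any dequeue from an empty FIFO violates the contract.

```python
from collections import deque

class Fifo:
    """Behavioral model of a fixed-depth FIFO with the contract described
    in the text: simultaneous enqueue and dequeue is legal when FULL,
    but any dequeue when EMPTY violates the contract.
    (Hypothetical sketch for illustration, not a specific vendor's IP.)"""

    def __init__(self, depth):
        self.depth = depth
        self.items = deque()

    def full(self):
        return len(self.items) == self.depth

    def empty(self):
        return len(self.items) == 0

    def cycle(self, enq=None, deq=False):
        """Model one clock cycle, enforcing the interface contract."""
        if deq and self.empty():
            raise RuntimeError("contract violated: dequeue when EMPTY")
        if enq is not None and self.full() and not deq:
            raise RuntimeError("contract violated: enqueue when FULL")
        out = self.items.popleft() if deq else None   # dequeue first...
        if enq is not None:
            self.items.append(enq)                    # ...so a full FIFO can absorb the enqueue
        return out
```

Note that nothing about the block itself reveals this rule to the instantiating designer; it lives only in the specification, which is exactly the problem the article describes.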
If one of these FIFOs is used 1000 times in a design, only a SMALL percentage of the verification work related to the FIFO has been saved, as every instantiation could have the following types of errors (and each instantiation could have a different problem!):
* Mis-connected interfaces
* Enqueue when FULL
* Dequeue when EMPTY
* Simultaneous enqueue and dequeue when EMPTY, but okay when not EMPTY
* Grabbing the data on the wrong cycle
* Not maintaining the enqueue data on the right cycle
* If more than one block uses the FIFO, any of the above errors compounded by poor arbitration
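One way to see why each instantiation carries its own verification burden is to sketch the checks as a per-instance protocol monitor. The Python below is a hypothetical sketch (a single clock and simple occupancy tracking are assumed); it flags the first few error classes in the list above, while timing errors such as sampling data on the wrong cycle would need additional, cycle-accurate checks.

```python
class FifoMonitor:
    """Per-instance protocol checker: watches one FIFO instance's
    control signals each cycle and records contract violations.
    (Hypothetical sketch; signal names are assumptions.)"""

    def __init__(self, depth, name):
        self.depth = depth
        self.name = name
        self.count = 0          # tracked occupancy of the watched FIFO
        self.violations = []

    def check_cycle(self, enq, deq):
        """Check one clock cycle's enqueue/dequeue requests."""
        full = self.count == self.depth
        empty = self.count == 0
        if enq and full and not deq:
            self.violations.append(f"{self.name}: enqueue when FULL")
        if deq and empty:
            self.violations.append(f"{self.name}: dequeue when EMPTY")
        if enq and deq and empty:
            self.violations.append(
                f"{self.name}: simultaneous enqueue/dequeue when EMPTY")
        # update tracked occupancy, clamped to the legal range
        self.count = max(0, min(self.depth,
                                self.count + bool(enq) - bool(deq)))
```

With 1000 instances, 1000 such monitors would be needed, and each one still has to be driven into its corner cases through a different surrounding context.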
Compounding the problem, all of these types of errors are very hard to uncover and identify when a block like this is buried deep within another module. Imagine having one of these FIFOs deep within a design -- how do you get it into all the corner cases? Now imagine having 1000 of them.
These issues arise at every interface point in a design, because designers face them every time they use an IP block. For each use of every IP block, the designer must understand the port protocols assumed by the module designer (assuming they are fully documented).
Since each module's port behavior is designed from scratch by the module designer, the IP user has to cope with the different styles and conventions of different IP designers; understanding each IP block's port behavior is a completely fresh task. And even when the protocol is well understood, there is a lot of control logic that must be implemented properly.
The result is a major lack of scalability in verification. The designer of a module B that uses an IP module A must not only think about the functionality of B, which is the focus of his attention, but also about the ways in which B might violate the port-behavior contracts of A.
And verification tests must be devised to ensure that such violations never happen. This must be done for every instance of A: even when there are hundreds of instantiations, these verification checks must be devised and performed separately on each one.
Moreover, for the same corner case, each instance requires a uniquely designed test to drive it into that situation, because the context of each instance is unique. This is very hard to do at the system level.
Because so many errors happen at instantiation, verification teams must focus on system-level functionality and cannot spend much of their time making sure that lower-level blocks are properly implemented.
Put another way, there is little benefit or leverage in verifying the sub-components of a design, because how a sub-component is used has such a large influence on whether it works properly. IP re-use has suffered not just because of poor quality, but because properly instantiating and controlling a piece of IP is so hard.

