Hardware/Software Verification enters the "atomic age": Part 2 - Embedded.com

Hardware/Software Verification enters the “atomic age”: Part 2

Today's system designs are increasingly complex, especially in the dimension of concurrency, on both the hardware and software sides. As pointed out in Part 1, the industry is just now seriously grappling with how to write parallel software on new multi-threaded and multicore hardware architectures that keep growing in size and complexity.

Atomic transactions are emerging as a powerful tool in all of these efforts, enabling a new atomic age for hardware and software design. Rishiyur Nikhil and Arvind have discussed the implications of atomicity in software design in an earlier series on Embedded.com. Here I will confine the topic to the impact on hardware verification methodology.

When verification teams are the downstream beneficiaries of designs built using atomic transactions instead of traditional, lower-level concurrency mechanisms, there are implications both for the methodologies that they might use and for the benefits that they might experience.

As an example, let's look at the verification of hardware designed with BSV (Bluespec SystemVerilog). Enabled through atomic transactions, BSV has module interface semantics and guards that ensure that other modules will properly interface with them. Once the interface is verified at the module unit level, it cannot be improperly interfaced with when integrated within a larger design.

BSV is explicitly parallel and based on atomic transactions, the highest-level abstraction for specifying the concurrent behavior that is so prevalent in systems-on-chip. Its atomic transactions encompass communication protocols across module boundaries, enabling robust scaling to large systems and robust IP reuse. For this reason, BSV's atomic transactions are composable, scaling easily to larger, more complex designs without a corresponding increase in complexity.

Figure 1: BSV compilation stages

Starting from a high-level model, implementation, transactor or testbench designed with these atomic transactions, the BSV compiler (Figure 1, above) can synthesize efficient Verilog RTL for use in an emulator, FPGA or ASIC, or it can generate a fast SystemC simulation object. Composable atomic transactions for hardware enable:

* Design by refinement from executable specification to final implementation
* Architectural exploration with early architectural feedback
* Early fast executable models for software development
* Synthesizable models and verification

What are the implications for verification methodology? Even when a verification team does not use BSV directly for synthesizable test benches and reference models, verification derives many benefits when the design under test (DUT) has been designed with BSV. These benefits include the following:

1) With much simpler, more powerful concurrency management, atomic transaction-based designs have many fewer bugs. This translates into much less time spent diagnosing and debugging bugs, waiting for re-work, and performing re-verification, and more time spent developing additional tests. And, since designs are stable much earlier, FPGAs and emulators can be leveraged much earlier in the design cycle.

2) Written at a significantly higher level, these designs result in earlier code availability and code that is easier to interconnect and more mature. This translates into earlier design and verification integration, faster resolution of integration issues, and a much earlier start to real verification.

3) Atomic transactions enable more IP reuse, which reduces verification effort by leveraging previous verification work.

4) As these designs are written operation-centrically, as opposed to state-centrically, they are simpler to understand and read like specifications. This makes it much easier to write tests targeted at the hardware. Because tests and executable specifications deal with the same concepts, it is much easier to get the tests correct and complete, including corner cases.

5) With formal interface semantics that ensure proper module instantiation, there is less to verify at the system level and better leverage of unit-level verification. This enables a strategy where you can verify module interface behavior once, not at every instantiation.

Let's look at the final point in detail and explore how it changes the way a verification team might approach this aspect of a design.

Figure 2: BSV compilation with the Verilog back end

As mentioned earlier, BSV has module interface semantics and guards that ensure that other modules will properly interface with them. This capability enables a completely different attack plan for a verification team, which usually focuses on system-level verification and less so on blocks or sub-IP.

Atomic-transaction-based interface semantics and guards allow a strategy of attacking the design inside out: focusing on units and re-usable library elements first, and then on the system.

This is possible because unit-level interface testing is 100% leverageable at higher levels of the design hierarchy. Instead of verification effort growing exponentially with size and complexity, it might grow more linearly.

Low-Level Interface Semantics Compound Verification Efforts
SystemC, SystemVerilog, Verilog and VHDL module interfaces don't have these properties. Concurrency management of shared resources, both in module behavior and at module interfaces, is manual and low-level.

Because of this, even if you focus a lot of verification effort on a lower-level block, this validation work applies only under the conditions where the block is being used properly, that is, where all of the written, ad hoc interface contracts have been correctly observed.

These conditions cannot be guaranteed for any future instantiation of this same block. Instead, only a fraction of a block's functionality can be guaranteed, because a block's behavior depends on internal logic as well as context-sensitive external logic.

The FIFO block is a great illustration of this, as it is a small, pervasive and easily understandable block. There are many pre-designed and pre-validated FIFOs available from existing libraries and IP vendors. But what are the real benefits of having this work completed?

Sure, the design work is done, but what about its functionality? Every time someone uses one of these FIFOs, they might mis-use it, even though the IP has been pre-validated. For example, some FIFOs allow simultaneous enqueuing and dequeuing when the FIFO is full, but not when it is empty.

The only practical way for a designer using this FIFO to understand details like this is to read the specification diligently; the designer then has to design logic around the FIFO and get it right. With RTL semantics, proper functionality is not only about implementation but about use, which cannot be guaranteed, even when a block is used over and over, as a FIFO is.

If one of these FIFOs is used 1000 times in a design, only a small percentage of the verification work related to the FIFO has been saved, as every instantiation could have the following types of errors (and each instantiation could have a different problem!):

* Mis-connected interfaces
* Enqueue when FULL
* Dequeue when EMPTY
* Simultaneous enqueue and dequeue when EMPTY, but okay when not EMPTY
* Grabbing the data on the wrong cycle
* Not maintaining the enqueue data on the right cycle
* If more than one block uses the FIFO, any of the above errors compounded by poor arbitration

And, compounding the problem, all of these types of errors are very hard to uncover and identify when a block like this is deep within another module. Imagine having one of these FIFOs deep within a design: how do you get it into all the corner cases? Now imagine having 1000 of these FIFOs.

These issues occur at every interface point in a design, because designers face them every time they use an IP block. For each use of every IP block, the designer must understand the port protocols assumed by the module designer (assuming they are fully documented).

Since each module's port behavior is designed from scratch by the module designer, the IP user has to cope with all the different styles and conventions of different IP designers; that is, understanding each IP block's port behavior is a completely fresh task. And there is a lot of control logic that must be implemented properly even if the protocol is well understood.

This results in a major lack of scalability in verification. The designer of a module B that uses an IP module A must not only think about the functionality of B, which is the focus of his attention, but must also think about ways in which B might possibly violate the port-behavior contracts of A.

And verification tests must be devised to ensure that such violations never happen. This must happen for every instance of A: even when there are hundreds of instantiations, these verification checks must be devised and performed separately on each of those instantiations.

Moreover, for the same corner case, each instance will require a uniquely designed test to drive it into that situation, as the context of each instance will be unique. This is very hard to do at the system level.

Because so many errors happen when things are instantiated, verification teams must focus on system-level functionality and cannot spend much of their time making sure that lower-level blocks are properly implemented.

Put another way, there is little benefit or leverage in verifying the sub-components of a design, because how a sub-component is used has such a large influence on whether it works properly. IP re-use has suffered not just because of poor quality, but because properly instantiating and controlling a piece of IP is so hard.

Atomic Interface Semantics Reduce Verification Efforts
By depending on the compositional power of atomic transactions, a verification team can verify a block like a FIFO module not only for internal functionality but also for proper use during every future instantiation. With this, there is true leverage in verification effort at the block level.

Atomic transactions not only guarantee the correct interfacing of modules at every instance, but they also allow the correct control logic to be generated automatically to handle all the corner cases.

This will allow IP re-use to take off and deliver tremendous impact. RTL just doesn't offer a reasonable way to do this; there are no semantics for defining how an interface may properly be used. Every instantiation of a BSV-based IP block, even if there are thousands, is guaranteed to work properly.

Figure 3: Configuring a FIFO module with BSV

Let's look at how a FIFO module (Figure 3, above) is much different with atomic transactions. With BSV, instead of port lists, interfaces are defined using atomic-transaction-based interface methods, as shown below, using some small extensions to the notation of SystemVerilog:

interface FIFOBuf#(type x_t);
    method Action              enq   (x_t x);
    method ActionValue#(x_t)   deq   ();
    method Action              clear ();
endinterface

[Note: the (type x_t) construct is just SystemVerilog's notation for type parameterization (also known as polymorphism, or genericity); i.e., x_t is a type variable representing the type of items stored in the FIFO. For example, FIFOBuf#(Bit#(32)) would be a FIFO of 32-bit values.]

The enq method encapsulates multiple ports of an ordinary FIFO: the input data bus (whose width depends on the particular data type to which the generic type x_t is instantiated), the FULL status (expressed as the ENQ_READY signal), and the ENQ_ENABLE signal. Similarly, the deq method encapsulates multiple ports: the output data bus, the EMPTY status (expressed as DEQ_READY) and the DEQ_ENABLE signal.
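To see where those READY signals come from, consider a minimal single-element implementation of this interface. This is a hypothetical sketch, not the library FIFO; the constructor name mkFIFOBuf1 and its register names are illustrative. In BSV, each method can carry an implicit condition, the if (...) guard, and the compiler exports that guard as the method's READY port:

module mkFIFOBuf1 (FIFOBuf#(x_t))
    provisos (Bits#(x_t, sz));            // x_t must have a bit representation

    Reg#(x_t)  data  <- mkRegU;           // storage for the single element
    Reg#(Bool) valid <- mkReg(False);     // True when the buffer holds data

    method Action enq (x_t x) if (!valid);  // guard becomes ENQ_READY
        data  <= x;
        valid <= True;
    endmethod

    method ActionValue#(x_t) deq () if (valid);  // guard becomes DEQ_READY
        valid <= False;
        return data;
    endmethod

    method Action clear ();
        valid <= False;
    endmethod
endmodule

The guard is part of the method itself, so "enqueue when full" simply cannot be expressed by a caller; the implementation, not the user, owns that rule.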

Now interfacing is easy. If you want to enqueue, you just call enq(). If you want to dequeue, you just call deq(). The compiler ensures that you are properly connected and that the correct control logic is generated, so you never hit any of the hazards outlined above for FIFOs.

A client module that uses the FIFO contains atomic transactions (rules) that operate the enq and deq methods, as in the example below.

module mkClient ();
    FIFOBuf#(int) fifo <- mkFIFOBuf1;   // instantiate a FIFO (constructor name illustrative)

    rule upstream (cond1);              // cond1, cond2 and expr1 are placeholder expressions
        // ... other actions ...
        fifo.enq (expr1);
    endrule

    rule downstream (cond2);
        let x <- fifo.deq ();
        // ... other actions ...
    endrule
endmodule

Note that when enq is done in this example, there is no added control logic to determine whether the FIFO is full or empty, or whether another simultaneous deq is being done somewhere else, and under what conditions. The design is simpler to get right, and the compiler guarantees that there will be no hazards, assuming the FIFO has been verified at the unit level.

Each rule is an atomic transaction, outlining one or more actions executed atomically. Each rule has an explicit condition, depicted above as the expressions cond1 and cond2. These are pure combinational Boolean expressions. Each rule also contains one or more actions that can be executed atomically only if the rule condition is true.

For example, the upstream rule contains an action that enqueues the value of expression expr1 into the FIFO, and the downstream rule contains an action that dequeues an item x from the FIFO.

The conditions of all methods operated by a rule are incorporated into the overall condition of the rule. The ENQ_READY signal is ANDed with cond1 to determine the overall condition of the upstream rule. The DEQ_READY signal is ANDed with cond2 to determine the overall condition of the downstream rule.
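Conceptually, the per-rule firing conditions work out as shown below. The CAN_FIRE_ naming follows the convention the BSV compiler uses in its generated Verilog; this is a sketch of the derived logic, not the exact compiler output:

    CAN_FIRE_upstream   = cond1 AND ENQ_READY
    CAN_FIRE_downstream = cond2 AND DEQ_READY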

A rule can only fire (execute) if all its conditions permit. When it fires, all its actions, including all the actions in all the methods that it operates, are executed simultaneously as one composite atomic action. Thus, the upstream rule can only fire if ENQ_READY is true, and then the enqueueing action becomes part of the overall atomic action of the rule. When the rule fires, the enqueued data is driven and ENQ_ENABLE is asserted.

The condition of a method or a rule is necessary, but not sufficient, for a rule to fire. In particular, since rules can share resources (such as the FIFO above), simultaneous firing might not be possible while maintaining atomicity, i.e., if simultaneous firing would lead to inconsistent states. The compiler emits scheduling logic to ensure that simultaneous firing is only possible if it maintains atomicity.
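As a small illustration of this scheduling (a hypothetical sketch, not from the article; the module, register and rule names are made up), consider two rules that both write the same register. Because both rules update the same state, firing them in the same cycle could not be ordered as two atomic transactions, so the compiler's generated schedule allows at most one of them to fire per cycle:

module mkConflict (Empty);
    Reg#(Bit#(8)) counter <- mkReg(0);

    rule bump;                            // implicit condition: always True
        counter <= counter + 1;
    endrule

    rule reset_when_full (counter == 255);
        counter <= 0;                     // writes the same register as bump
    endrule
endmodule

When counter reaches 255, both rules are enabled, but the schedule picks one of them (the compiler reports the conflict and its chosen ordering), so the register is never written twice in one cycle.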

Compiling the FIFO
When compiling a FIFO implementation, the compiler performs a systematic analysis that infers whether the enq and deq methods can safely be operated simultaneously, and under what conditions. Note that different FIFO designs may or may not permit such simultaneous operation.

This interface information is recorded by the compiler with the FIFO implementation. Then, when compiling mkClient, the compiler uses this information to introduce suitable control logic in mkClient to guarantee that the upstream and downstream rules can fire simultaneously only when conditions permit them to do so safely.

Suppose mkClient had two processes that needed to enqueue into the FIFO, i.e., suppose it had another rule, like this:

rule upstream_also (cond_also);
    // ... other actions ...
    fifo.enq (expr_also);
endrule

If the FIFO permits only one enqueue per cycle, which is exactly the kind of fact the compiler's analysis of the FIFO records, then upstream and upstream_also now compete for the enq method. The compiler detects this and generates arbitration logic so that at most one of the two rules fires in any given cycle, again preserving atomicity.

The above discussion shows how interface method semantics fit organically into the semantics of the rules that invoke them. Interface method conditions are integrated into rule conditions.

Interface method actions become part of rule actions, with the same semantics of atomicity. It is for these reasons that we say that interface methods are atomic-transaction-based (rule-based); i.e., methods are simply parts of rules, and a rule can be viewed as a composition of the methods it operates.

In particular, the compiler automatically derives and enforces all of the interface's contract requirements. The compiler, by construction, ensures that the design meets all the contract requirements wherever the FIFO is instantiated, as in mkClient.

It is therefore impossible for mkClient to drive the FIFO into an inconsistent state. Similarly, in addition to mkClient, the design may contain hundreds of other instances of the FIFO module, but in each case the compiler ensures that the FIFO's contract is met.

Further, these assurances are transitive, so mkClient can truly be viewed as a black box. Its interface contract, in turn, is automatically derived by the compiler. If mkClient is itself instantiated repeatedly, the compiler ensures that its contract is met in each case.

Thus, we are guaranteed that none of them can drive their FIFO instances into inconsistent states, no matter what their environments. In other words, the interface contracts for mkClient, in turn, ensure that the contracts for the FIFOs are met. All of these assurances are established statically by the compiler, before simulation.
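To make this transitivity concrete, here is a hypothetical variation (the Client interface, the push method and the constructor names are illustrative, not from the article) in which a client module exposes its own interface method that calls fifo.enq internally. The FIFO's guard propagates into the implicit condition of push, so the outer module's derived contract automatically reflects the contract of the FIFO buried inside it:

interface Client;
    method Action push (int x);
endinterface

module mkClientIfc (Client);
    FIFOBuf#(int) fifo <- mkFIFOBuf1;    // inner FIFO instance

    method Action push (int x);         // implicitly ready only when fifo.enq is ready
        fifo.enq (x);
    endmethod
endmodule

Whoever instantiates mkClientIfc never sees the FIFO, yet can never violate its contract: calling push is only possible when the inner enq is ready.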

Thus, these properties of atomic-transaction-based interfaces dramatically simplify the verification problem. A whole host of functional errors that are very common in RTL designs (signaling errors, signal-sampling errors, race conditions, and so on), almost all of which can be traced to a failure of atomicity, are eliminated, by construction, when the behavior is expressed in terms of rules and rule-based interfaces.

With stronger interface semantics and fewer bugs due to a simpler concurrency model, designs based on atomic transactions should enable verification teams to have an improved experience in several dimensions:

* Faster integration: as outlined above, more powerful interface semantics avoid many of the integration bugs that arise from interface protocol mismatches between modules.

* Less debugging: with fewer integration bugs, and with a better concurrency model, aggregate bug counts should decrease. This means less time spent isolating and debugging issues.

* Earlier use of FPGA/emulation: with synthesizable, transaction-level designs and fewer bugs, designs are available and stable much earlier.

* More IP reuse, which leverages more verification.

* Easier to write tests for: executable specifications make it easier to get the tests correct and complete, including corner cases.

In addition to the above, there is simply less to verify, because the formal interface semantics ensure proper module instantiation. This enables a strategy where you verify module interface behavior once, not at every instantiation.

To read Part 1, go to: “The basics of atomic transactions.”

George Harper is vice president of marketing at Bluespec and has more than 15 years of experience in the semiconductor industry. He holds BSEE and MSEE degrees in electrical engineering from Stanford University and an MBA from Harvard University. He can be reached at gharper@bluespec.com.
