A Simple New Approach to Hardware Software Co-Verification

Coverage-driven verification (CDV) has generated remarkable interest in recent years. Because of its comprehensive capabilities, more and more verification teams now rely on the CDV approach.

However, implementing coverage-driven verification in a system-level environment requires developing a sequence library, which has proven to be a time-consuming task.

Configuring, or reconfiguring, the library requires planning for and testing a huge number of interactions with the various portions of the device under test. This challenge poses a significant bottleneck in the verification of a complex system or device.

A recent project for one of our customers provides a good example. One of their SoCs, intended for an HDTV system, presented serious complexity: it included a CPU subsystem, a memory manager, and multiple bus interconnects. To test the subsystem, we were looking at more than 6,000 register fields, a potential verification bottleneck to say the least.

Creating a layered verification environment
My team developed and implemented a significant shortcut for verifying this large number of instances, one that delivered solid evidence that hardware/software co-verification has evolved from a good idea into a practical reality.

Making use of Cadence's Plan-to-Closure methodology, we put together a layered verification environment specifically for this customer.

The Plan-to-Closure methodology is a complete guide to performing thorough verification of blocks, clusters, complex chips, and SoCs. It provides documented best practices, golden examples, drop-in utilities, and training material.

The methodology comes in two versions, one tailored for design teams and one for enterprise multi-specialist teams. Each version may be used stand-alone or in combination with the other to provide one comprehensive, integrated methodology.

The Plan-to-Closure approach incorporates two important elements, the e Reuse Methodology (eRM) and the System Verification Methodology (sVM), which we used extensively in building our verification environment.

eRM ensures reusable e Verification Components (eVCs), while sVM offers "cookbook" guidance on plugging the reusable verification components together into a system-level verification environment similar to the one shown in the figure below. The register eVC is itself part of sVM.

Figure 1. Layered Verification Environment

Figure 1, above, shows the layered verification environment, which consists of the following layers:

– Hardware layer
– Hardware interface layer
– Register layer
– Software layer
– Application layer (Virtual sequence driver)

The advantage of the layered approach is that you can easily replace a layer with another representation of it, or add a new layer. In our case we added the software layer on top of the layers you usually have in a pure hardware verification environment.

Within this environment, integrated into the verification plan, we developed a preconfigured piece of verification IP (VIP) at the subsystem level, that is, an e Verification Component.

In reality, VIP is just another name for a reusable verification component. An eRM-compliant verification component typically consists of a sequence driver and bus functional model (the stimulus generation part) plus a monitor (including checking and functional coverage).
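
As a rough sketch, such an eRM-style component can be pictured in e as follows. This is only an illustrative skeleton, not code from the actual project; all unit, struct, and sequence names here are hypothetical:

-- Illustrative skeleton of an eRM-style eVC; all names are hypothetical.

-- The generated item the BFM will drive:
struct my_item like any_sequence_item {
    address : uint;
    data    : uint;
};

-- This statement creates the sequence struct my_sequence and its
-- sequence driver unit my_driver:
sequence my_sequence using item = my_item, created_driver = my_driver;

unit my_bfm {
    -- pulls generated items from the driver and drives the DUT pins
};

unit my_monitor {
    -- passively observes the interface; hosts checkers and coverage
};

unit my_agent {
    driver  : my_driver  is instance;  -- stimulus sequencing
    bfm     : my_bfm     is instance;  -- stimulus generation
    monitor : my_monitor is instance;  -- checking and functional coverage
};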

In our case the components included a bus hardware interface, an interrupt component, a register package, and a new software component for the embedded software under verification, based on a software extension called a generic software adapter.

Our co-verification environment also included a virtual sequence driver to coordinate all the layers. In other words, the environment provided a software infrastructure for randomly generating sequences of C routine calls. We were suddenly much more capable of achieving functional coverage of both the hardware and the embedded software.
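
To give a flavor of this coordination, here is a minimal virtual sequence sketch, assuming a virtual sequence driver that holds pointers to the register and software sequence drivers (system_vseq, reg_driver, and sw_driver are hypothetical names, not from the actual project):

extend MAIN system_vseq {
    !init_regs : ANY register_sequence;
    !run_sw    : ANY c_routine_sequence;

    body() @driver.clock is only {
        -- first configure the DUT through the register layer...
        do init_regs keeping {.driver == driver.reg_driver};
        -- ...then exercise the embedded C software through the software layer
        do run_sw keeping {.driver == driver.sw_driver};
    };
};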

The main advantages of these methodologies are increased reuse and higher levels of automation. From a reuse standpoint, we were able to develop and use reusable verification IP in much the same way as design IP.

And from an automation standpoint, we were able to replace the previous directed-testing approach with automatic stimulus generation, functional coverage measurement, and self-checking verification environments.

We were also able to randomly generate sequences of C routine calls, which enabled testbench automation in the form of automatically generating a huge number of test cases for the C software.

A higher level of abstraction, less low-level writing
The main idea at work in this story is that we're doing less work at the low level. We use the automation built into the software and the system-level C code to do the tedious configuration work, instead of writing our own register sequences for verification purposes. We're essentially raising the level of abstraction by using enterprise-wide, system-level automation capabilities.

In the past, a verification engineer wrote all the code to configure and test the device under test. Later, after hardware verification was finished and first silicon was available, the software engineers started to write their software. But the tasks of the software and verification engineers are very similar: the software engineer also has to write code to configure the device, and he also has to write some basic device tests.

Today's SoCs are too complex for such a separated approach. When first silicon becomes available, it can take almost a year to get the software ready. We needed to avoid this duplicated effort between verification and software engineers.

With our environment, the software engineer could develop his software well before first silicon was available. The verification engineer could benefit from the software engineer's work by using the actual software instead of writing his own code just for verification purposes.

In other words, the verification engineer could apply advanced verification methodologies to the software. These methodologies are commonplace in hardware verification but not in software verification, where the old directed verification approach, with all its disadvantages, is still in use.

The benefit to embedded software verification comes directly from this advanced verification methodology. It's an exciting advancement because we're entering a new world of enterprise system-level (ESL) verification with embedded software verification that takes advantage of proven verification methodologies, VIP, and testbench automation.

Layered Approach Saves Time
In the standard flow, the testbench developer would write register sequences to configure the DUT. Later, the software engineers would have to do more or less the same job. With our new layered verification environment, however, we use the same C routines to configure the DUT, so we don't waste any time writing the same routines again.

For example, I might write a one-line "do setColorLUT" C routine sequence. Instead of writing the same function at the instruction level, I write one line of code where I would otherwise have had to write 512.

In the past it was not common to add a software layer to a hardware module or subsystem environment. This meant a test writer had to write his tests at the hardware level, e.g. the CPU instruction level. In the example used in this article, he would have to write 512 lines of code to configure a color lookup table:

LCL 0 value
LCH 0 value
...
LCL 255 value
LCH 255 value

But after adding the software layer to the verification environment, he can achieve exactly the same task by writing just one C routine call: setColorLUT().
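
In the layered environment, that one-line call can itself be randomized, which is where the coverage-driven leverage comes from. Here is a minimal sketch, assuming setColorLUT is available as a generated C routine item with a hypothetical lut field holding the 256 table entries:

extend ANY c_routine_sequence {
    body() @driver.clock is only {
        -- one generated call replaces 512 hand-written instruction-level
        -- writes; the table contents are randomized automatically
        do setColorLUT keeping {.lut.size() == 256};
    };
};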

The tasks are not only easier; they've helped integrate and coordinate our hardware- and software-related verification activities. Now the software engineers can start to develop their software on the simulated hardware RTL code using the layered verification environment. In the past, software development started only after first silicon samples or an FPGA prototype became available, which is typically very late in the project.

The flexibility of the generic software adaptor
The generic software adaptor offers a remarkably flexible approach to verification. It makes no difference whether you want to do a C routine call, a bus transfer, or a register read or write: you employ the same test interface you'd use for a normal hardware test within your testbench automation environment.

The eRM provides a generic sequence framework, which is the test writer's interface to the verification environment. Because the framework is generic, there is almost no difference for the test writer between defining a bus transfer sequence, a register sequence, or a C routine sequence, as shown in the e code examples below:

extend ANY bus_transfer_sequence {
    body() @driver.clock is only {
        do bus_transfer keeping {.address in [0x0..0xffff]};
    };
};

extend ANY register_sequence {
    body() @driver.clock is only {
        do reg_read keeping {.address in [0x0..0xffff]};
    };
};

extend ANY c_routine_sequence {
    body() @driver.clock is only {
        do setApplicationMode keeping {.appMode == MODE_1};
    };
};

Any trained engineer who is familiar with writing testbenches for hardware is now perfectly able to write the test environment for software. There's no need to know C, because the software adaptor hides it.

So verification engineers become that much more valuable, with the ability to move back and forth between hardware and embedded software testing for full system-level quality.

The software adaptor makes it possible to bind e ports to hardware and to software in the same way, as shown in the examples below:

Example of binding an e port to the HW world
keep agent() == "Verilog";
hw_sig: in simple_port of uint is instance;
keep bind(hw_sig, external);
keep hw_sig.hdl_path() == "hw_sig_i";

Example of binding an e port to the SW world
keep agent() == "GSA_CVL";
sw_sig: in simple_port of uint is instance;
keep bind(sw_sig, external);
keep sw_sig.hdl_path() == "sw_sig";
keep sw_sig.external_type() == "unsigned int";

These software adaptors also provide "wrapper functions," so once the framework is integrated it is very easy to access software internals or add new software API calls. There is no need to modify source code, which saves a huge amount of time and effort. The wrapper (stub) file is automatically generated by Cadence's Specman testbench automation solution.
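
While the generated stub itself is tool-specific, the e side of such a wrapper can be pictured as a method port bound to an external C function. The following is only an assumed sketch; sw_adapter_u, set_application_mode_t, and the port name are hypothetical, and the real binding comes from the generated wrapper file:

-- assumed sketch only; the actual binding is produced by the generated
-- wrapper (stub) file rather than written by hand
method_type set_application_mode_t(mode: uint);

extend sw_adapter_u {
    set_application_mode : out method_port of set_application_mode_t is instance;
    keep bind(set_application_mode, external);
};

A sequence could then invoke the wrapped C call as set_application_mode$(MODE_1), using the same mechanisms as any other e method port.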

The software adaptors also mean you don't have to compile C code during verification, because the C code is precompiled. Throughout the verification flow there's far less work: you have fewer tools and scripts to maintain, everything is in your test file, and you can just load it.

Conclusion
In the past, verification engineers and software engineers did almost the same tasks. They had to write code, they had to debug it, and in the end they experienced the same pitfalls. However, there was no knowledge transfer between the software and hardware verification teams. As a result, the actual software development started very late, often after first silicon was available.

HW/SW co-simulation and co-verification is quite an old idea and a standard task in today's SoC designs. What is truly innovative, however, is applying the same advanced verification methodology (coverage-driven verification) that was used for the hardware, earlier in the verification process, to the SW as well.

The SW is wrapped into an eVC and becomes transparent to the verification engineer, who is now able to handle software components like any other HW eVC in his high-level HW/SW verification environment.

But there are other big advantages. First, it allows us to include the SW much earlier, during HW module verification, because we don't need the whole chip infrastructure (e.g., the CPU subsystem) in place to execute the software. We simply execute the software on the host, and all the APIs and HW interfaces are provided by the layered verification environment.

Second, thanks to this host code execution, simulation is much faster, because we do not have to simulate every CPU instruction of the SW. We only have to simulate the SW's interactions with the HW module (register accesses and calls of interrupt service routines). From the HW simulation's point of view, the software effectively runs in zero time.

In the case of the HDTV system described earlier, we were able to deliver a fully pre-verified hardware/software subsystem to our chip-level integration team. Instead of configuring the 6,000 register fields of our subsystem, the chip-level integration team saved a huge amount of time and effort by simply calling one C routine, setApplicationMode().

Ernst Zwingenberger is currently head of R&D for verification at El Camino GmbH. From 2004 to 2006, Ernst was a system-on-chip verification engineer at Micronas GmbH in Munich, responsible for coverage-driven verification methodology, verification concepts, and verification project leadership. Prior to that, he was a verification consulting engineer at El Camino GmbH (a Verification Alliance Partner), focused on coverage-driven verification and the design of reusable verification components.
