A Simple New Approach to Hardware Software Co-Verification

Ernst Zwingenberger, El Camino GmbH

March 11, 2007

Coverage-driven verification (CDV) has generated remarkable interest in recent years. Because of its comprehensive capabilities, more and more verification teams are relying on the CDV approach.

However, implementing a coverage-driven verification environment at the system level requires developing a sequence library, which has proven to be a time-consuming task.

Configuring or reconfiguring the library requires planning for and testing a huge number of interactions with the various portions of the device under test. This challenge poses a significant bottleneck in the verification of a complex system or device.

A recent project of one of our customers provides a great example. One of their SoCs for use in an HDTV system presented some serious complexity. The SoC included a CPU subsystem, memory manager, and multiple bus interconnects. To test the subsystem, we were looking at more than 6000 register fields - a potential verification bottleneck to say the least.

Creating a layered verification environment
My team was able to develop and implement a significant shortcut for verifying this large number of register fields, one that delivered solid evidence that hardware/software co-verification has evolved from a good idea to a reality.

Making use of Cadence's Plan to Closure methodology, we put together a layered verification environment specifically for this customer.

The Plan-to-Closure Methodology is a complete guide for performing powerful verification of blocks, clusters, complex chips and SoCs. It provides documented best practices, golden examples, drop-in utilities and training material.

The methodology is broken into two separate versions, one tailored to design teams and one to enterprise multi-specialist teams. Each version may be used stand-alone or in combination with the other to provide one comprehensive, integrated methodology.

The Plan-to-Closure approach incorporates two important elements, the e Reuse Methodology (eRM) and the System Verification Methodology (sVM), which we used extensively in building our verification environment.

eRM ensures reusable e Verification Components (eVCs), and sVM offers "cookbook" recipes for plugging the reusable verification components together into a system-level verification environment similar to the one shown in the figure below. The register eVC is also part of sVM.

Figure 1. Layered Verification Environment

Figure 1, above, shows the layered verification environment, which consists of the following layers:

- Hardware layer
- Hardware interface layer
- Register layer
- Software layer
- Application layer (Virtual sequence driver)

The advantage of the layered approach is that you can easily replace a layer with another representation of the layer or you can add a new layer. In our case we added the software layer on top of the layers you usually have in a pure hardware verification environment.
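
To give a feel for how such a layered environment is assembled in e, here is a minimal sketch; the unit and field names are hypothetical and only illustrate the structure, assuming the eRM any_env base unit:

unit hdtv_sys_env like any_env {
    bus_env : vr_bus_env is instance;           -- hardware interface layer
    reg_env : vr_reg_env is instance;           -- register layer
    sw_env  : vr_sw_env  is instance;           -- software layer (generic software adapter)
    app_drv : hdtv_virtual_driver is instance;  -- application layer (virtual sequence driver)
};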

Within this environment, integrated into the verification plan, we developed a preconfigured piece of verification IP (VIP), or e Verification Component, at the subsystem level.

In reality, VIP is just another name for a reusable verification component. An eRM-compliant verification component typically consists of a sequence driver and a bus functional model (the stimulus-generation part) plus a monitor (including checking and functional coverage).
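
As a rough illustration, the following minimal sketch shows how the building blocks of such a component can be organized in e; all unit and type names are hypothetical:

unit vr_bus_agent {
    -- stimulus generation: the sequence driver feeds items to the BFM,
    -- which drives the DUT interface signals
    driver  : vr_bus_sequence_driver is instance;
    bfm     : vr_bus_bfm is instance;
    -- passive side: protocol checking and functional coverage collection
    monitor : vr_bus_monitor is instance;
};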

In our case the components included a bus hardware interface, an interrupt component, a register package, and a new software component for the embedded software under verification, based on a software extension called the generic software adapter.

Our co-verification environment also included a virtual sequence driver to coordinate all the layers. In other words, the environment provided a software infrastructure to randomly generate sequences of C routine calls. We were suddenly much more capable of achieving functional coverage of the hardware and embedded software.
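
In e, such a virtual sequence can be sketched roughly as follows; the sequence type, field, and sub-driver names are hypothetical and only illustrate how the layers are coordinated:

extend MAIN hdtv_virtual_sequence {
    -- sub-sequences are declared with '!' and generated by the do actions below
    !cfg_seq : register_sequence;   -- register layer
    !sw_seq  : c_routine_sequence;  -- software layer (C routine calls)
    body() @driver.clock is only {
        do cfg_seq on driver.reg_driver;  -- first bring the DUT into a legal configuration
        do sw_seq  on driver.sw_driver;   -- then exercise the embedded software
    };
};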

The main advantages of these methodologies are increased reuse and higher levels of automation. From a reuse standpoint we were able to develop and use reusable verification IP similar to design IP.

And from an automation standpoint we were able to replace the previous directed testing approach with automatic stimuli generation, functional coverage measurement and self-checking verification environments.

We were also able to randomly generate sequences of C routine calls, which enabled testbench automation in the form of automatically generating a huge number of test cases for the C software.
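
A test file can then be very small. The following sketch, using the c_routine_sequence name that appears in the examples later in this article and purely illustrative values, simply constrains how many randomly chosen C routine call sequences the software-layer driver generates per run:

extend MAIN c_routine_sequence {
    -- the predefined MAIN sequence generates 'count' random sub-sequences
    keep count in [20..50];
};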

A higher level of abstraction, less low-level writing
The main idea at work in this story: we're doing less work at the low level. We're using the automation built into the software and the system-level C code to do the tedious configuration work, instead of writing our own register sequences for verification purposes. We're essentially raising the level of abstraction by using enterprise-wide, system-level automation capabilities.

In the past, a verification engineer wrote all the code to configure and test the device under test. Later, after hardware verification was finished and the first silicon was available, the software engineers started to write their software. But the tasks of the software and verification engineers are very similar: the software engineer also has to write code to configure the device, and also has to write some basic device tests.

Today's SoCs are too complex for such a separated approach. Once first silicon is available, it takes almost one year to get the software ready. We needed to avoid this duplicated effort between verification and software engineers.

With our environment the software engineer could develop his software well before the first silicon was available. The verification engineer could benefit from the software engineer's work by using the actual software instead of writing his own code just for verification purposes.

In other words, the verification engineer could apply advanced verification methodologies to the software. These methodologies are common in hardware verification, but not in software verification, where the old directed-test approach, with all its disadvantages, is still in use.

Embedded software verification thus benefits directly from the advanced verification methodology. It's an exciting advancement because we're diving into a new world of enterprise system-level (ESL) verification with embedded software verification that takes advantage of proven verification methodologies, VIP, and testbench automation.

Layered Approach Saves Time
In the standard flow, the testbench developer would write register sequences to configure the DUT. Later, the software engineers would have to more or less do the same job. However, given our new layered verification environment, we use the same C routines to configure the DUT, and we don't waste any time writing the same routines again.

For example: I might write "do SetColorLUT c_routine." Instead of writing the same function on an instruction level, I write one line of code where otherwise I would have had to write 512.

In the past it was not common to add a software layer to a hardware module or sub-system environment. This meant a test writer had to write his test at the hardware level, e.g. at the CPU instruction level. In the example we have used in this article, he would have to write 512 lines of code to configure a color lookup table:

LCL 0 value
LCH 0 value
...
LCL 255 value
LCH 255 value

But after adding the software layer to the verification environment, he could achieve exactly the same task by writing just one C routine call, SetColorLUT().
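
In the verification environment, that single call can be expressed in the style of the sequence examples shown later in this article; the following is only a sketch:

extend ANY c_routine_sequence {
    body() @driver.clock is only {
        do SetColorLUT c_routine;  -- one call replaces the 512 instruction-level writes
    };
};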

The tasks are not only easier; they've helped integrate and coordinate our hardware- and software-related verification activities. Now the software engineers can start to develop their software on the simulated hardware RTL code using the layered verification environment. In the past, software development started only after first silicon samples or an FPGA prototype were available, which is typically very late in the project.

The flexibility of the generic software adaptor
The generic software adaptor offers an approach to verification that is remarkably flexible. It makes no difference whether you want to do a C routine call, a bus transfer, or any register read or write: you employ the same test interface you'd use for a normal hardware test within your testbench automation environment.

eRM provides a generic sequence framework, which is the test writer's interface to the verification environment. Because the framework is generic, there is almost no difference for the test writer between defining a bus transfer, a register, or a C routine sequence, as shown in the e code examples below:

extend ANY bus_transfer_sequence {
    body() @driver.clock is only {
        do bus_transfer keeping {.address in [0x0..0xffff];};
    };
};

extend ANY register_sequence {
    body() @driver.clock is only {
        do reg_read keeping {.address in [0x0..0xffff];};
    };
};

extend ANY c_routine_sequence {
    body() @driver.clock is only {
        do setApplicationMode keeping {.appMode == MODE_1;};
    };
};

Any trained engineer who's familiar with writing hardware testbenches is now perfectly able to write the test environment for software. There's no need to know C, because the software adaptor hides it.

So verification engineers become that much more valuable with the ability to move back and forth from hardware to embedded software testing for full system level quality.

The software adaptor enables binding e ports to hardware and to software in the same way, as shown in the examples below:

Example of binding an e port to the HW world
hw_sig: in simple_port of uint is instance;
keep bind(hw_sig, external);
keep hw_sig.hdl_path() == "hw_sig_i";
keep hw_sig.agent() == "Verilog";

Example of binding an e port to the SW world
sw_sig: in simple_port of uint is instance;
keep bind(sw_sig, external);
keep sw_sig.hdl_path() == "sw_sig";
keep sw_sig.external_type() == "unsigned int";
keep sw_sig.agent() == "GSA_CVL";
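
Once bound, such a port is read like any hardware signal. The following sketch, with a hypothetical method name, would live in the same unit that declares sw_sig:

check_sw_sig() is {
    -- '$' reads the current value of the simple port, whether it is bound
    -- to a HW signal or, via the GSA, to an embedded SW variable
    if sw_sig$ != 0 then {
        message(LOW, "embedded SW variable is non-zero: ", sw_sig$);
    };
};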

These software adaptors also provide "wrapper functions," so once the framework is integrated it is very easy to access software internals or add new software API calls. There is no need to modify source code, which saves a huge amount of time and effort. The wrapper (stub) file is automatically generated by Cadence's Specman testbench automation solution.

The software adaptors also mean you don't have to compile C code during the verification flow; the C code is precompiled. There's far less work throughout the flow: you have fewer tools and scripts to maintain, everything is in your test file, and you can just load it.

Conclusion
In the past, the verification engineers and software engineers performed almost the same tasks: they had to write code, they had to debug it, and finally they experienced the same pitfalls. However, there was no knowledge transfer between the software and hardware verification teams. As a result, the actual software development started very late, often after first silicon was available.

HW/SW co-simulation and co-verification are quite old ideas and a standard task in today's SoC designs. What is truly innovative, however, is applying the same advanced verification methodology (coverage-driven verification) that was used for the hardware verification to the SW as well, and doing so early in the verification process.

The SW is wrapped into an eVC and becomes transparent to the verification engineer, who can now handle software components like any other HW eVC in his high-level HW/SW verification environment.

But there are other big advantages as well. First, the approach allows us to include the SW much earlier, during HW module verification, because we don't need the whole chip infrastructure (e.g. the CPU subsystem) in place to execute the software. We simply execute the software on the host, and all the APIs and HW interfaces are provided by the layered verification environment.

Second, due to this host-code execution the simulation is much faster, because we do not have to simulate every CPU instruction of the SW. We only have to simulate the interaction of the SW with the HW module (register accesses, calls to interrupt service routines). The software runs, so to speak, in zero time from the point of view of HW simulation cycles.

In the case of the HDTV system described earlier, we were able to deliver a fully pre-verified hardware/software sub-system to our chip-level integration team. Instead of configuring the 6000 register fields of our sub-system, the chip-level integration team saved a huge amount of time and effort by just having to call one C routine, setApplicationMode().

Ernst Zwingenberger is currently head of R&D for Verification at El Camino GmbH. From 2004 to 2006 Ernst was System On Chip Verification Engineer responsible for Coverage Driven Verification Methodology, Verification Concepts and Verification Project Leading at Micronas GmbH in Munich. Prior to that, he was a Verification Consulting Engineer at El Camino GmbH (Verification Alliance Partner) focused on Coverage Driven Verification and Design of reusable Verification Components.
