Dealing with the challenges of integrating hardware and software verification

The design verification industry has leaped forward from its simple roots, when directed stimulus written in an HDL was the whole of verification. The approach has since shifted to advanced programming techniques that create richer, reusable models; directed-random stimulus that reduces test-case writing effort; and functional and code coverage that measure verification progress objectively.

While these techniques and methodologies have improved design quality and verification efficiency, there is still work to accomplish, especially for complex chip designs.

Chip-design teams already have a good grasp of block- and chip-level verification. System-level verification is now the arena where the biggest challenges and opportunities lie. The verification community—made up of the EDA industry and its users—is investing heavily to find solutions for this. Why? Because of software drivers and firmware.

Today's silicon intellectual property (SIP) and SoCs are increasingly dependent on firmware and device drivers as key deliverables for the end solution. A silicon or IP provider must now invest substantial resources to deliver software IP alongside its SIP. Gone are the days of waiting for silicon to arrive before developing and debugging device drivers and embedded software.

Design teams often staff as many software engineers as hardware engineers, or more, because the industry needs to integrate hardware and software implementation and debug ever more tightly. This article explores challenges and solutions in more closely integrating software development and validation with silicon design and verification.

Figure 1: The Denali flash cache platform is an SoC employing an on-board processor and hardware and software features to optimize performance and increase system reliability.

Sample configuration
A NAND flash cache controller platform developed by Denali Software is used to illustrate some of these approaches to system-level verification challenges. This platform provides key caching functionality for PC applications through the PCIe interface, and uses standard NAND flash memory devices and an ancillary DRAM device.

The platform is an SoC that uses an on-board processor plus hardware and software features to optimize performance and increase system reliability. The main hardware (RTL) components of this platform include PCIe controller IP, DRAM controller IP, NAND controller IP, DMA engines, microprocessor and on-board memory. The main software components of this platform include the low-level driver (LLD), flash translation layer (FTL) and flash file system (FFS).

Design hurdles
Several tasks and goals affect software and system-level verification in various ways and at different times in the development process. Early in the system's development, architects may want to model the whole system so they can partition functionality between the hardware and software. Transaction-level modeling (TLM) is often used at this phase for rough performance or power analysis.

Since RTL may not exist, these behavioral models are also a good starting point for early software development and can offer more speed than RTL at the expense of some accuracy.

For the NAND flash cache controller platform, the transaction-level models were developed in SystemC, based on the TLM standards released by the Open SystemC Initiative. The hardware and software were then configured for optimal performance based on the specific needs of the application.
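As a hedged illustration of the modeling style, the sketch below shows a loosely-timed TLM-2.0 register target of the kind those standards enable; the module name, register and latency value are invented for illustration and are not part of Denali's actual models.

```cpp
#include <cstdint>
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

// Hypothetical loosely-timed model of a controller register block. Latency
// is annotated on the transaction rather than simulated cycle by cycle,
// which is what makes TLM fast enough for early software development.
struct CtrlRegsModel : sc_core::sc_module {
    tlm_utils::simple_target_socket<CtrlRegsModel> socket;
    uint32_t status_reg;  // stand-in for a real register map

    SC_CTOR(CtrlRegsModel) : socket("socket"), status_reg(0) {
        socket.register_b_transport(this, &CtrlRegsModel::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        if (trans.get_command() == tlm::TLM_WRITE_COMMAND)
            std::memcpy(&status_reg, trans.get_data_ptr(), sizeof status_reg);
        else if (trans.get_command() == tlm::TLM_READ_COMMAND)
            std::memcpy(trans.get_data_ptr(), &status_reg, sizeof status_reg);
        delay += sc_core::sc_time(10, sc_core::SC_NS);  // approximate access cost
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};
```

A driver under development calls into such a model through an initiator socket exactly as it would touch memory-mapped registers, trading cycle accuracy for simulation speed.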

As the hardware (RTL) design nears completion, the real challenge for system-level verification is often the software that must run on the platform, which directly affects the end product's ability to achieve optimal performance and reliability.

In our platform, the FTL provides most of the NAND policy control including block management, logical to physical address translation, error correction and wear leveling.

Wear leveling is a problem unique to flash, and solving it is critical. Continuously rewriting a specific sector of a flash device with new information will eventually cause that location to become permanently damaged.

Thus, NAND flash storage solutions must provide a mechanism that physically moves stored information around the flash device as it is written, to maximize the service life of the storage medium. The challenge is providing this capability with the right blend of hardware and software without degrading system flexibility or performance.
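As a rough sketch of the core mechanism (with geometry, mapping tables and policy all invented for illustration; a production FTL adds garbage collection, bad-block management and power-loss recovery on top of this idea), wear leveling can be reduced to remapping each rewrite onto the least-worn free block:

```cpp
#include <array>
#include <cstdint>

// Invented geometry: 1024 logical blocks over 1100 physical blocks; the
// spares give the wear leveler somewhere to steer each rewrite.
constexpr uint32_t LOGICAL_BLOCKS  = 1024;
constexpr uint32_t PHYSICAL_BLOCKS = 1100;

struct WearLeveler {
    std::array<uint32_t, LOGICAL_BLOCKS>  l2p{};         // logical -> physical
    std::array<uint32_t, PHYSICAL_BLOCKS> erase_count{}; // per-block erase tally
    std::array<bool,     PHYSICAL_BLOCKS> in_use{};

    WearLeveler() {
        for (uint32_t i = 0; i < LOGICAL_BLOCKS; ++i) { l2p[i] = i; in_use[i] = true; }
    }

    // Rewriting a logical block lands on the free physical block with the
    // lowest erase count, so no single flash location wears out prematurely.
    uint32_t remap_for_write(uint32_t logical) {
        uint32_t best = 0, best_count = UINT32_MAX;
        for (uint32_t p = 0; p < PHYSICAL_BLOCKS; ++p)
            if (!in_use[p] && erase_count[p] < best_count) { best = p; best_count = erase_count[p]; }
        in_use[l2p[logical]] = false;  // old copy becomes a reclaimable spare
        l2p[logical] = best;
        in_use[best] = true;
        ++erase_count[best];           // the block is erased before reprogramming
        return best;
    }
};
```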

This issue is just one of the many associated with flash and is similar to other complexities being seen in industry standard protocols like PCIe and USB.

The tight interplay between hardware and software presents challenges to the design verification (DV) engineer as well. In a traditional verification flow, where the production software is not yet available, the DV engineer often crafts his own stand-in LLD and FTL in his preferred high-level verification language to test the hardware.

When silicon arrives, the actual LLD and FTL are then developed and debugged against it. This creates several problems. First, resources are wasted on duplicate efforts.

Moreover, the added robustness that could be achieved by involving the real software in pre-silicon design verification activities is missed. Finally, serializing the work results in a longer time-to-market than debugging the hardware and software in parallel.

Fresh approach
To achieve the next level of productivity in design and verification, methodology must evolve to adapt to these mixed hardware/software systems. The most critical requirement is that the real software be designed once and used throughout the development process.

For the platform previously described, since the LLD must be developed during hardware design, the hardware designer is best suited to deliver this important piece of software, as it requires intimate knowledge of the hardware's capabilities and usage model. (The FTL, however, belongs squarely with the software team and is integrated with the LLD via a pre-defined API.)
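What such a pre-defined API boundary might look like is sketched below; every name and signature here is an assumption for illustration, not the platform's actual interface.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical boundary between the hardware team's LLD and the software
// team's FTL: the FTL programs against this contract and never touches
// controller registers directly.
enum class NandStatus { Ok, EccCorrected, EccFailed, BadBlock, Timeout };

struct NandLld {
    virtual ~NandLld() = default;

    // Page- and block-level primitives; command sequencing and ECC live
    // inside the driver and its hardware engines.
    virtual NandStatus read_page(uint32_t block, uint32_t page,
                                 uint8_t* buf, std::size_t len) = 0;
    virtual NandStatus write_page(uint32_t block, uint32_t page,
                                  const uint8_t* buf, std::size_t len) = 0;
    virtual NandStatus erase_block(uint32_t block) = 0;
};
```

Because the interface is abstract, the same FTL can bind to a TLM-backed implementation in simulation and to the register-level driver on silicon, which is what allows the software to be written once and reused.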

Following this, the real software is required in all aspects of development and verification. The most common way of accomplishing this is via HW/SW co-verification or co-simulation.

Consider the design verification aspect in more detail. With this flow, the LLD and FTL are integral parts of the verification methodology.

Test cases are written against them. If there are esoteric bugs or strange corner cases in the software, the power of a modern design verification environment can be leveraged to find and eliminate them long before the device goes to silicon.

This also shows the design verification engineer how the device is used. Through this process, the scope of design verification can be constrained compared with traditional flows that do not involve the actual software.

This effectively increases the productivity of design verification engineers by eliminating unnecessary and unrealistic scenarios, and it allows them to focus on the functionality used by the LLD and FTL.
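To illustrate what a test case written against the real software stack can look like, here is a hedged sketch that hammers a single logical sector through assumed FTL entry points; ftl_write/ftl_read and their signatures are hypothetical stand-ins for a production API, stubbed here so the sketch is self-contained.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical production FTL entry points; in the real flow these come
// from the software team's library linked into the co-simulation environment.
static uint8_t fake_media[512];
int ftl_write(uint64_t sector, const uint8_t* data, std::size_t len) {
    (void)sector; std::memcpy(fake_media, data, len); return 0;  // stub
}
int ftl_read(uint64_t sector, uint8_t* data, std::size_t len) {
    (void)sector; std::memcpy(data, fake_media, len); return 0;  // stub
}

int main() {
    uint8_t pattern[512], check[512];
    std::memset(pattern, 0xA5, sizeof pattern);

    // Rewriting one logical sector thousands of times forces the wear
    // leveler and block manager through their corner cases pre-silicon.
    for (int i = 0; i < 10000; ++i)
        assert(ftl_write(42, pattern, sizeof pattern) == 0);

    assert(ftl_read(42, check, sizeof check) == 0);
    assert(std::memcmp(pattern, check, sizeof check) == 0);
    return 0;
}
```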

Another crucial aspect is the need to randomize configurations and topologies. Directed-random testing has become mainstream, but not yet to the extent necessary: generating random stimulus alone, without random configurations and random topologies, has limited benefit.

With modern IP solutions, the number of static parameters (ifdef parameters, configuration-space registers) has grown tremendously, and the only way to deal with this space effectively is through randomization.

In the flash cache example, it's critical to test with all the flash parts in different topologies as well as randomize the configuration space within each topology to ensure maximum robustness. Users have also seen that this is critical for measuring performance.

Simply replacing one vendor's flash part with another can yield up to a 40 percent difference in the performance of the flash cache.
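As a hedged sketch of the idea (constrained-random generation is usually written in a dedicated verification language, but the principle translates directly; all ranges and field names below are invented), topology and configuration can be randomized together:

```cpp
#include <cstdint>
#include <cstdio>
#include <random>

// Invented knobs: flash topology plus a couple of configuration-space fields.
struct FlashTopology { uint32_t channels, dies_per_channel, page_bytes; };
struct CsrConfig     { uint32_t arbiter_priority, fifo_watermark; };

int main() {
    std::mt19937 rng(0xC0FFEE);  // fixed seed so a failing run reproduces
    auto pick = [&](uint32_t lo, uint32_t hi) {
        return std::uniform_int_distribution<uint32_t>(lo, hi)(rng);
    };

    // Randomize the topology first, then the configuration space within it,
    // so every run pairs a different board build with different CSR settings.
    FlashTopology topo{pick(1, 4), pick(1, 8), 2048u << pick(0, 2)};
    CsrConfig     csr {pick(0, 7), pick(4, 64)};

    std::printf("channels=%u dies=%u page=%uB prio=%u watermark=%u\n",
                topo.channels, topo.dies_per_channel, topo.page_bytes,
                csr.arbiter_priority, csr.fifo_watermark);
    // ...build the testbench from `topo`, program CSRs from `csr`, then run
    // directed-random traffic through the real LLD/FTL stack.
    return 0;
}
```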

Although simulation remains the dominant form of system verification, the requirement to run software and hardware concurrently has driven increased adoption of hardware-accelerated simulation, in-circuit emulation and FPGA prototyping.

By accelerating simulation, the level of performance can be increased significantly to allow software engineers to develop and debug more software before the actual silicon becomes available. In-circuit emulation and FPGA prototypes provide the benefit of running at real-time speeds or near real-time speeds.

Two factors make this critical. The first is the need to test the system against real-world workloads or stimulus. This may mean booting and running an entire OS, something far too slow to accomplish in simulation.

Another often overlooked aspect is the fine-tuning of the configuration registers. Experience with the flash platform has shown that arbiter priorities, FIFO watermarks and other CSR settings can be given their final tuning only when the device is operating in a system running a real-world OS or target applications.
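By way of illustration, such tuning often reduces to a sweep like the hedged sketch below, run on an emulator or FPGA prototype under the target workload; the register addresses, performance counter and workload hook are all invented.

```cpp
#include <cstdint>

// Invented memory-mapped CSRs on a prototype board.
volatile uint32_t* const FIFO_WATERMARK = reinterpret_cast<volatile uint32_t*>(0x40000010);
volatile uint32_t* const PERF_MBPS      = reinterpret_cast<volatile uint32_t*>(0x40000020);

void run_target_workload() { /* e.g. replay a recorded OS I/O trace */ }

// Sweep one CSR under the real workload and keep the best setting. Only a
// live system exposes the arbiter, DRAM-refresh and OS-timing interactions
// that decide where the optimum sits.
uint32_t tune_fifo_watermark() {
    uint32_t best_wm = 4, best_mbps = 0;
    for (uint32_t wm = 4; wm <= 64; wm += 4) {
        *FIFO_WATERMARK = wm;
        run_target_workload();
        if (*PERF_MBPS > best_mbps) { best_mbps = *PERF_MBPS; best_wm = wm; }
    }
    *FIFO_WATERMARK = best_wm;
    return best_wm;
}
```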

Functional verification has evolved tremendously in the past decade. New methodologies provide great benefit at the block, chip and system levels, but the verification problem has grown beyond hardware alone; software is now the driving factor in creating system-level verification environments.

As the lines marking the responsibilities of the hardware and software teams, and ownership of implementation and debug, continue to blur, new methodologies must be adopted to effectively validate an entire IP or silicon solution.

As the implementation of the NAND flash cache controller platform shows, the ability to design and perform system-level verification efficiently can yield a significant competitive advantage, especially as software solutions become expected deliverables alongside complex IP or silicon.

Sean Smith is Chief Verification Architect at Denali Software Inc. (www.denali.com).
