Automating C test cases for embedded system verification

As system-on-chip (SoC) designs proceed on their march toward ever-greater complexity, test suites containing thousands of lines of code for system-level verification continue to be written by hand, a quaintly old-school and ineffective practice that defies the adage "automate whenever possible." This is especially true for the C tests that run on an SoC's embedded processors to verify the entire device prior to fabrication.

Automating verification test composition where possible has been shown to increase productivity in many phases of SoC development. Constrained-random techniques, for example in a Universal Verification Methodology (UVM) testbench, use randomized test vectors directed at specific scenarios to increase coverage. While these techniques have improved verification efficiency at the hardware block level, the design is still treated as a black box, with stimulus, checks and coverage code written separately, which remains an onerous and error-prone task for large blocks.

It is hard to extend this methodology to the system level, given the need to combine processor test code with I/O transactions, often executed on an emulator or prototyping system. To properly verify an SoC, the processors themselves must be exercised, yet UVM and other constrained-random approaches do not account for code running on those processors. In fact, to use UVM on an SoC, the processors are often removed and replaced by virtual inputs and outputs on the SoC bus, allowing the subsystem, minus the processors, to be verified.

SoC verification engineers recognize the limitations of constrained-random testbenches, driving them to handwrite C tests to run on the processors in both simulation and hardware emulation, even though such tests fall short of fully exercising the SoC design. The performance of these verification platforms is not good enough to run a full operating system (OS), so the tests execute "bare-metal," which adds significant overhead to the composition effort. It is unusual for handwritten tests, especially without the aid of OS services, to run in a coordinated way across multicore processors leveraging multiple threads. The result is that aspects of SoC behavior such as concurrent operation and coherency are only minimally verified.
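To see why such tests are costly to write, consider a minimal handwritten bare-metal test of the kind described above, sketched in C. The DMA controller, its register layout and its addresses are all hypothetical, invented purely for illustration; a real test would pull these from the project's address map headers.

    #include <stdint.h>

    /* Hypothetical memory-mapped DMA controller registers (illustrative only). */
    #define DMA_BASE      0x40001000u
    #define DMA_SRC       (*(volatile uint32_t *)(DMA_BASE + 0x00))
    #define DMA_DST       (*(volatile uint32_t *)(DMA_BASE + 0x04))
    #define DMA_LEN       (*(volatile uint32_t *)(DMA_BASE + 0x08))
    #define DMA_CTRL      (*(volatile uint32_t *)(DMA_BASE + 0x0C))
    #define DMA_STATUS    (*(volatile uint32_t *)(DMA_BASE + 0x10))
    #define DMA_CTRL_GO   0x1u
    #define DMA_STAT_DONE 0x1u

    static uint32_t src_buf[64], dst_buf[64];

    /* One hand-coded, single-threaded test: program a DMA transfer,
     * busy-wait for completion (no OS, so no sleep or interrupt thread),
     * then check the data. Returns 0 on pass, nonzero on fail. */
    int test_dma_copy(void)
    {
        for (uint32_t i = 0; i < 64; i++) {
            src_buf[i] = 0xA5A50000u + i;    /* stimulus, written by hand */
            dst_buf[i] = 0;
        }

        DMA_SRC  = (uint32_t)(uintptr_t)src_buf;
        DMA_DST  = (uint32_t)(uintptr_t)dst_buf;
        DMA_LEN  = 64u * sizeof(uint32_t);
        DMA_CTRL = DMA_CTRL_GO;

        while ((DMA_STATUS & DMA_STAT_DONE) == 0)
            ;                                /* bare-metal polling loop */

        for (uint32_t i = 0; i < 64; i++)    /* checks, also written by hand */
            if (dst_buf[i] != 0xA5A50000u + i)
                return 1;
        return 0;
    }

Every address, polling loop, stimulus pattern and check here is coded by hand, and each new scenario variant multiplies the effort, which is why coordinated multicore versions of such tests are rarely attempted.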

Automatically generating C tests

Of course, automatically generated C tests make more efficient use of engineering resources. They also increase coverage. Generated C test cases can exercise more of the SoC's functionality than handwritten tests and seek out hard-to-imagine complex corner cases. Multi-threaded, multi-processor test cases can exercise all parallel paths within the design to verify concurrency. They can move data among memory segments to stress coherency algorithms, and coordinate with I/O transactions so that data is sent to the chip's inputs or read from its outputs at the right time. The overall effect is to raise system functional coverage, typically to greater than 90% from levels that are characteristically far lower.
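As a rough sketch of what one leg of a generated concurrency test might look like, the C fragment below hands a buffer from one core to another through shared memory and checks the result on the consuming side. The handshake, data pattern and core assignments are hypothetical; a generator would emit many such producer/consumer legs, interleaved across cores and memory regions, to stress the coherency logic.

    #include <stdint.h>

    /* Hypothetical two-core handshake through shared memory (illustrative). */
    static volatile uint32_t flag;   /* 0 = empty, 1 = full           */
    static uint32_t buf[16];         /* lives in one memory segment   */

    void core0_produce(void)         /* runs on CPU 0 */
    {
        for (uint32_t i = 0; i < 16; i++)
            buf[i] = i * 3u + 1u;    /* generated stimulus pattern    */
        __sync_synchronize();        /* order data writes before flag */
        flag = 1;
    }

    int core1_consume(void)          /* runs on CPU 1; returns 0 on pass */
    {
        while (flag == 0)
            ;                        /* spin until producer is done   */
        __sync_synchronize();        /* order flag read before data reads */
        for (uint32_t i = 0; i < 16; i++)
            if (buf[i] != i * 3u + 1u)   /* self-check emitted with stimulus */
                return 1;
        return 0;
    }

The value of generation is that hundreds of such legs, with varied patterns, buffer placements and core pairings, can be produced and checked automatically rather than coded one at a time.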

The test generation software, known as Test Suite Synthesis, uses an easy-to-understand, graph-based scenario model that captures intended design behavior. These models may be written in the Accellera Portable Stimulus Standard (PSS), coded in native C++, or described visually. Scenario models are created by design or verification engineers as a natural part of SoC development, since they resemble the traditional chip dataflow diagrams that might be drawn on a whiteboard to explain part of the design specification.

These models inherently include stimulus, checks, coverage detail and debug information, providing the generator with everything it needs to produce high-quality, self-checking C test cases that stress every aspect of the design. Because the models are hierarchical and modular, tests developed at the block level can be reused entirely as part of the full-SoC model and easily shared with different teams and across projects. Finally, the single model of intent can be decomposed by the synthesis tool into concurrent tests across threads and I/O ports, all synchronized together.

Advantages of test suite synthesis

One significant advantage of test suite synthesis is the ability to define coverage goals up-front on the intent model. Once the intent has been specified, the tool can analyze it to understand the number of tests that could be produced and the coverage of functional intent that would be achieved.

For an SoC, this can number many thousands of tests. Coverage goals can then be set by constraining the intent to be tested and focusing the tool on key areas. This capability avoids the painful iterative loop of traditional approaches: set up the tests, run the verification tool, assess the coverage achieved, and then rework the tests, over and over again.

In one typical project, on a large SoC developed by a well-known semiconductor company, the verification engineers reduced test composition time to 20% of that previously required for handwritten tests. The automation technology produced more rigorous test cases, increasing coverage from 84% to 97%. In addition, the models are portable.

A single model can generate test cases for virtual platforms, register transfer level (RTL) simulation, emulation, field programmable gate array (FPGA) prototypes or an actual chip in the lab undergoing post silicon validation.

Debug is another time sink for engineers, especially at the SoC level. If a test case uncovers a lurking design bug, the verification engineer must understand which test triggered the bug in order to track down its source. A test case failure might also be due to a mistake in the scenario model itself, so it must be possible to correlate the test case back to the graph where the design intent was captured. Synthesis therefore produces highly modular and self-contained tests that are easily decomposed, so that the path from the test executed to the bug discovered is easy to see.

Application scenarios

Synthesized test cases can exercise realistic use cases, called application scenarios, for the design. For example, consider the digital camera SoC shown in Figure 1.


Figure 1: Image Processing SoC Example. (Source: Breker Verification Systems)

The SoC block-level components include two processors, the peripheral devices and memory. A simple graph for the SoC is shown below the block diagram. The graph includes the possible high-level paths that may be exercised in the SoC verification process. For example, one possible scenario, expressed in the top path of the graph, reads a JPEG image from the SD card and passes it to the photo processor via an allocated region in memory. The image is processed into a form that can be displayed and loaded into a second block in memory. From there, it is passed to the display controller. Of course, each one of these high-level blocks is hierarchical in nature, with many actions and decisions being executed as part of the process.
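A generated C test for that top path might reduce to a sequence of calls along the lines of the sketch below. The action names, buffer sizes and the testos_alloc() allocator are hypothetical placeholders for code a synthesis tool would emit from the lower levels of the graph hierarchy.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical leaf actions; in a real flow each is itself generated
     * from a lower level of the graph hierarchy. */
    extern void sd_read_jpeg(void *dst, size_t max_len);
    extern void photo_process(const void *src, void *dst, size_t len);
    extern void display_show(const void *frame, size_t len);
    extern void *testos_alloc(size_t len);  /* generator's OS-like allocator */

    /* Top path of the graph in Figure 1 as a generated C sequence:
     * SD card -> memory region A -> photo processor -> memory region B
     * -> display controller. */
    void scenario_sd_to_display(void)
    {
        void *region_a = testos_alloc(64u * 1024u);  /* JPEG buffer            */
        void *region_b = testos_alloc(64u * 1024u);  /* processed-frame buffer */

        sd_read_jpeg(region_a, 64u * 1024u);             /* action: read image */
        photo_process(region_a, region_b, 64u * 1024u);  /* action: convert    */
        display_show(region_b, 64u * 1024u);             /* action: display    */
    }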

The synthesis tool takes the randomized tests and schedules them appropriately. In the simplest form, as shown in the figure, each test might be scheduled into a single thread, followed by the next test, and so on. However, the ability of test cases to stress the SoC comes from interleaving applications across multiple threads and multiple processors. The tool runs as many applications in parallel as the inherent concurrency of the design supports, allocating memory as it goes in as tortuous a fashion as possible. This is also shown as an alternative in the figure, where the tests are spread across three threads, making use of various regions allocated across the SoC memories.
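One way to picture the scheduler's output is a flattened table binding each action to a processor and thread, with a small per-core dispatch loop, as in the hypothetical C sketch below; the action names and assignments are invented for illustration, and per-thread multiplexing is omitted for brevity.

    #include <stdint.h>

    typedef void (*action_fn)(void);

    struct sched_entry {
        uint8_t   cpu;      /* which processor runs the action */
        uint8_t   thread;   /* which thread on that processor  */
        action_fn action;   /* the generated action body       */
    };

    extern void act_sd_read(void), act_photo_proc(void), act_display(void);
    extern void act_net_rx(void),  act_crypto(void),     act_mem_move(void);

    /* Hypothetical flattened schedule: one camera path interleaved with
     * unrelated traffic, in dependency order. */
    static const struct sched_entry schedule[] = {
        { 0, 0, act_sd_read    },
        { 1, 0, act_net_rx     },
        { 0, 1, act_photo_proc },
        { 1, 1, act_crypto     },
        { 0, 0, act_mem_move   },   /* extra traffic to stress memories */
        { 0, 1, act_display    },
    };

    /* Per-core dispatcher: run, in order, the actions assigned to this CPU. */
    void run_schedule(uint8_t my_cpu)
    {
        for (unsigned i = 0; i < sizeof schedule / sizeof schedule[0]; i++)
            if (schedule[i].cpu == my_cpu)
                schedule[i].action();
    }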

Of course, this example is presented at a high level to make the process clear. In reality, the hierarchical graph is flattened by the synthesis tool, creating a large number of actions and connections. These also include randomized decisions, which need to be run through a solver algorithm. As the graph is walked, AI planning algorithms are employed that inspect the desired outputs and optimize the input tests to match them. The synthesis tool includes OS-like services that allocate memory, provide address map access, process interrupts and perform other tasks required to complete the test structures. The tests are then scheduled randomly, with storage and other resources allocated appropriately.
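As an illustration of one such OS-like service, the sketch below shows a minimal bare-metal bump allocator of the sort a test runtime might use to hand out aligned buffers without a real OS. The function name, pool size and alignment are assumptions made for this example.

    #include <stdint.h>
    #include <stddef.h>

    /* Minimal bare-metal bump allocator (illustrative only). Tests can
     * request buffers without a real OS; freeing is not supported, since
     * each generated test runs once and then the platform is reset. */
    #define POOL_SIZE (256u * 1024u)

    static uint8_t pool[POOL_SIZE] __attribute__((aligned(64)));
    static size_t  pool_used;

    void *testos_alloc(size_t len)
    {
        size_t aligned = (len + 63u) & ~(size_t)63u;  /* cache-line align */
        if (pool_used + aligned > POOL_SIZE)
            return NULL;                              /* pool exhausted   */
        void *p = &pool[pool_used];
        pool_used += aligned;
        return p;
    }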

Conclusion

Much as constrained-random testbenches eliminated manual work in block verification, synthesized test content for embedded processor-based SoCs has been proven to reduce system-level verification effort. Furthermore, this solution is now being applied at the block level and for post-silicon validation. In this way, automated C test case generation applies the "automate whenever possible" adage, dramatically improving coverage while shortening verification schedules.


Dave Kelf is vice president and chief marketing officer at Breker Verification Systems responsible for all aspects of Breker’s marketing activities, strategic programs and channel management. He most recently served as vice president of worldwide marketing solutions at formal verification provider OneSpin Solutions. Among his prior positions, Kelf was president and CEO of Sigmatix, Inc.; at Cadence Design Systems, he was responsible for the Verilog and VHDL verification product lines; and at Co-Design Automation and then Synopsys, he oversaw the successful introduction and growth of the SystemVerilog language. Kelf holds a Bachelor of Science degree in Electronic Computer Systems from the University of Salford and a Master of Science degree in Microelectronics from Brunel University, both in the U.K., and an MBA from Boston University.

 
