Doing C code unit testing on a shoestring: Part 3 - Building a unit test framework

Ark Khasin, MacroExpressions

September 25, 2007

A reasonable framework for effective unit testing can be based on the notion of a test set - a collection of tests covering one unit. A test in the set consists of:

* Description
* Acceptance criteria
* Test setup code (optional)
* A number of test cases
* Test cleanup code (optional)

A test case consists of:

* Description (optional)
* Parameters (optional)
* The number of repetitions
* Test case execution code (which actually exercises a function you're testing)

I shall not, of course, insult your intelligence by elaborating on how to model this framework in C and how to write the generic code executing a test set. A few pointers are due here though.
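
Still, for concreteness only, here is a minimal sketch of one possible way to model it in C; the type and field names are mine, not a prescribed interface:

/* One possible modeling of a test case and a test set; illustrative only */
typedef struct {
    const char *description;      /* optional */
    const char *params;           /* optional parameter description */
    unsigned    repetitions;      /* how many times to run the case */
    void      (*execute)(void);   /* exercises the function under test */
} test_case_t;

typedef struct {
    const char        *description;
    const char        *acceptance_criteria;
    void             (*setup)(void);    /* optional; may be NULL */
    void             (*cleanup)(void);  /* optional; may be NULL */
    const test_case_t *cases;
    unsigned           num_cases;
} test_set_t;

/* Generic test set execution, in outline */
static void run_test_set(const test_set_t *ts)
{
    unsigned i, rep;
    if (ts->setup != NULL) ts->setup();
    for (i = 0; i < ts->num_cases; i++) {
        for (rep = 0; rep < ts->cases[i].repetitions; rep++) {
            ts->cases[i].execute();
        }
    }
    if (ts->cleanup != NULL) ts->cleanup();
}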

Instrumenting the unit under test
We have put together a bit of magic whereby compiling the UUT with the definition

INSTRUM_HEADER="instrum_common.h"

on the command line automatically instruments the UUT. There can be legitimate cases, however, where the common instrumentation is not what you want. For example, as we discussed in the beginning, instrum_common.h has

#define INSTRUM_STATIC /*nothing*/

and you want

#define INSTRUM_STATIC extern

The solution is to invent your own instrumentation header, instrum_myown.h, and pass it as INSTRUM_HEADER. The preferred way is not to redo all the work but to include instrum_common.h in instrum_myown.h, undefine the inadequate definition, and redefine it appropriately, e.g.:

#include "instrum_common.h"

#undef INSTRUM_STATIC
#define INSTRUM_STATIC extern
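
For illustration, with a GCC-style command line (the exact option and quoting depend on your compiler, shell and make system), selecting the custom header might look like:

cc -c -DINSTRUM_HEADER='"instrum_myown.h"' uut.c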

Producing the test set output
The purpose of executing a test set is to generate an output file. For easier comprehension, each output item should indicate whether it was produced by the harness, by the instrumentation, or by a stub. Depending on the setup, a run should produce either HTML output or plain-text output.
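
As a sketch of how such tagging might be arranged (the function name and tags here are only illustrative), every piece of output can go through a helper that prefixes it with its origin:

#include <stdio.h>
#include <stdarg.h>

/* Illustrative output helper: prefix every line with its origin
   ("HARNESS", "INSTRUM" or "STUB") so the output file is self-explanatory */
static void test_output(const char *origin, const char *fmt, ...)
{
    va_list ap;
    printf("%s: ", origin);
    va_start(ap, fmt);
    vprintf(fmt, ap);   /* an HTML setup would emit markup instead */
    va_end(ap);
}

/* e.g. test_output("STUB", "timer_start() called with period=%u\n", period); */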

The (nicely formatted) HTML output can then be used for manual inspection of the execution results and for deciding whether the test set passed or failed, which is necessary if some acceptance criteria are manual. The HTML output can be easily equipped with additional information (date/time, user, unit under test, version etc.) and be presented to the auditor as part of test documentation.

The plain-text output can be used for regression testing (I optimized the code; does it still work as before?). It can also be used for post-processing of your choosing so that additional information can be extracted.

Acceptance criteria
Acceptance criteria should be stated for a test in advance; printing them (see the previous section) serves as documentation. They state when you consider a test case passed or failed, and they can be manual or automatic.

A manual criterion simply describes what is expected to come out of the test case; all such criteria are considered passed if you accept the test set output file as a reference. An example of a manual criterion is a notification that a certain function was called, or the lack of such notification.

An automatic criterion produces the expected output independently (e.g., by a different computation algorithm or from pre-tabulated values) and programmatically compares the result of the test case execution with the expected result. The pass/fail information should be printed with the test case output and propagate up to the test and test set summary results.
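
A minimal sketch of an automatic criterion (the names, including sum_range, are mine): the expected value is obtained independently, compared to the actual result, and the verdict is printed and accumulated for the summary:

#include <stdio.h>

/* Illustrative automatic acceptance check */
static unsigned test_failures;  /* rolled up into test and test set summaries */

static void check_equal_int(const char *what, int expected, int actual)
{
    int pass = (expected == actual);
    printf("HARNESS: %s: expected %d, got %d - %s\n",
           what, expected, actual, pass ? "PASS" : "FAIL");
    if (!pass) {
        test_failures++;
    }
}

/* e.g. check_equal_int("sum of 1..10", 55, sum_range(1, 10)); */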

Analyzing the output
Plain-text output is of particular interest for code coverage analysis. As discussed earlier, if your code consists only of if/else statements, 100% code coverage is achieved if controlling expressions in all if statements have been both true and false.

Similarly, if your code doesn't use the switch statement, 100% code coverage is achieved if controlling expressions in all if, while and for statements have been both true and false, provided that there is no unreachable (dead) code. If there is, the compiler (or at least Lint) should inform you about that.

(Note however that if only 99.9% of controlling expressions have been both true and false, we cannot conclude that the code coverage is 99.9%: it can be less because of a variety of nested execution paths not covered at all.)
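
To illustrate: in the sketch below, if rare_error is never true during the test set, the whole guarded block, including its nested decision, stays unexecuted, so a single "missed" expression can hide an arbitrarily large amount of uncovered code:

#include <stdio.h>

/* Illustration only: one controlling expression that is never true
   leaves everything nested under it uncovered */
void handle_input(int value, int rare_error)
{
    if (rare_error) {
        if (value < 0) {           /* nested decision: never evaluated */
            printf("negative input %d\n", value);
        } else {
            printf("non-negative input %d\n", value);
        }
    }
}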

So long as each controlling expression is instrumented and is uniquely identified in the output, it is a matter of simple post-processing of the output file to prove (or disprove) that it was true and false at least once during test set execution.
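
As a sketch of such post-processing (the announcement format assumed here, "INSTRUM: <id> TRUE|FALSE", is mine; yours will follow whatever your instrumentation prints), a small checker might look like this:

#include <stdio.h>
#include <string.h>

#define MAX_IDS 1024
#define ID_LEN  128

static char ids[MAX_IDS][ID_LEN];
static int  seen_true[MAX_IDS], seen_false[MAX_IDS];
static int  num_ids;

static int find_or_add(const char *id)
{
    int i;
    for (i = 0; i < num_ids; i++) {
        if (strcmp(ids[i], id) == 0) return i;
    }
    /* no overflow check, for brevity */
    strncpy(ids[num_ids], id, ID_LEN - 1);
    return num_ids++;
}

int main(void)
{
    char line[256], id[ID_LEN], value[8];
    int i, incomplete = 0;
    /* read the plain-text test output from stdin, one announcement per line */
    while (fgets(line, sizeof line, stdin) != NULL) {
        if (sscanf(line, "INSTRUM: %127s %7s", id, value) != 2) continue;
        i = find_or_add(id);
        if (strcmp(value, "TRUE") == 0)       seen_true[i] = 1;
        else if (strcmp(value, "FALSE") == 0) seen_false[i] = 1;
    }
    for (i = 0; i < num_ids; i++) {
        if (!seen_true[i] || !seen_false[i]) {
            printf("decision %s was not both true and false\n", ids[i]);
            incomplete = 1;
        }
    }
    return incomplete;
}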

Now let's add switch statements to the mix. Assuming that all controlling expressions in all if, while and for statements have been true and false at least once, you have achieved 100% code coverage if and only if each of the case and default statements has been hit at least once. (Having switch statements without a default is generally considered poor practice, yet it can be dealt with; still, that case is more complicated and is omitted here.)

In the output file, instrumented case and default statements that were executed would announce themselves. To verify that all of them were executed, you can scan the source of the UUT to extract the case and default statements and match them against their announcements in the test output; if each of them was announced, you've got 100% code coverage, otherwise, you haven't. This can be done with a not-so-sophisticated script whose complexity may depend on whether or not you want to account for nested switch statements.
