Measuring code coverage is an explicit task in most safety standards for embedded systems, e.g. IEC 61508-3, ISO 26262-6, or DO-178B. Coverage metrics are frequently used as stopping criteria for the testing activity and to evaluate the adequacy of the test data. If the coverage is insufficient, additional test cases can be added where they are expected to be beneficial.
We propose that existing requirements on code coverage should be extended with requirements on 100% coverage of the safety requirements. As an example, the paper illustrates how this can be done for the AUTOSAR end-to-end protection library.
At the unit level, statement coverage is usually considered sufficient for low-integrity software, whereas MC/DC is required for high integrity. IEC 61508-3 explicitly requires 100% code coverage regardless of metric; if 100% cannot be achieved, e.g. due to defensive code, an appropriate explanation shall be given.
ISO 26262-6 and DO-178B are not as explicit as IEC 61508-3, but a motivation is needed when 100% is not reached. Here, we address the question: if 100% code coverage is reached, what is its evidential value in a safety argumentation?
It is not hard to find examples where 100% coverage according to some common metric leaves conditions untested or leaves severe bugs undetected in the code.
When a specific code coverage metric is targeted, it can be used to guide automatic test case generation and to shrink large test suites. However, it has been shown that such automatically generated test cases can be less effective at finding faults than randomly generated ones.
This paper argues that using code coverage metrics to evaluate the completeness of test cases, as prescribed by e.g. ISO 26262, is insufficient in a safety context. On the other hand, it is infeasible to execute test cases that achieve 100% completeness with respect to all possible input data combinations for all requirements.