Verifying embedded software functionality: Combining formal verification with testing
Figure 5.16 below shows the overall role a method such as software model checking can play in software validation. Here we assume that only the software is available for validation—we do not have a finite-state machine–like model of the software available for model checking.
Figure 5.16. Validation flow with software model checking.
Naturally, this means we need to synthesize a finite state transition system out of the software itself. This finite-state transition system is used for model checking.
Given a program P, we abstract it to another program P’ where all data variables in P’ have the type boolean. We call P’ a boolean program; it implicitly represents a finite-state transition system whose states can be explored via model checking.
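As a minimal sketch of this abstraction (the names and the single predicate are illustrative, not taken from any tool), consider abstracting a fragment over the predicate x > 0:

```python
# Concrete program P: manipulates integer data.
def p(x: int) -> bool:
    x = x + 1
    return x > 0

# Boolean abstraction P': the data variable is replaced by the truth
# value of the predicate b == (x > 0). The effect of x = x + 1 on the
# predicate: if x > 0 held before, x + 1 > 0 still holds; otherwise the
# result is unknown, which is modeled as a nondeterministic choice.
def p_abstract(b: bool, nondet: bool) -> bool:
    b = True if b else nondet
    return b
```

Because P’ ranges over finitely many boolean states, a model checker can enumerate them exhaustively, at the cost of the nondeterminism the abstraction introduces.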
The temporal logic properties verified through model checking may be provided by the programmer (where they typically denote desirable properties of the software, often invariant properties), or they may be found by “specification mining” from the test cases.
Specification mining involves finding a software specification (such as invariant properties) by running the software for different test cases and documenting the observed behavior. The Daikon tool, in particular, is useful for mining invariant properties from observed behavior of a program on test cases.
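The idea can be sketched as follows (this illustrates dynamic invariant detection in general, not Daikon's actual interface; the function under test and the candidate invariants are made up for illustration):

```python
# Sketch of dynamic invariant mining: candidate invariants are checked
# against the variable values observed on every test run; those never
# falsified are reported as likely invariants.

def absolute(x):
    return -x if x < 0 else x

candidates = {
    "result >= 0": lambda x, r: r >= 0,
    "result == x": lambda x, r: r == x,
    "result >= x": lambda x, r: r >= x,
}

tests = [-5, -1, 0, 3, 42]
observed = [(x, absolute(x)) for x in tests]

likely = [name for name, check in candidates.items()
          if all(check(x, r) for x, r in observed)]
# "result == x" is falsified by x = -5; the other two survive.
```

Note that the surviving invariants are only as trustworthy as the test suite: an invariant that merely happens to hold on the observed runs will still be reported.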
Combining Formal Verification with Testing
Testing and formal verification are two complementary techniques for validating functionality. Testing exposes only a few program behaviors, but any wrong behavior thereby exposed is truly a wrong behavior—there are no false alarms reported by testing.
On the other hand, formal verification exposes all program behaviors, albeit at an abstract level. However, because of the abstractions involved, “bugs” reported by formal verification can be false alarms. In other words, testing and formal verification have complementary strengths and weaknesses.
Testing methods work at the concrete level: They execute the program with concrete inputs. Naturally, they cannot cover all program behaviors—exhaustively covering all concrete inputs of any real-life program is an impossibility!
Directed testing methods (discussed earlier in this series) alleviate this concern somewhat—they try to cover all program paths, rather than all concrete inputs. Thus, one may say that directed testing methods partition the input space of a program, such that two inputs i, i’ are in the same partition if and only if inputs i, i’ exercise the same program path.
Given such a partitioning of the input space, directed testing methods try to generate one concrete input per partition.
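For illustration (the function f below is hypothetical), a program path can be identified with the sequence of branch outcomes it takes; inputs inducing the same sequence fall in the same partition:

```python
# The path through f, recorded as the sequence of branch outcomes.
# Two inputs are in the same partition iff they yield the same path;
# directed testing aims at one concrete input per path.
def f(x, y):
    path = []
    if x > 0:              # branch 1
        path.append("x>0")
        y = y + 1
    else:
        path.append("x<=0")
    if y > 10:             # branch 2
        path.append("y>10")
    else:
        path.append("y<=10")
    return tuple(path)

# Inputs (1, 2) and (5, 7) follow the same path -> same partition.
# Input (1, 10) takes a different path: y becomes 11 > 10.
```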
However, the number of program paths in any real-life program is also huge—even for a program without loops, b branches can lead to anywhere between b + 1 and 2^b paths, depending on the control flow (see Figure 5.21 below).
Clearly, if a loop with L iterations (where L is large) contains b branches in its body, giving up to 2^b acyclic paths per iteration, we will have (2^b)^L paths in all, a huge number! Hence, covering all program paths simply via testing is also not feasible for most real-life software.
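The arithmetic can be checked directly; taking b = 3 branches per iteration and a modest L = 10 iterations:

```python
b, L = 3, 10

acyclic_paths = 2 ** b            # up to 2^b = 8 paths per iteration
total_paths = acyclic_paths ** L  # (2^b)^L = 8^10 paths overall

print(total_paths)  # over a billion paths from a tiny loop
```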

Figure 5.21. An acyclic program with three branches having (a) 2^3 program paths and (b) 3 + 1 program paths.
Formal verification methods, on the other hand, abstract the behaviors of a program. One example of such an abstraction is predicate abstraction, where we abstract the memory store of a program while leaving the control flow unabstracted.
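As a sketch of what "leaving the control flow unabstracted" means (an illustrative example, not any tool's output): the loop structure below is preserved verbatim, while the integer store x is replaced by the truth values of the predicates x == 0 and x > 0:

```python
# Concrete program: loop over an integer store.
def concrete(x: int) -> bool:
    while x > 0:
        x = x - 1
    return x == 0

# Predicate abstraction over p1: (x == 0) and p2: (x > 0). The while
# loop is kept as-is; only the store update is abstracted. After
# x = x - 1 (with x > 0 beforehand), either x == 0 or x > 0 holds;
# which one is not determined by the predicates alone, so a
# nondeterministic choice is consumed from `choices`.
def abstract(p1: bool, p2: bool, choices) -> bool:
    it = iter(choices)
    while p2:
        p1, p2 = (True, False) if next(it) else (False, True)
    return p1
```

A model checker would explore both outcomes of each choice; here a fixed choice sequence is passed in so the sketch stays deterministic and runnable.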

