Verifying embedded software functionality: The power of dynamic slicing

Abhik Roychoudhury

August 7, 2012


Dealing with Large Slices
The reader may be concerned that the dynamic slice of a real-life program can be too large for human comprehension. We would point out that dynamic slicing is a core method of program understanding, debugging, and validation, and many refinements of the basic method exist to reduce slice size, such as not computing the full slice (e.g., some programmers may inspect only the chains of data dependencies).
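As a rough illustration of one such restriction, the sketch below (not taken from the book) computes a purely data-dependence slice over a recorded execution trace. The TraceEntry layout and the dataSlice routine are hypothetical, assuming each trace record carries the executed source line, the variable it defines, and the variables it uses.

    // Minimal sketch (hypothetical, not the book's algorithm): a restricted
    // dynamic slice that follows only data dependencies over a recorded trace.
    #include <cstddef>
    #include <set>
    #include <string>
    #include <vector>

    struct TraceEntry {
        int line;                        // source line executed
        std::string def;                 // variable defined ("" if none)
        std::vector<std::string> uses;   // variables read
    };

    // Walk the trace backwards from the slicing criterion, collecting the
    // lines that transitively supply values to the criterion variable.
    std::set<int> dataSlice(const std::vector<TraceEntry>& trace,
                            std::size_t critIndex, const std::string& critVar) {
        std::set<int> slice{trace[critIndex].line};
        std::set<std::string> live{critVar};
        for (std::size_t i = critIndex + 1; i-- > 0; ) {
            const TraceEntry& e = trace[i];
            if (!e.def.empty() && live.count(e.def)) {
                slice.insert(e.line);                      // defining line is in the slice
                live.erase(e.def);                         // most recent dynamic definition found
                live.insert(e.uses.begin(), e.uses.end()); // now follow its inputs
            }
        }
        return slice;
    }

A full dynamic slice would additionally follow dynamic control dependencies; restricting attention to data-dependence chains as above is one way to keep the reported set of statements small.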

A more recently proposed method, hierarchical dynamic slicing, addresses this problem by exposing the programmer to the slice gradually, in a hierarchical fashion, rather than presenting a very large slice all at once after it is computed. The key idea is simple: we systematically interleave the slice computation and comprehension steps.

Conventional work on slicing has concentrated only on computing the slice, leaving comprehension of the slice as a postmortem activity. In hierarchical dynamic slicing, the two activities are integrated in a synergistic fashion as follows:

1 - Computation of the slice is guided (to a limited extent) by the human programmer so that very few control/data dependencies in a large slice need to be explored and inspected.

2 - The programmer’s comprehension of the slice is greatly enhanced by the nature of our slice computation, which proceeds hierarchically. For programs with long dependence chains, this allows the programmer to zoom in gradually on selected dynamic dependencies.

To understand the potential benefits of the method, let us examine the factors that make dynamic slices difficult to comprehend:

1 - Many programs have long dependence chains spanning loop and function boundaries. These dependence chains are captured in the slice. However, because the slice is a (flat) set of statements, much of the program structure (loops/functions) is lost, which makes the slice hard to comprehend.

2 - Programs often also have a great deal of inherent parallelism. So, a slice may capture many different dependence chains.

Let us now discuss how hierarchical computation/exploration of slices can help programmers comprehend large slices exhibiting these two features: (a) long dependence chains, and (b) many different dependence chains. Figure 5.7a shows an example program with a long dependence chain. Consider an execution trace of the program ...3, 4, 5, 6, in which lines 3, 4, 5, and 6 of Figure 5.7a below are executed. Slicing this execution trace w.r.t. the criterion (line 6, y) (i.e., the value of y at line 6) yields a slice that contains lines 3, 4, 5, and 6 as well as lines inside the bodies of the functions f1, f2, and f3.

Figure 5.7. (a) A program with a long dynamic dependence chain. (b) The corresponding phases. Dashed arrows represent dynamic dependencies that a programmer needs to follow for debugging.
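Since the figure itself is not reproduced in this excerpt, the following hypothetical C-style fragment has the general shape described above (the actual program in Figure 5.7a may differ): each statement on lines 3 through 6 consumes the value produced by the previous one, forming a single long dependence chain.

    // Hypothetical fragment with the shape described for Figure 5.7a:
    // a single long dynamic dependence chain.
    int f1(int a) { return a + 1; }   // placeholder bodies; the slice also
    int f2(int b) { return b * 2; }   // pulls in the lines inside these
    int f3(int c) { return c - 3; }   // functions

    int compute(int a) {
        int x1 = f1(a);    // line 3
        int x2 = f2(x1);   // line 4
        int x3 = f3(x2);   // line 5
        int y  = x3;       // line 6: slicing criterion (line 6, y)
        return y;
    }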

Because the slice is a (flat) set of statements, the program structure is lost in the slice. This structure is made explicit in Figure 5.7b, which shows the dependence chain hierarchically: the dependencies inside the functions f1, f2, and f3 are not shown. Here, a hierarchical exploration of the dependence chains is clearly less burdensome for the programmer.

Thus, in Figure 5.7b, by inspecting the dependencies hierarchically, the programmer may find it necessary to examine the dependencies inside only a specific function (say, f2). As a result, we avoid inspecting the dependence chain(s) inside the other functions (in this case f1 and f3). Now, let us consider programs with many different dependence chains.

Figure 5.8a below shows a schematic program with several dependence chains, and hence substantial inherent parallelism. If the slicing criterion involves the value of y in line 6, we need to consider the dependencies between y and x3, between y and x2, and between y and x1. These three dependencies are shown as dashed arrows in Figure 5.8b. Again, with the programmer’s intervention, we can rule out some of these dependencies for exploration and inspection.

Figure 5.8. (a) A program with inherent parallelism (several dynamic dependence chains). (b) The corresponding phases. Dashed arrows represent dynamic dependencies that a programmer needs to follow for debugging.
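Again purely as a hypothetical illustration of the shape described for Figure 5.8a (the actual figure may differ), the fragment below has three independent chains that all feed the criterion variable y:

    // Hypothetical fragment with the shape described for Figure 5.8a:
    // three independent dependence chains feeding y at line 6.
    int g(int p, int q, int r) { return p + q + r; }  // placeholder

    int compute2(int a, int b, int c) {
        int x1 = a * 2;          // chain 1
        int x2 = b + 10;         // chain 2
        int x3 = c - 1;          // chain 3
        int y  = g(x1, x2, x3);  // line 6: y depends on x1, x2, and x3
        return y;
    }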

In summary, the hierarchical dynamic slicing method works as follows. Given an execution trace (corresponding to a program input) containing an observable behavior that is deemed an “error” by the programmer, we divide the trace into phases. This division is typically done along loop/procedure/loop-iteration boundaries so that each phase corresponds to a logical unit of program behavior.

Only the interphase data and control dependencies are presented to the programmer; the intraphase dependencies are completely suppressed. The programmer then identifies a likely suspicious phase, which is then subjected to further investigation in a similar manner (dividing the phase into subphases, computing dependencies across these subphases, and so on). This process continues until the error is identified.
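The loop below is a minimal structural sketch of this process, not the book's implementation. The helpers splitIntoPhases, showInterPhaseDependencies, and pickSuspiciousPhase are hypothetical names standing in for the trace-splitting, dependence-computation, and programmer-interaction steps; they are only declared here, and the split is assumed to always produce phases strictly smaller than its input.

    // Structural sketch (hypothetical) of hierarchical slice exploration.
    #include <cstddef>
    #include <vector>

    struct Event { int line; };                  // one executed statement
    struct Phase { std::vector<Event> events; }; // contiguous chunk of the trace

    // Split a trace chunk along loop/procedure/loop-iteration boundaries.
    std::vector<Phase> splitIntoPhases(const std::vector<Event>& trace);
    // Compute and display only the dependencies that cross phase boundaries.
    void showInterPhaseDependencies(const std::vector<Phase>& phases);
    // The programmer marks the phase that looks suspicious.
    std::size_t pickSuspiciousPhase(const std::vector<Phase>& phases);

    void hierarchicalSlice(std::vector<Event> trace) {
        while (trace.size() > 1) {
            std::vector<Phase> phases = splitIntoPhases(trace);
            showInterPhaseDependencies(phases);   // intra-phase deps suppressed
            std::size_t s = pickSuspiciousPhase(phases);
            trace = phases[s].events;             // zoom into the chosen phase
        }
        // trace now holds the single statement suspected to be erroneous
    }

Each iteration narrows the trace to the phase the programmer selects, so the number of dependencies shown at any one time stays small.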

Of course, an underlying assumption here is that the programmer will be able to identify the erroneous statement once it is pointed out to him or her. One may object that such a hierarchical exploration of dynamic dependencies requires the programmer’s intervention, whereas conventional dynamic slicing is fully automatic.

Here we should note that detecting an error by exploring a conventional dynamic slice also involves substantial manual effort; that effort in exploring/comprehending the slice simply happens after the slice has been computed. In the hierarchical method, by contrast, we interleave the computation and comprehension of dynamic dependencies.

As in dynamic slicing, the computation of the dynamic dependencies is automatic in our method; only the comprehension involves the programmer. Moreover, the programmer is exposed to the complex chain(s) of program dependencies gradually, rather than all at once, thereby allowing better program comprehension.

To read Part 1, go to Why it’s necessary.
Next in Part 3: Fault localization

Abhik Roychoudhury is associate professor at the National University of Singapore. He received his Ph.D. in computer science from the State University of New York at Stony Brook. His research interests are system modeling and validation with a specific focus on embedded software and systems.

Used with permission from Morgan Kaufmann, a division of Elsevier. Copyright 2009, from "Embedded Systems and Software Validation," by Abhik Roychoudhury. For more information about this title and other similar books, visit www.elsevierdirect.com.
