Static analysis tip: How to Effectively Apply a Static Analysis Tool

Matthew Hayward

January 21, 2009


The beginning and end of an effective process for static analysis can be summed up as "inspect every defect and fix all defects." As simple as this sounds, most practitioners of static analysis do not execute on this approach.

Because static analysis identifies root causes, it is easy for a knowledgeable developer who understands the intended design of a piece of code to assess the severity of a given defect within that code. Data suggests that this inspection takes less than five minutes per defect, on average.
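
As a sketch of what such a report looks like, consider the hypothetical C fragment below (the function and structure names are invented for illustration). A checker for dereferences that precede NULL checks would flag the exact line where the problem originates, so a reviewer who knows whether callers may legitimately pass NULL can judge the severity in a few minutes.

```c
#include <stdio.h>
#include <stdlib.h>

struct config {
    const char *log_path;
};

/* Hypothetical code of the kind a static analyzer flags: a report such as
 * "dereference before NULL check of 'cfg'" points at the exact line where
 * the problem originates. */
void log_startup(struct config *cfg)
{
    /* Defect: 'cfg' is dereferenced here ... */
    printf("logging to %s\n", cfg->log_path);

    /* ... but checked for NULL only afterwards, which implies the author
     * believed NULL was a legal argument. */
    if (cfg == NULL) {
        fprintf(stderr, "no configuration supplied\n");
        exit(EXIT_FAILURE);
    }
}

int main(void)
{
    struct config cfg = { "/var/log/app.log" };
    log_startup(&cfg);   /* fine */
    log_startup(NULL);   /* crashes before the NULL check can run */
    return 0;
}
```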

This inspection time is orders of magnitude less than the development time required to fix a severe, difficult-to-detect defect that has already reached the customer. The vast discrepancy between the cost of inspecting a defect of minor importance and the cost of failing to inspect a defect of major importance that impacts a customer argues for inspecting all the defects.

While most development organizations can rally behind the idea of inspecting every defect, the idea of fixing every defect is much more controversial. This is understandable for two primary reasons:

* Defects found through traditional dynamic testing or reported by customers often require substantial diagnostic effort before they can be fixed, regardless of the severity of the issue.

* Static analysis tools report both false positives and issues of minor importance.

With modern static analysis tools that essentially eliminate diagnostic effort and boast false positive rates under 15%, these objections largely disappear. While there are a number of compelling benefits to fixing all statically reported defects, the fact that each such defect would otherwise become a show-stopping customer bug is not among them. Some of the reasons for advocating a strategy of fixing all the defects are:

* Pedagogical: Requiring developers to fix all the defects instills good habits. Inconsistent and potentially crash-causing code may have no customer impact when it is written as part of the test suite, but that is not a good reason to permit such practices in development.

* Human fallibility: The reason static analysis tools exist and are useful is that humans are not very good at writing perfect code. In choosing not to fix all the defects, we must trust our judgment about which defects are unimportant. The only way to be certain every critical defect has been fixed is to fix every defect.

* Efficiency: This is probably the most surprising and most compelling reason to fix all the defects: doing so will in many cases be faster than not fixing all the defects. When some defects are not fixed, a process must be developed to decide which defects to address and which to ignore. This process will be time consuming, and developers will often spend more time arguing why a certain defect should not be fixed than the time required to simply fix the bug. The sketch after this list illustrates a typical case.
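
To make the efficiency argument concrete, here is a hypothetical example of the kind of "minor" defect that often triggers debate: a file handle leaked on an error path that "never happens in practice." The names are invented; the point is that the one-line fix (closing the file before returning) is almost always cheaper than the discussion about whether the path is reachable.

```c
#include <stdio.h>

/* Hypothetical "minor" defect: a resource leak on a rarely taken error
 * path. Arguing that the path never occurs usually takes longer than
 * the one-line fix. */
long count_bytes(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return -1;

    if (fseek(f, 0, SEEK_END) != 0)
        return -1;           /* Defect: 'f' is leaked on this path. */

    long size = ftell(f);
    fclose(f);
    return size;
}

int main(void)
{
    printf("%ld bytes\n", count_bytes("/etc/hostname"));
    return 0;
}
```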

Neal Stephenson coined the term "metaphor shear" to describe the phenomenon whereby computer users are surprised by the behavior of an application because they subscribe to a flawed metaphor. Users of static analysis often suffer from metaphor shear surrounding the notion of a defect. Static analysis reports defects. However, these defects are an entirely different species from those reported by customers or identified by traditional dynamic testing techniques.

The key difference between static defect reports and customer defect reports is that static reports identify the root cause of an eventual program behavior, whereas customers report the symptoms of such root causes. This means that statically reported defects are easy to understand and fix, but that they lack an inherent sense of priority.
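
The hypothetical fragment below illustrates the difference. A static report points at the out-of-bounds copy itself (the root cause); the customer, by contrast, would report corrupted data or an intermittent crash observed somewhere else entirely (the symptom), with no indication of where the problem was introduced.

```c
#include <stdio.h>
#include <string.h>

#define NAME_LEN 8

struct record {
    char name[NAME_LEN];
    int  balance;        /* happens to sit right after 'name' in memory */
};

int main(void)
{
    struct record r;
    r.balance = 100;

    /* Root cause: a static report flags this copy, because the source
     * string needs 11 bytes but 'name' can hold only 8. */
    strcpy(r.name, "overlylong");

    /* Symptom: what the customer reports is a corrupted balance (or a
     * crash) observed far from the line that caused it; the exact effect
     * depends on compiler and layout. */
    printf("balance = %d\n", r.balance);
    return 0;
}
```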

Due to this key difference, it is imperative that the management of development organizations foster adoption of the tool and instill a balanced sense of priority between the two kinds of reports.

When implementing static analysis tools, it is imperative to provide time in the development schedule for using the tool and for removing all static defects. It is also important that a knowledgeable developer, who is familiar with the design intent of the code in question, inspect each static analysis defect.

Once all statically detected defects have been inspected, the argument for fixing all such defects is stronger for several reasons, efficiency surprisingly among them.

Matthew Hayward is Director of Professional Services for Coverity Inc. Since he joined the company in 2005, he's worked with hundreds of development organizations to define and implement strategies for effectively applying static analysis to improve software quality and security. He holds an M.S. in Computer Science and B.S. degrees in Mathematics and Physics from the University of Illinois.
