Five steps to reliable, low-cost, bug-free software with static code analysis

Nikola Valerjev, Green Hills Software

July 05, 2014


Numerous studies have shown increases in software code reliability and developer efficiency through the use of static source analysis. There is no dispute that there are large benefits to be gained for most organizations.

One problem is that there are no standards that specify what static source analysis means, or what types of defects it should be detecting. Several government agencies, including the Department of Homeland Security, the National Institute of Standards and Technology, and the FDA, have been trying to develop a set of guidelines and recommendations to specify exactly that, but no clear solution has emerged yet.

One of the fundamental issues has been the difficulty in defining what defects need to be detected and at what rates. However, that doesn’t take away from the fact that static source analysis has been proven as an extremely effective way to solve many issues that software developers are faced with.

With so many choices and no standards, a new problem arises: How do you pick a static analysis tool that is right for your organization?

Since you will be instituting a new practice for your developers, you also need to make sure you overcome any initial adoption resistance. After all, static analysis (SA) tools are designed to expose programmers’ bugs and make them public. No one likes to hear how bad their coding is. Even the most accurate SA tools will get a negative emotional response at first, once a programmer realizes that an automated tool was “smarter.”

Finally, it is hard to directly measure the positive effects of static analysis. Instead, you know it is working by the absence of negative things, and can only be sure of it in retrospect. Because of this delayed response, programmers don’t get the instant gratification that would motivate them to use the tool all the time, even though to get the greatest benefit from SA tools, they should do so. A certain amount of discipline will be required to make sure your SA tools are used all the time by everyone.

These issues are the essential questions we will try to answer in the rest of this paper.

Steps to Success
The process of adopting static source analysis in an organization is not easy. Static source analysis will offer different benefits to different organizations, depending on their needs. For instance, a company that consists of only a few developers and is looking to really crack down on the security of their software will have very different needs than a company that has dozens of developers and is looking to boost their release cycle efficiency.

The process to adopting static source analysis can be broken down into five distinct steps:

  • Step 1: Ask yourself: Do I need static source analysis?
  • Step 2: Develop evaluation criteria
  • Step 3: Evaluate Trial Versions
  • Step 4: Overcome Growing Pains
  • Step 5: Gain Continued Adoption

Step 1: Ask Yourself - Do I need static source analysis?
The whole point of this step is to answer that question, and prepare for possible Step 2. If you realize that there will be very little to gain from any type of SA tool simply because the problems you have are unrelated to what static analysis can do, you should spend your time and energy on things that are more important. However, keep an open mind in finding out what your problems truly are.

For some of the questions it may seem that you need to conduct extensive statistical studies, instituting all sorts of bureaucracy. This is not so – you can get most of the information just by sampling a few things. Remember, what you are looking for are large, obvious problems. Sampling a dozen or so data points should reveal those big problems pretty quickly. Otherwise, it is not worth your time.

These are the questions you should ask yourself:

Are any of the following problems of great concern: software reliability, security, slow release cycles, buggy software in the hands of customers?
An eager salesperson might look for an obvious “yes” to proudly exclaim “you need static source analysis!” However, you really should figure out how much of a problem these things are, and what is causing them. For instance, if slow release cycles are a problem, is that because quality assurance (QA) and developers spend a lot more time finding and fixing software bugs than you originally predicted, or is something else the problem (e.g., the server that all the tests run on keeps crashing)? If it’s the former, static analysis (SA) tools may help you. If it’s the latter, you might want to have a heart-to-heart talk with your system administrator and order some new servers.

Are software reliability and/or buggy software in the hands of customers important?
If software reliability is an issue, refer to your database of incidents. If you don’t have one, start one right now. It’s amazing what you can learn about your own software once you let other people use it and comment on it (a.k.a. complain). See what types of problems are most common. If software bugs rank very high, figure out what types of problems are causing them and how they are being resolved.

Is security important?
This one is harder to detect through a database of incidents. If you have had security problems reported, that on its own is not good. But if you are the type of person who sees the silver lining in everything, at least the good news is that you know you have security problems that you should be solving.

If you haven’t had security problems reported, that by itself doesn’t mean much. You need to look at whether there is any potential of having your software attacked, and what you are doing to prevent it. For instance, if your source base is large, and no one in your organization truly understands how it all ties together, chances are you have code paths with security holes in them. Using static analysis may be a good idea. Security is a much larger topic, and static analysis is only one part of a much larger set of methods that you need to institute when designing and implementing secure software.

What are the types of bugs that you commonly see?
While SA tools can differ greatly in the types of defects they detect and how well they do it, there is a common set of problems that most of the tools will be able to find. Most of them will detect things like NULL-pointer dereferences, resource leaks, uninitialized variables, buffer overflows, etc. If you have no idea what those things mean, find some whitepapers and brochures that SA tool providers will be more than eager to give you. Alternatively, just Google them – these are problems that C/C++ programmers face all the time, with volumes of text having been written on the subjects.

What has been the most expensive bug you have encountered?
This is a loaded question, because it forces you to quantify the importance of bugs. Every product has that one component that everyone knows is riddled with bugs. It was probably developed as a prototype over a weekend by an engineer who left the company long ago, and today it’s just a quagmire of half-baked ideas that by some miracle still makes it into the release. Chances are, your engineers’ perception is that this component is in serious need of improvement and should be at the top of the priority list, while your customers probably don’t even know it exists or have learned how to get along without it. By knowing which bugs are actually causing you to lose money, upsetting your customers, and slowing down releases, you can make much more informed decisions about which bugs actually need to be addressed. Then look to see if the costly bugs match defects that SA tools can detect.

Have you tried static analysis tools before (and been disappointed)?
There are many reasons why SA tools might not have worked for you in the past. Were the tools not good at finding defects? Did you have problems getting programmers to use them? What were those problems (false positive rate, execution time, portability, bad user experience)? How often was the SA tool run (all the time, once per week, before release, never)? If you stopped using it, how long ago was that? Since then, have you noticed any difference in software quality, speed of releases, etc.?

Remember, SA tools vary greatly in what they can detect, how they integrate with your build environment, and how fast they execute. If you decide, based on the other questions, that you need SA tools, how will you prevent the rejection from occurring this time around?

Step 2: Develop Evaluation Criteria
If you have gone through the first step and concluded that some sort of static analysis tools should be able to help you in significant ways, it is time to figure out what it is that you actually need from a solution.

The goal here is to come up with the evaluation criteria that you can use when later evaluating trial versions. The important thing to remember is that with static analysis, there is no such thing as “one size fits all.” The process of choosing it is very tightly coupled with the problems you are trying to solve. These are the factors you should consider:

Trial availability
If you can’t see the tool run on your code, you may as well stop here. The effectiveness of an SA tool is hard to measure through a checkbox list because static analysis can mean so many different things, and SA tools vary a lot. There are no clear quantifiable measures to compare apples to apples. The best approach you can take is to give each tool a spin on your sources and judge its effectiveness.

Most SA tool vendors will at least show you the tools in action on your code when they make a site visit, but some will not want to leave the results behind for fear of losing the sale (thinking that once you have the results, you would have no need to purchase the product). You should work with the vendors to retain some piece of information that you can use for later analysis of your other factors.

Price
This seems to be one of the most popular factors. It makes a certain amount of sense: since many organizations adopt SA tools to improve efficiency and get to market faster, it is important to figure out whether the cost will outweigh the benefit. Price might be of less importance to people looking for higher security, since static analysis in that case is not just a process optimization, but a quality enhancement.

Pretty much what this boils down to is doing the ROI (return-on-investment) study.

SA tool vendors might use ROI information from studies done with other companies to support the case for buying their product. You need to be very careful when you step into these waters. What worked for one organization might be completely different for another. You will have a much better idea of what applies to you if you run the ROI analysis on your own process.

Here is an example of where you may start: many studies have attempted to estimate the cost to produce and deliver software to market. It is estimated that it cost $1,000 to develop each line of code on the space shuttle. Developing software to the stringent DO-178B Level A standard (for critical aircraft systems) has been estimated at hundreds of dollars per line. On the lower end, Red Hat Linux has been estimated to cost $33 per line of code. Other estimates generally place the cost of good-quality commercial software in the range of $30 to $40 per line of code.

Yet other studies have estimated how this development time is spent. Most concur that more than half of software development time is spent debugging: identifying and correcting software defects. If we use an estimate of $30 per line of code in total cost, this means that organizations conservatively spend $15 to debug each line of code. If static analysis can eliminate a portion of that cost, you can come up with reasonable return gains.

If the task of doing ROI seems daunting, remember that all you need to do is collect some random sample information on the cost of fixing bugs. That should give you a pretty good approximation of where you stand.

Since price and cost can be measured in so many different ways, it is easy to get sidetracked and fall into a trap where you just do number crunching. Moreover, a good salesperson will always have a way to present their numbers better than the competition. Remember, since there is so much variation in the effectiveness of different SA tools, you should leave this item for later analysis. Some other factor might eliminate a choice a lot more easily.

True Positive Rate
Your SA tools should find a significant number of errors to make them worthwhile; the higher, the better. The whole reason you are considering adopting the tools is to solve the problems you identified in Step 1.

The best way to measure this is to turn the SA tool loose on some sample code taken directly out of your product. You may want to purposely reintroduce some of the bugs you fixed earlier, the ones you speculated during Step 1 that SA tools should be able to catch.

If you are considering SA tools for higher security, you will want this metric to be extremely high. You will want to see anything and everything that could possibly go wrong, as long as it is consumable, even at the expense of being given too much information.
