Five steps to reliable, low-cost, bug-free software with static code analysis

Numerous studies have shown increases in software code reliability and developer efficiency through the use of static source analysis. There is no dispute that there are large benefits to be gained for most organizations.

One problem is that there are no standards that specify what static source analysis means, or what types of defects it should be detecting. Several government agencies, including the Department of Homeland Security, the National Institute of Standards and Technology, and the FDA, have been trying to develop a set of guidelines and recommendations to specify exactly that, but no clear solution has emerged yet.

One of the fundamental issues has been the difficulty in defining what defects need to be detected and at what rates. However, that doesn’t take away from the fact that static source analysis has been proven to be an extremely effective way to solve many issues that software developers face.

With so many choices and no standards, a new problem arises: How do you pick a static analysis tool that is right for your organization?

Since you will be instituting a new practice for your developers, you also need to make sure you overcome any initial adoption resistance. After all, static analysis (SA) tools are designed to expose programmers’ bugs and make them public. No one likes to hear how bad their coding is. Even the most accurate SA tools will get a negative emotional response at first, once a programmer realizes that an automated tool was “smarter.”

Finally, it is hard to directly measure positive effects of static analysis. Instead, you know it is working by the lack of negative things, and can only be sure of it in retrospect. Because of this delayed response, programmers don’t get the instant gratification that will motivate them to use the tool all the time, even though to get the greatest benefit of SA tools, they should do so. A certain amount of discipline will be required to make sure your SA tools are used all the time by everyone.

These issues are the essential questions we will try to answer in the rest of this paper.

Steps to Success
The process of adopting static source analysis in an organization is not easy. Static source analysis will offer different benefits to different organizations, depending on their needs. For instance, a company that consists of only a few developers and is looking to really crack down on the security of their software will have very different needs than a company that has dozens of developers and is looking to boost their release cycle efficiency.

The process to adopting static source analysis can be broken down into five distinct steps:

  • Step 1: Ask yourself: Do I need static source analysis?
  • Step 2: Develop evaluation criteria
  • Step 3: Evaluate Trial Versions
  • Step 4: Overcome Growing Pains
  • Step 5: Gain Continued Adoption

Step 1: Ask Yourself – Do I need static source analysis?
The whole point of this step is to answer that question, and prepare for possible Step 2. If you realize that there will be very little to gain from any type of SA tool simply because the problems you have are unrelated to what static analysis can do, you should spend your time and energy on things that are more important. However, keep an open mind in finding out what your problems truly are.

For some of the questions it may seem that you need to do extensive statistical studies and institute all sorts of bureaucracy. This is not so – you can get most of the information just by sampling a few things. Remember, what you are looking for is large, obvious problems. Sampling a dozen or so data points should reveal those big problems pretty quickly. Otherwise, it is not worth your time.

These are the questions you should ask yourself:

Are any of the following problems of great concern: software reliability, security, slow release cycles, buggy software in the hands of customers?
An eager salesperson might look for an obvious “yes” to profoundly exclaim “you need static source analysis!” However, you really should figure out how much of a problem these things are, and what is causing them. For instance, if slow release cycles are a problem, is that because quality assurance (QA) and developers spend a lot more time finding and fixing software bugs than you originally predicted, or is something else the problem (e.g., the server that all of the tests run on keeps crashing)? If it’s the former, static analysis (SA) tools may help you. If it’s the latter, you might want to have a heart-to-heart talk with your system administrator and order some new servers.

Are software reliability and/or buggy software in the hands of customers important?
If software reliability is an issue, refer to your database of incidents. If you don’t have one, start one right now. It’s amazing what you can learn about your own software once you let other people use it and comment on it (a.k.a complain). See what types of problems are most common. If bugs in software rank very high, figure out what types of problems are causing it and how they are being resolved.

Is security important?
This one is harder to detect through a database of incidents. If you have had security problems reported, that on its own is not good. But if you are the type of person who sees the silver lining in everything, at least the good news is that you know you have security problems that you should be solving.

If you haven’t had security problems reported, that pretty much doesn’t mean anything. You need to look at whether there is any potential of having your software attacked, and what you are doing to prevent it. For instance, if your source base is large, and no one in your organization truly understands how it all ties together, chances are you may have code paths that have security holes in them. Using static analysis may be a good idea. Security is a much larger topic, and static analysis is one part of a much larger set of methods that you need to institute when designing and implementing secure software.

What are the types of bugs that you commonly see?
While SA tools can differ greatly in the types of defects that they detect and how well they do it, there is a common set of problems that most of the tools will be able to detect. Most of them will detect things like NULL-pointer dereferences, resource leaks, uninitialized variables, buffer overflows, etc. If you have no idea what those things mean, find some whitepapers and brochures that SA tool providers will be more than eager to give you. Alternatively, just Google for it – these are problems that C/C++ programmers face all the time, with volumes of text having been written on the subjects.

What has been the most expensive bug you have encountered?
This is a loaded question, because it forces you to quantify the importance of bugs. Every product will have that one component that everyone knows is riddled with bugs. It was probably developed as a prototype over a weekend by an engineer who long ago left the company, and today it’s just a quagmire of half-baked ideas that by some miracle still makes it into the release. Chances are, the perception your engineers have is that this component is in serious need of improvement and should be at the top of the priority list, while your customers probably don’t even know it exists or have learned how to get along without it. By knowing which bugs are actually causing you to lose money, making your customers upset, and slowing down releases, you can make much more informed decisions about which bugs actually need to be addressed. Then look to see if the costly bugs match defects that SA tools can detect.

Have you tried static analysis tools before (and been disappointed)?
There are many reasons why SA tools might not have worked for you in the past. Were the tools not good at finding defects? Did you have problems getting programmers to use them? What were those problems (false positive rate, execution time, portability, bad user experience)? How often would the SA tool be run (all the time, once per week, before release, never)? If you stopped using it, how long ago was it? Since then, did you detect any difference in software quality, speed of releases, etc.?

Remember, SA tools vary greatly in what they can detect, how they integrate with your build environment, and how fast they execute. If you decide based on the other questions that you need SA tools, how will you prevent the rejection from occurring this time around?

Step 2: Develop Evaluation Criteria
If you have gone through the first step and concluded that some sort of static analysis tools should be able to help you in significant ways, it is time to figure out what it is that you actually need from a solution.

The goal here is to come up with the evaluation criteria that you can use when later evaluating trial versions. The important thing to remember is that with static analysis, there is no such thing as “one size fits all.” The process of choosing it is very tightly coupled with the problems you are trying to solve. These are the factors you should consider:

Trial availability
If you can’t see the tool run on your code, you may as well stop here. The effectiveness of an SA tool is hard to measure through a check box list because static analysis can mean so many different things, and SA tools can vary a lot. There are no clear quantifiable measures to compare apples to apples. The best approach you can take is to give each tool a spin on your sources, and look for their effectiveness.

Most SA tool vendors will at least show you the tools in action on your code when they make a site visit, but some will not want to leave the results behind for fear of losing the sale (thinking that once you have the results, you would have no need to purchase the product). You should work with the vendors to retain some piece of information that you can use for later analysis of your other factors.

Price
This one seems to be one of the most popular factors. It makes a certain amount of sense: since a lot of organizations will adopt SA tools to improve efficiency and get to market faster, it is important to figure out whether the cost will outweigh the benefit. Price might be of less importance to people looking for higher security, since static analysis in that case is not just a process optimization, but a quality enhancement.

Pretty much what this boils down to is doing the ROI (return-on-investment) study.

SA tool vendors might use ROI information from studies done with other companies to support the case for buying their product. You need to be very careful when you step into these waters. What worked for one organization might be completely different for another. You will have a much better idea of what applies to you if you do an ROI study on your own process.

Here is an example of where you may start: Many studies have attempted to estimate the cost to produce and deliver software to market. It is estimated that it cost $1,000 to develop each line of code on the space shuttle. Developing software to the stringent DO-178B Level A standard (for critical aircraft systems) has been estimated at hundreds of dollars per line. On the lower end, Red Hat Linux has been estimated to cost $33 per line of code. Other estimates generally place the cost of good quality commercial software in the range of $30 to $40 per line of code.

Yet other studies have estimated how this development time is spent. Most concur that more than half of software development time is spent debugging: identifying and correcting software defects. If we use an estimate of $30 per line of code in total cost, this means that organizations conservatively spend $15 to debug each line of code. If static analysis can eliminate a portion of that cost, you can come up with reasonable return gains.

If the task of doing ROI seems daunting, remember that all you need to do is collect some random sample information on the cost of fixing bugs. That should give you a pretty good approximation of where you stand.

Since price and cost can be measured in so many different ways, it is easy to get sidetracked and fall into a trap where you just do number crunching. Moreover, a good salesperson will always have a way to present their numbers better than the competition. Remember, since there is so much variation in the effectiveness of different SA tools, you should leave this item for later analysis. Some other factor might eliminate a choice a lot more easily.

True Positive Rate
Your SA tools should find a significant number of errors to make them worthwhile. The higher, the better. The whole reason you are considering adopting the tools is to solve the problems you identified in Step 1.

The best way to measure this is to turn an SA tool loose on some sample code taken directly out of your product. You may want to purposely reintroduce some of the bugs that you speculated during Step 1 an SA tool should be able to catch.

If you are considering SA tools for higher security, you will want this metric to be extremely high. You will want to see anything and everything that could possibly go wrong, as long as it is consumable, even at the expense of being given too much information.

False Positive Rate
This is a metric that is often cited, and even more often misused. It is supposed to be fairly black and white. The biggest problem in measuring this metric is figuring out what counts as a defect. For instance, some people will consider a defect a piece of code that, if executed, will definitely cause an error. Others will consider a defect any piece of code that could potentially cause an error, perhaps because the function does not correctly enforce an API assumption.

The only true way to measure this is, as with the True Positive Rate, to try it out on something of your own. The false positive rate is supposed to be low, which seems to be at odds with having a high True Positive Rate. Good SA tools will try to make them as orthogonal as possible (i.e. have both a high True Positive and a low False Positive rate).

Depending on your needs, you may afford to have a higher false positive rate for the sake of making sure that the true positive rate is also high – for instance, if security is an important issue.

Speed of execution
Not many SA tool vendors advertise this metric, and instead concentrate a lot more on their highly advanced defect detection algorithms in their marketing brochures.

The first thing to realize is that when it comes to fixing bugs, the longer it takes to detect a bug in the code after it was written, the longer it will take to fix the bug once it is detected. Why is that? For the simple reason that developers are human, and there is only so much information they can keep in their short-term memory.

Ideally, SA tools should be run every time any of your developers builds the program they are modifying. Why is this a problem with some SA tools? Well, most modern SA tools will take significantly longer to operate than it takes to do a build. Depending on the tool, the analysis time may be anywhere from about the same as the build time to tens or hundreds of times longer.

Inter-procedural and inter-module defect detection
Some modern commercially available SA tools will be able to detect defects that are caused by interactions of two or more pieces of code that span function and file boundaries. These are known as “inter-procedural” and “inter-module” capabilities.

A common set of problems will be caused by the interaction between two separate pieces of code that each on its own looks correct, but when combined on the same execution path can cause a defect. A common example is one function allocating some memory, which is later passed to another function that uses it.

This goes back to the types of bugs that are causing you costly errors. You may find that only tools that have these features will suffice in your case.

Configuration cost
Since SA tools operate on source code, they need to pretty much replicate what your build system is doing when analyzing source code. Some organizations have build systems that rival operating systems in complexity, and require a small army of engineers to maintain. The last thing you need is an SA tool that will take forever to configure and use effectively. So, look for tools that will integrate easily into what you currently have.

Another thing to consider about the configuration cost is your time spent evaluating trial versions. If you line up four or five trial versions and each one takes a week of someone’s time to configure, you just lost a month of engineering time.

Integration concerns, Ecosystem, and Future Directions
SA tools will always need to integrate into some larger build system. We have seen organizations use multiple other tools that analyze their sources, like the compiler (obviously), coding standard enforcers (like MISRA), unit test generators, code metric tools, run-time error detection, etc. You should consider what other tools you are currently using, or have plans to use, and how SA tools will work with them. If the tool you are considering offers things other than static analysis that also address your problems, you may be able to save yourself considerable integration costs.

Once you adopt static analysis tools, you will likely not want to change very often, due to the adoption learning curve, cost of new evaluations, risk factors, etc. You should plan on choosing a solution that will work for a long time, especially since static analysis tools are still being heavily developed and enhanced with each new release.

Step 3: Evaluate a Trial Version
The main goal in evaluating a trial version is to get as much information as possible for the questions raised in Step 2.

You will want to designate someone on your team who can run the analysis, especially if you are trying more than one version. You will want consistency between the trials.

Most SA tools will require some level of configuration to set up correctly. If you try them “out of the box”, you may be disappointed with the types of things that they detect. Oftentimes, tools will have constructs to “learn” what special functions do in your code. For instance, if a function never returns, SA tools will want to know about that. Or if a function allocates a piece of memory, but you don’t have source for it because it comes from a binary library, SA tools will want to know about that too.

It is best if one of your engineers, who is familiar with the code base, works on it directly with an SA tool vendor representative, who is familiar with the SA tool features. At this point, you should be noting what types of configuration costs you might expect in the future.

Once you get the report results, if you can’t retain them, at least try to analyze what types of true positive and false positive rates the tool has found on your sources. Are the problems that it is finding the types of problems that you identified in Step 1 as costly bugs?

If you are not happy with the types of defects that the tools find, don’t automatically give up. It is quite likely that there is a set of problems that you know are causing you a lot of trouble that the tool vendor hasn’t thought of before. Remember that static analysis has a lot more potential than what it can currently do, and that the tool vendor can help you design features into a custom product made for you. This is why it is important to come into this process knowing what issues you want to have addressed.

While you are doing the trial version, show the tools to other engineers in your organization, and see if others will be able to generate similar reports easily. Remember that the engineers have been getting their work done prior to this without using SA tools. If you make a decision without getting feedback from developers, you may find them resistant to changing their daily routine and embracing static analysis as a tool they regularly use. SA tools work best when they are used by everyone, all the time, and you don’t want to risk a large investment on something that will ultimately sit on the shelf and collect dust.

Step 4: Overcome Growing Pains
In theory, you have done everything you could so far to make sure that this is a success. However, there are still failure modes. It is unlikely that you will find static analysis tools ineffective in the types of defects they should detect if you followed the evaluation procedure. Still, it doesn’t hurt to verify that the tools can and do perform as the evaluation has shown.

However, it is far more likely that the human factor of resisting change will rear its ugly head. Engineers are creatures of habit, which is good because it makes them very efficient in handling complex tasks. It is sometimes truly amazing to sit behind an engineer and watch them use a tool like a debugger effectively. They will rip through a bug report by opening windows, jumping through files, connecting to the target, rerunning the code, setting breakpoints, viewing variables and registers, etc. That type of expertise doesn’t come overnight. It comes through months and years of using a product and basically living and breathing through it.

A new product, like an SA tool, will also take some time to learn and to live up to its full potential. If the product is not similar to what the engineers are currently used to using, the learning curve will be steeper. If the SA tool doesn’t “just work” on every build, people will stop using it. It is extremely important that once you get the tool delivered, efforts are made to enable it for everyone.

It is natural to expect resistance, after all. If all an SA tool does is tell an engineer how bad a programmer they are, why would they want to use it? It will probably take a while to realize that an SA tool is actually saving them time. This will require some discipline, and probably a bit of nagging from managers.

At first, you will want to schedule a phase that will get all of your sources building completely cleanly with SA tools turned on (i.e. no defects reported). That will consist of fixing real defects, possibly changing source code to avoid certain defects, and using various configuration options of the tools to work around other false positives.

The biggest obstacle you will face here is false positives. After using the tool for the first time, some engineers will skim through a few defects, find some that they think are false positives, and complain about how the tool is useless. Take caution: a lot of times, a defect that seems like a false positive after glancing at it quickly turns out to be an actual bug after you spend a few minutes on it. You should encourage everyone to give the tool an honest try, and include your local expert on the tool (the engineer who did the original evaluation) in the process as an advisor. This is where an SA tool with good code browsing utilities can make a huge difference, since it is often non-obvious why a defect is being reported.

When you come across actual false positive defects, you need to show engineers how to effectively work around them. Most SA tools will have a quick way to mark a defect as a false positive in the source code, and it will never come back to bug you again.

In other cases, you may encounter a false positive, but once you look at it, you realize that something you are doing is really a poor programming practice and you just might be “getting lucky” for not having a bug. For instance, you may be using an uninitialized variable, but only on some code path combinations that are not possible. However, if someone comes along and makes a change that enables that combination of code paths, it will definitely be a bug. It will take you a trivial amount of time to “fix” that potential problem, and you will be able to sleep easier at night knowing that you averted another potential disaster down the road.

The benefit you will gain through this step is both more reliable code and a baseline that will let you maintain clean builds going forward. Without this clean baseline, it will become increasingly difficult to remember which reported defects need to be addressed, and which ones are “ok to ignore.”

Step 5: Gain Continued Adoption
Remember that the positive effects of SA tools take some time to realize. Some engineers will immediately notice that they can develop bug-free code much faster – those are usually the types of engineers who are very diligent about testing code as they develop it. Other engineers might think that running SA tools all the time and fixing bugs as they go along is a useless waste of time. But if you consider how much code an average engineer will write in a day (maybe ten lines), and if you have one defect for every ten new lines of code (which would be extremely high), it would take that engineer another couple of minutes to fix a bug. That means that no more than two minutes a day, on average, will be spent addressing SA tool reports. There should be no excuses for not doing it. However, that requires discipline.

Consider the scenario that will happen if you wait days, weeks, or months between the times you run your SA tools. If someone shows you a bug a month after it was introduced, first you need to figure out who introduced it and file a report; then someone else will need to do the mental context switch to that piece of code, figure out how to fix it, make sure they don’t break anything else in the process, make the changes, validate them, and put them back into the product. The actual act of fixing the bug will take trivial time, but the overhead will take hours or even days.

If you run the tools only periodically, you will discover dozens of defects every time, and it will take a long time to clean off the underbrush and get back to the clean state because of the overheads.

In the ideal scenario, everyone runs the SA tools all the time. If someone shows you a bug minutes after you have introduced it, how long will it take to fix? A few seconds, maybe a minute? You will know it is your bug, you will have just written the sources, so it will be fresh in your head, and you will not have committed the sources back to the master repository, eliminating the need for extensive validations. Ultimately, you will save yourself and everyone else tons of time.
This is the ideal scenario, and the only way to achieve it is to make sure that SA tools are run all the time, every time someone makes a change.

You should look into automated ways to ensure that. First, all engineers should have an easy way to turn on SA tools every time they build. That also means that the SA tools need to be fast enough to not slow them down in their daily routines.

Second, if you have an automated build loop that runs every day, the SA tool should be a part of it, and should report problems the moment they are discovered. If there are any defects, you should not let the build complete successfully, which will be another way to force everyone to abide by the same standard.

Finally, you need to realize that all these steps will continually keep making your products more reliable, your releases faster, and your customers happier. You should look for the evidence of those benefits in retrospect, and share it among the staff. It will be a huge boost of confidence in the tools, which will ensure that your adoption process is a success.

Nikola Valerjev, Director of Engineering at Green Hills Software, is responsible for managing teams that plan, design, and develop new products, including the DoubleCheck static source analyzer. He also manages teams that evaluate new and existing solutions from the user perspective. He holds a bachelor of science and a master of engineering in computer science from Cornell University. He has been with Green Hills Software since 1997. This paper was presented at the Embedded Systems Conference as part of a class he taught there: “Guide to Adopting Static Source Analysis (ESC-528).”
