The problem with static analyzers - Embedded.com

The problem with static analyzers

Static analyzers offer a lot of capability. They could easily go a lot further.

At the recent Embedded Systems Conference in Silicon Valley I had the chance to talk to several vendors of static analyzers. These are the tools that evaluate your program to find potential runtime problems, like variables going out of bounds or dereferences of null pointers.

Static analysis is a relatively new idea with little market penetration so far, but it offers the chance to rid a program of a large class of bugs long before loading a debugger. Though many of these tools are still somewhat immature, I think that over the course of the next decade most of us will consider them an essential part of how we build systems.

However, a pattern is emerging that makes me think the current crop of tools are missing a valuable opportunity. Consider the following snippet:

int divide(int value1, int value2)
{
   return value1/value2;
}

Very simple code, of course. At least some of the current static analyzers will return a message saying that a divide by zero is possible, though the tool cannot predict whether such a case will ever actually occur.

To be fair, the tools are pretty smart and will not emit an error if the code looks like:

int divide(int value1, int value2)
{
   if(value2 != 0) return value1/value2;
   return 0;   /* defined result for the guarded-out case */
}

Others do deeper analysis and will look at how the function is called, but all can get tripped up, since many cases are simply not analyzable. For instance, if a calculation is based on a reading from a peripheral, none of the commercial tools can predict the possible input ranges. So they'll issue a warning, and it's up to the developer to ensure that the code will be safe.
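As an illustration of what "up to the developer" looks like in practice, here's a minimal sketch. The peripheral read (`read_adc`) is invented for this example and stubbed out so the code is self-contained; on real hardware it would be a register access whose range no tool can bound. A guard clause is what makes the divide provably safe:

```c
#include <stdint.h>

/* Hypothetical peripheral read -- stubbed here so the sketch is
   self-contained. On real hardware this would read a device register
   whose value the analyzer cannot bound. */
static uint16_t read_adc(void)
{
    return 0;  /* worst-case value the tool cannot rule out */
}

/* Scale a reference voltage by an ADC reading. The guard makes the
   division safe even though read_adc() is un-analyzable. */
int scaled_value(int reference)
{
    uint16_t reading = read_adc();
    if (reading == 0)
        return 0;   /* defined fallback for the dangerous case */
    return reference / (int)reading;
}
```

With the guard in place, a tool that tracks value ranges can see the divisor is never zero and suppress its warning.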

Why don’t the tools take this a bit further?

If there’s a chance that an error will occur if an un-analyzable input assumes some value, perhaps the tool should generate a new version of the source file annotated with an assertion that tests for the potential error condition. Pour the code into the tool and let it generate:

int divide(int value1, int value2)
{
   assert(value2 != 0); // WARNING! Possible error
   return value1/value2;
}
The upside is that the code will fail if the possible error does occur, and it’s a signal to the developer that the tool has found a limitation on the range of values a variable is allowed to assume.
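The same idea extends well beyond division. As a sketch (the lookup table and the unbounded index source are invented for this example), a tool could just as easily guard an array access whose index it cannot bound:

```c
#include <assert.h>

#define TABLE_SIZE 16
static const int table[TABLE_SIZE] = { 0 };

/* The index comes from somewhere the analyzer cannot bound, such as
   a field in a received message. An injected assertion turns a silent
   out-of-bounds read into an immediate, visible failure. */
int lookup(int index)
{
    assert(index >= 0 && index < TABLE_SIZE); // WARNING! Possible error
    return table[index];
}
```

As with the divide, the assertion both catches the error at runtime and documents the range the variable is allowed to assume.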

This is but a trivial example, yet I suspect there are a vast number of situations where a static analyzer cannot provide a definitive answer but could generate the appropriate assertions to ensure that if bad things occur at runtime, an exception will be thrown.

Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at . His website is .

1 thought on "The problem with static analyzers"

  1. Static analyzers were not new in 2011 when Jack wrote this, and the concepts were not new when Lint was originally developed for C. Part of the reason we need them is that C leaves so much undefined. The IBM compiler for PL/1 gave error messages that obv

