Let's doff our hats to show a moment of respect for Ada, a language whose promises were huge, yet that mostly failed in the embedded market.
Though some language lawyers delight in bashing technical aspects of Ada, to me its greatest merit was the nitpicking behavior of the compilers. The rule of thumb was “if you can make the damn thing compile it will probably run.” Meanwhile legions of C programmers are, at this very moment, debugging mixed-up = and == constructs, tracking down each failed malloc() and hunting for null pointer dereferences.
We C programmers manage to seed nearly an order of magnitude more bugs into our code than those working in Ada. It seems logical to use a tool that forces us to generate code that's correct, rather than to crank lots of buggy stuff out fast.
Fact is, C developers have access to a huge array of tools and techniques that can provide many of the benefits of Ada's infernally picky compilers. Decades of studies and experience confirm that code inspections, for instance, find more defects faster than any debugging strategy. So surely you inspect all new code?
Probably not. Most developers eschew inspections, citing the trouble of rounding up a group of reviewers, an aversion to yet more meetings, or plain fear that shining the cold light of day into the tangled cobwebs of our source files would be mightily embarrassing.
Surely, then, you use an array of static analysis tools, products that delve deep into the code, exploring the jungle of tangled calls and variable relationships? For example, I imagine you augment the C compiler's meager syntax checker with the full-throated roar of Lint's steroid-enhanced analysis (see www.gimpel.com and www.splint.org).
If not, why not?
Many of us advocate coding to a firmware standard. Yet it's tough to ensure we're not violating some obscure rule. Why risk myopia by squinting through reams of source listings? Do you use a tool like Codewizard or QA-C to automate standards compliance checks?
Brian Kernighan said “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.”
Some developers tell me that static analysis tools are merely a crutch for the clueless, arguing that careful thought beats automation. Though it's true that any tool can be misused, the argument taken to its logical conclusion suggests we ignore the compiler's syntax warnings. Stop using debuggers. Ignore the spell checker's screams of agony as you torture the lexicon. Just get it right the first time, every time.
Humans are flawed creatures. We make mistakes. Static analysis finds tough problems fast. If we're professional software engineers, isn't it our responsibility to exploit every tool and technique that leads to higher code quality and that shortens debugging?
What's your excuse?
Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. He founded two companies specializing in embedded systems. Contact him at . His website is .
Agreed, static tools can reduce bugs. But with a host of tools available, and with time-to-market pressure in markets that expect working products rather than perfect products (like CE devices), do we have all the time and money to invest in all these tools? My experience is that only a few are practical to use, and some are not really worth the time invested! Once we compile our code with ZERO errors, ZERO warnings, and strict compilation options, that eliminates the need for running other tools like, say, lint. Again, my experience suggests that code inspection well done is bugs well thrashed. Static tools will not thrash out such bugs!
Rely on strict compilation and mandated code inspection.
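One hedged reading of what “strict compilation options” might look like with gcc (the flags shown are real gcc options, but this particular set is an illustration, not the commenter's actual list):

```shell
# Make the compiler itself as picky as possible: all common warnings
# on, warnings promoted to errors, implicit conversions and shadowed
# variables flagged.
gcc -std=c99 -pedantic -Wall -Wextra -Werror \
    -Wconversion -Wshadow -Wstrict-prototypes -c module.c
```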
– Saravanan T S
Good point. Unfortunately, time-to-market (TTM) is an excuse that's good every time.
We use a versatile tool which does McCabe, Halstead, MISRA and other analyses, flowcharts, and several lint-like checks. It is called DA-C, from
– Bruce Casner
Don't forget that the compiler itself is the definitive static analyzer, since it's the one actually compiling the code. If you have the source for your compiler, and can manage to familiarize yourself with it, you can modify it to generate a wealth of debugging output. This can be handy for narrowing down problem conditions that may result from undesired type conversions, improper casts, and such.
Generating a custom diagnostic dump and writing a script to sequentially bring each occurrence up in your favorite editor can really pinpoint problem code areas quickly.
– Mark F. Haigh
Mark's point is not quite the whole story. There are plenty of forms of static analysis that a compiler _cannot_ (and indeed _should not_) attempt.
Microsoft's SLAM (now called the Static Driver Verifier), for example, does extremely deep model checking and theorem proving on device driver code – not something I want my compiler to attempt!
Other languages add redundant design and specification information in the form of annotations – these allow a static checker to verify all sorts of properties that a compiler can't – this approach is taken, for example, by Splint, Cyclone, ESC/Java2, and SPARK.
– Rod Chapman
(Disclaimer – I am one of the designers of SPARK…)
The static analysis tools brought up here in Jack's article are still only “light” compared to some other tools.
Look at the Polyspace verifier or the stack depth tools from AbsInt, for example. Real static analysis that does much more than even advanced Lint tools.
Sure, these are expensive tools. But so is fixing the bugs post facto. For serious software development, a few $10k for a license should not be a problem.
– Jakob Engblom
Amen, brother Jack! But do you know of a tool that will help me do the Jedi mind trick on my new boss so we can implement these tools throughout our organization? (“You *will* perform static analysis before debugging. You *will not* ignore its output.”)
I have found that making the case for these tools is difficult for the first project at a new company. Other developers resent anything perceived to be looking over their shoulder. And managers often agree, even for projects that are pegging the chaos meter. What gives?
Usually, if I stick to my guns, management will see the light after about six months. But wisdom in this area always seems to be particularly hard-won.
– John Hopson