Code: Getting it right
How good is your code?
Most of us have no quantitative idea. A few outfits measure bug rates, but, as has often been observed, an apparent absence of bugs does not mean the code lacks defects.
While safety-critical code is often better tested, and sometimes better designed, than average industrial software, testing cannot prove the absence of bugs. According to Adams, E.N., "Optimizing preventive service of software products," IBM Journal of Research and Development, 28(1), pp. 2-14, a third of all software faults take more than 5,000 execution-years to manifest themselves. That's encouraging for those of us not planning to challenge Methuselah, but it graphically illustrates the inadequacy of testing.
Some operating systems are certified to extremely high levels of reliability and/or security. To my knowledge, none has been completely proven using the most rigorous formal methods (some, like Green Hills' INTEGRITY, certified to EAL6+ against a particular protection profile, come close). Until now. According to a news item on their site, NICTA's Secure Embedded L4 microkernel has been completely proven correct using formal methods. That's quite an achievement.
But more interesting is the stunning cost of constructing the proofs. The OS comprises 9,300 lines of code, of which 7,500 have been verified. The cost of verification: 25 to 30 person-years of work! At a loaded cost of $200K per person-year, that's over $5 million, or about $700 per line of code. Since most firmware runs $20 to $40 per line of code (a figure that covers the entire cost of the project), formal verification costs roughly twenty times as much as writing the code in the first place.
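The back-of-the-envelope arithmetic behind those figures can be sketched in a few lines. The $200K loaded cost is the assumption stated above; the verified line count and effort range are from NICTA's report:

```python
# Rough cost model for NICTA's verification effort.
VERIFIED_LOC = 7_500            # lines actually proven correct
PERSON_YEARS = (25, 30)         # reported effort range
LOADED_COST = 200_000           # assumed loaded $/person-year

low, high = (py * LOADED_COST for py in PERSON_YEARS)
cost_per_loc = low / VERIFIED_LOC       # low-end cost per verified line

# Typical firmware runs $20-40/LOC for the whole project; use the midpoint.
TYPICAL_PER_LOC = 30
ratio = cost_per_loc / TYPICAL_PER_LOC  # multiple of typical development cost

print(f"total: ${low/1e6:.0f}M-${high/1e6:.0f}M")   # total: $5M-$6M
print(f"per line: ${cost_per_loc:.0f}")             # per line: $667
print(f"vs. typical firmware: {ratio:.0f}x")        # vs. typical firmware: 22x
```

Even the low end of the range lands near $700 per verified line, about twenty times what the code itself costs to write.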
Actually, $700/LOC is cheaper than one might think. Apparently, Green Hills spent about $1,000/LOC to achieve their EAL6+ validation, which is not as strong a guarantee as the proof NICTA claims. NICTA itself believes EAL6 certification would run about $10,000/LOC. That's quite a range; even at the low end it's gobs of cash.
They found 144 bugs while formally proving the OS. That's a defect rate of about 2%, which suggests very little testing was done before the proof began.
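For the curious, that 2% figure works out as follows (a quick sketch, assuming the 7,500 verified lines as the denominator):

```python
# Defect density uncovered by the proof effort.
BUGS_FOUND = 144
VERIFIED_LOC = 7_500

bugs_per_loc = BUGS_FOUND / VERIFIED_LOC    # fraction of lines with a bug
bugs_per_kloc = bugs_per_loc * 1_000        # the usual bugs-per-KLOC metric

print(f"{bugs_per_loc:.1%} of lines, or {bugs_per_kloc:.0f} bugs/KLOC")
# 1.9% of lines, or 19 bugs/KLOC
```

Nineteen bugs per thousand lines is in the range typically seen in code that has been compiled but not yet seriously tested.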
So: how good is your code? How do you ensure it's close to being right?
(Update: Green Hills tells me INTEGRITY was evaluated using formal methods, so my statement that their validation is not as strong as NICTA's proof is incorrect.)
(Editor's Note: Jack's Embedded Poll question this week is "How do you do bug metrics?" To vote, go to the Embedded.com Home Page.)
Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at firstname.lastname@example.org. His website is www.ganssle.com.