Know Your Defect Numbers

Do you know how many defects will be injected into the next product during development? Or how many of those your team will fix before releasing it to the customer?

If not, why not?

In this month's issue of Crosstalk, the always interesting Capers Jones writes: “All software managers and quality assurance personnel should be familiar with these measurements because they have the largest impact on software quality, cost, and schedule of any known measures.”

Jones is one of the world's authorities on software engineering and software metrics. In the current article he makes it clear that teams that don't track defect metrics are working inefficiently at best.

He suggests we track “defect potentials” and “defect removal efficiency.” Though the article defines the former term, a better description comes from his article “Software Benchmarking” in the October 1995 issue of IEEE Computer: “The defect potential of a software application is the total quantity of errors found in requirements, design, source code, user manuals, and bad fixes or secondary defects inserted as an accidental byproduct of repairing other defects.”

In other words, defect potential is the total of all mistakes injected into the product during development. Jones says the defect potential typically ranges between two and ten per function point, and for most organizations is about the number of function points raised to the 1.25 power.
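To make the arithmetic concrete, here's a rough sketch in C. The 1,000 function point figure is just an invented example, and the exponent is Jones's rule of thumb rather than a precise model:

#include <math.h>

/* Jones's approximation: defect potential is roughly (function points) ^ 1.25 */
double defect_potential(double function_points)
{
    return pow(function_points, 1.25);
}

/* For a hypothetical 1,000 function point product this works out to
   about 5,600 latent defects injected during development. */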

Function points are only rarely used in the embedded industry; we think in terms of lines of code (LOC). Though the LOC metric is hardly ideal, we do our analysis with the metrics we have. Referring once again to Jones' work, his November 1995 article “Backfiring: Converting Lines of Code to Function Points” in IEEE Computer claims one function point equals, on average, about 128 lines of C, plus or minus about 60.
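The conversion itself is trivial; just remember that 128 is an average, and this little sketch assumes your C looks like Jones's average C:

/* Backfiring: Jones's average of about 128 C statements per function point */
double function_points_from_loc(double lines_of_c)
{
    return lines_of_c / 128.0;
}

/* A hypothetical 50,000-line C program works out to roughly 390 function points. */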

“Defect removal efficiency” tells us what percentage of those flaws will be removed prior to shipping. That's an appalling 85% in the US. But in private correspondence he provided me with figures that suggest embedded projects are by far the best of the lot, with an average defect removal efficiency of 95%, if truly huge (one million function point) projects are excluded.
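Stringing the three numbers together gives a back-of-the-envelope estimate of how many bugs reach the customer. The 50,000-line product below is hypothetical, and the ratios are Jones's published averages, not measurements from any particular project:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double loc = 50000.0;                          /* hypothetical 50 KLOC C product        */
    double function_points = loc / 128.0;          /* backfiring: about 390 function points */
    double potential = pow(function_points, 1.25); /* defect potential: about 1,700 defects */
    double removal_efficiency = 0.95;              /* typical embedded figure, per Jones    */
    double shipped = potential * (1.0 - removal_efficiency);

    printf("Estimated defects shipped: %.0f\n", shipped);   /* roughly 85 */
    return 0;
}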

Jones claims only 5% of software organizations know their numbers. Yet in the many hundreds of embedded companies I've worked with, only two or three track these metrics. Since defects are a huge cause of late projects, it seems reasonable to track them. And companies that don't track defects can never know if they are best in class or the very worst, which, in a glass-half-full way, suggests lots of opportunities to improve.

What's your take? What metrics does your company track?

Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at . His website is .
