Software engineering metrics we need - Embedded.com


Engineering is about numbers; firmware people need to collect metrics.

In a recent article (“Start collecting metrics now”) I stressed the importance of collecting metrics to understand and improve the software engineering process. It's astonishing how few teams do any measurement, which means few have any idea whether they are improving or getting worse, or whether their efforts are world class or constitute professional malpractice.

Two of the easiest and most important metrics are defect potential and defect removal efficiency. Capers Jones, one of the more prolific, and certainly one of the most important, researchers in software engineering, pioneered these measurements.

Defect potential is the total number of bugs found during development (start counting after the compiler gives a clean compile; ignore the syntax errors it flags) plus those found in the first 90 days after shipping. Every bug reported, every mistake corrected in the code, counts. Sum this even for the bugs Joe fixes while he is buried in the IDE doing unit tests. No names need be tracked; this is all about the code, not the personalities.

Defect removal efficiency is simply the percentage of those bugs removed prior to shipping. One would hope for 100%, but few achieve that.

These two metrics are then used to do root cause analysis: Why did a bug get shipped? What process can we change so it doesn't happen again? How can we tune the bug filters to be more effective?

Doing this well typically leads to a 10x reduction in shipped bugs over time. Here's some data from a client I worked with:


[Figure: shipped-bug counts per quarter for this client]

Over the course of seven quarters, they reduced the number of shipped bugs by better than an order of magnitude by doing this root cause analysis.

What are common defect potentials? They are all over the map. Shipping 50 bugs per 1,000 lines of code (KLOC) is malpractice. Disciplined teams routinely achieve 1 bug/KLOC, and world-class outfits 0.1/KLOC.

According to data Capers Jones shared with me, software in general has a defect removal efficiency of 87%. Firmware scores a hugely better 94%. We embedded people do an amazingly good job. But given that defect injection rates run 5 to 10%, a million LOC means 50,000 to 100,000 bugs injected, and even at 94% removal we're shipping with over 3,000 of them.
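The arithmetic behind these two metrics is simple enough to sketch in a few lines. Here is a minimal Python illustration using the numbers quoted above; the function names and the 480/20 example counts are my own, not from the article:

```python
def removal_efficiency(found_before_ship, found_after_ship):
    """Defect removal efficiency: the fraction of all defects
    (the defect potential) that were caught before shipping."""
    defect_potential = found_before_ship + found_after_ship
    return found_before_ship / defect_potential

def shipped_bugs(loc, injection_rate, removal_eff):
    """Estimate shipped bugs from code size, a defect injection
    rate (bugs per line of code), and a removal efficiency (0..1)."""
    return loc * injection_rate * (1.0 - removal_eff)

# A team that logs 480 bugs before shipping and hears about 20 more
# in the first 90 days has a DRE of 480/500 = 96%.
print(removal_efficiency(480, 20))

# A million LOC at the low (5%) injection rate, with firmware's
# typical 94% removal efficiency: roughly 3,000 shipped bugs.
print(shipped_bugs(1_000_000, 0.05, 0.94))
```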

What are your numbers? Do you track this, or anything?

Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems, helps companies with their embedded challenges, and works as an expert witness on embedded issues.

5 thoughts on “Software engineering metrics we need”

  1. You have to be careful how you measure and how you use those measurements.

    As soon as anyone has their performance or ego tied to measurements, they will start to game the system and you will not get the results you wanted.

    I've worked in places where these metrics…

  2. If I knew what metrics were I might understand this article. You'd have thought with Americans using such arcane measurement systems it would be imperialics – whatever it means…

  3. Actually, just recently I confirmed the importance of doing some metrics measurement. Software development is very much a process, and in any process you need a way to observe or measure it and a way to control or fine-tune it.
    A bug is found

  4. Jack, In the paragraph defining Defect Potential, it states '(tracked after the compiler gives a clean compile; ignore the syntax errors it finds)'. What definition of 'syntax error' is being used that would allow a clean compile?

  5. By this I mean that one should not count errors detected by the compiler. Compile the code, fix all of the problems it finds, and count all subsequent errors.

