Capers Jones is one of the most prolific software researchers around. He has a vast collection of metrics from many thousands of real-world projects. His book The Economics of Software Quality is rather dry but full of fun facts. He’s careful to point out that the data has huge standard deviations, but it does paint some interesting pictures about the nature of software engineering.
Chapter 2 is called “Estimating and Measuring Software Quality.” Though all of the numbers are interesting, those for requirements are especially so.
Mr. Jones is adamant that we shouldn’t use lines of code (LOC, or KLOC for thousands of lines of code) as a metric. He prefers function points. While it’s hard to dispute his arguments in favor of function points, few practicing developers understand them, which makes the metric a barrier to communication. Most sources figure one function point is around 100-120 lines of C code, so here I’ve converted his numbers to C code using 100 LOC = 1 function point.
Let’s look at his numbers for the size of requirements. For an application of 10 KLOC, developers typically get 115 pages of requirements, which are 95% complete. That’s roughly a page of requirements per 100 LOC, or something like one line of requirements per two lines of code. Of course, some percentage of those requirements will be graphical, but his numbers suggest that 75% of requirements are text. Does that mirror your experience? That’s a lot of text for a couple of lines of code.
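A quick sanity check of that arithmetic (my figures, not his; the 50-lines-per-page density is an assumption, since Jones doesn’t give one):

```python
# Back-of-envelope check of requirements density for a 10 KLOC app.
# The 50-lines-per-page figure is an assumption, not from Jones.

LINES_PER_PAGE = 50

app_size_loc = 10_000   # application size
req_pages = 115         # Jones: typical requirements volume at 10 KLOC

pages_per_100_loc = req_pages / (app_size_loc / 100)
loc_per_req_line = app_size_loc / (req_pages * LINES_PER_PAGE)

print(f"{pages_per_100_loc:.2f} pages of requirements per 100 LOC")
print(f"one requirements line for every {loc_per_req_line:.1f} LOC")
# -> 1.15 pages per 100 LOC; one requirements line per ~1.7 LOC
```

That works out to about 1.15 pages per 100 LOC and one requirements line per 1.7 lines of code — consistent with the rough “page per 100 LOC” and “line per two LOC” figures above.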
But it gets worse as applications grow in size. At 100 KLOC figure on 750 pages of requirements, which will be only 80% complete. At 1 million lines of code there will be 6,000 pages of probably dreary, conflicting, and poorly specified requirements, comprising just 60% of what is expected to be delivered. In other words, those 6K pages specify little more than half of the desired functionality. No wonder big systems are delivered so late. Mr. Jones makes the scary point that at 5 million LOC it would be impossible for a single person to read the requirements in one lifetime!
Small systems don’t experience much requirements churn; for projects of 10 KLOC, figure on 1% growth/month, or about 225 LOC of change over the duration of the project. At 1 million LOC that bumps to 1.25% growth/month, which adds up to almost 300 KLOC of extra code over the life of the project.
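A sketch of how those growth rates add up (my arithmetic, not Jones’s — I’ve assumed simple rather than compound monthly growth, and the 24-month schedule for the big project is an assumption chosen because it reproduces his roughly 300 KLOC figure):

```python
# Back-of-envelope churn estimate. Simple (non-compounding) monthly
# growth; the 24-month schedule is an assumption, not Jones's number.

def requirements_churn(size_loc, monthly_rate, months):
    """Total requirements growth, in LOC-equivalents, over a project."""
    return size_loc * monthly_rate * months

# 1 MLOC project growing 1.25%/month over an assumed two-year schedule:
print(f"{requirements_churn(1_000_000, 0.0125, 24):,.0f} LOC of growth")
# prints "300,000 LOC of growth"
```

Note that a bigger project suffers twice: the growth rate is higher *and* the schedule is longer, so the absolute churn balloons.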
What about requirements defects? A 10 KLOC system will have 75 of them, of which 8 will typically be delivered to the customer. At 1 million LOC that jumps to over 10,000 requirements defects, 2,000 of which will wind up in the user’s hands.
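Running those counts through a quick calculation (mine, not Jones’s; I’ve rounded “over 10,000” down to 10,000) shows that big projects hurt twice over — both the defect density and the fraction escaping to the field rise with size:

```python
# Requirements-defect density implied by Jones's counts. "Over 10,000"
# is rounded down to 10,000 here.

cases = {
    10_000:    (75, 8),          # size_loc: (total defects, delivered)
    1_000_000: (10_000, 2_000),
}

for size, (total, delivered) in cases.items():
    density = total / (size / 1_000)   # defects per KLOC
    escape = delivered / total         # fraction reaching users
    print(f"{size:>9,} LOC: {density:.1f} defects/KLOC, "
          f"{escape:.0%} delivered")
```

That’s 7.5 defects/KLOC with about 11% delivered for the small system, versus 10 defects/KLOC with 20% delivered for the million-line one.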
Now, these numbers are for all sorts of software projects. He points out that embedded projects experience only 20% of the defect rates he presents. That’s still 400 delivered requirements defects for a big project. Of course, there will be plenty of other bugs in the final product from other phases of development, but those numbers are fodder for a different column.
One of the rationales for agile methods is, as Kent Beck puts it, that everything changes all of the time. One can’t know, it’s reasoned, what the customer wants, so it makes sense to deliver early and incrementally. I’ve always agreed with the conclusion, but not necessarily with the thesis. Though there are exceptions, in my experience most embedded projects can get a decent, if imperfect, set of requirements early in the project. Admittedly, it can be hard to elicit them, but “hard” is no excuse for abdicating our responsibility to work diligently to clarify our goals.
Given that firmware has only 20% of the requirements defects experienced by other sorts of software, two things jump out. First, we’re doing a heck of a job! Second, the notion of using agile to elicit requirements probably doesn’t make sense in this space (though implementation may benefit from agile ideas). Traditional approaches seem to be working pretty well, and determining requirements in an agile, incremental manner demands an awful lot of customer participation. eXtreme Programming practitioners require an on-site customer; in recent years some have suggested that an entire customer team be co-located with the developers to wring out desired product functionality. That’s unrealistic for many projects.
As regular readers know, I think careful engineering requires the use of metrics to understand both what we’re building and how we’re building it. Do you track anything about requirements, such as size, churn, or defects?
Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges, and works as an expert witness on embedded issues. Contact him at email@example.com. His website is www.ganssle.com.