The importance of measuring things
In Jack Ganssle’s most recent blog, “Start collecting metrics. Now!,” he returns to a topic he has championed online, at the Embedded Systems Conference, and at the many developer meetings he’s been invited to: the importance of using benchmarks and metrics in embedded software development.
“In engineering,” he writes, “when someone says things are getting better, the logical rejoinder is ‘how much better and what are the error bands?’ There’s much we cannot understand in any useful way unless expressed numerically.”
Collecting numbers on software bugs and other areas of concern is hard to do and tough to interpret, and there may be little obvious correlation between the metric used and the quality of the software. But just because a correlation is not obvious does not mean there is no problem.
But none of that, writes Jack, is an excuse to measure nothing. And without metrics, all a developer is left with is a vague sense that things are better. “But a ‘vague sense’ is not engineering,” he writes.
It’s a tough job, but someone has to do it. To help you out, Jack has promised that in future columns he will come back and provide more detail on “some metrics that have been proven to be critical.” In the meantime, this week’s Tech Focus newsletter collects a number of his past blogs on various benchmarks and metrics. In addition, the newsletter includes several recent design articles on the topic that should be useful, including:
“Software performance engineering for embedded systems,” in which Freescale’s Robert Oshana details the importance of collecting software metrics of all types, the various ways to do it, and the kinds of performance enhancements that can be achieved.
“Debugger performance matters: The importance of good metrics,” where Anderson MacKay of Green Hills Software writes about the importance of collecting and assessing debug metrics and properly interpreting and implementing the results.
Some other design articles on Embedded.com that deal with the topic of software metrics and benchmarking include:
“… disciplined about embedded software development,” by Jack Ganssle
“Mean time between failure made easy,”
“Functional coverage metrics -- the next frontier,”
“Evaluating the performance of multi-core processors,” and
“Choosing and using benchmarks,”
If you want to delve deeper into software metrics methods and techniques, included in the Resources Around the Network section of this week’s Tech Focus newsletter are half a dozen recent technical journal and conference papers including:
“A three-layer model for software engineering metrics,” in which the authors describe a method for capturing the fundamentals of software metrics within a unifying framework and, via a toolbox, provide a menu of software metrics easily accessible for various applications.
“Finding causal links between software metrics and bugs,” in which the authors analyze some of the more robust methods of determining causal links between various software metrics and the occurrence of code bugs, as well as how those links can be used as reliable predictors.
Finally, I repeat here the same questions Jack asks in his blog: “What do you think? Do you measure anything, and if so, what?” I am sure Jack would like to hear from you, so go to his column and leave a comment or two.
I’d also like to hear from you, especially if you have something more to say and want more room in which to say it, in the form of a blog or a design article on Embedded.com. If so, contact me at the email address below, and I’ll work with you to make it happen.
Embedded.com Site Editor Bernard Cole is also editor of the twice-a-week Embedded.com newsletters as well as a partner in the TechRite Associates editorial services consultancy. He welcomes your feedback. Send an email to firstname.lastname@example.org, or call 928-525-9087.