In Jack Ganssle’s most recent blog, “Start collecting metrics. Now!” he comes back to a topic he has championed online, at the Embedded Systems Conference, and at the many developer meetings he’s been invited to: the importance of using benchmarks and metrics in embedded software development.
“In engineering,” he writes, “when someone says things are getting better, the logical rejoinder is ‘how much better, and what are the error bands?’ There’s much we cannot understand in any useful way unless expressed numerically.”
Collecting numbers on software bugs and other areas of concern is hard to do and tough to interpret, and there may be little obvious correlation between the metric used and the quality of the software. But the fact that a correlation is not obvious does not mean there is none.
None of these difficulties, writes Jack, is an excuse to measure nothing. Without metrics, all a developer is left with is a vague sense that things are better. “But a ‘vague sense,’ is not engineering,” he writes.
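To make the point concrete, consider one of the simplest software metrics in common use: defect density, or bugs per thousand lines of code. The sketch below is purely illustrative; the function name and the release figures are hypothetical and do not come from Jack’s column.

```python
# A minimal sketch of one common software metric: defect density,
# i.e. reported bugs per thousand lines of code (KLOC).
# All names and numbers here are hypothetical illustrations.

def defect_density(bug_count, lines_of_code):
    """Return defects per KLOC (thousand lines of code)."""
    return bug_count / (lines_of_code / 1000.0)

# Two hypothetical firmware releases:
release_1 = defect_density(bug_count=42, lines_of_code=28000)
release_2 = defect_density(bug_count=30, lines_of_code=31000)

print(f"Release 1: {release_1:.2f} defects/KLOC")  # 1.50
print(f"Release 2: {release_2:.2f} defects/KLOC")  # 0.97
```

With numbers like these in hand, “things are getting better” becomes a claim you can check: the density dropped from 1.50 to 0.97 defects/KLOC between the two hypothetical releases, and you can start asking how much of that change is noise.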
It’s a tough job, but someone has to do it. To help you out, Jack has promised that in future columns he will return with more detail on “some metrics that have been proven to be critical.” In the meantime, this week’s Tech Focus newsletter collects a number of his past blogs on various benchmarks and metrics he’s used. The newsletter also includes several recent design articles on the topic that should be useful, including:
“Software performance engineering for embedded systems,” in which Freescale’s Robert Oshana details the importance of collecting software metrics of all types, the various methods by which to do it, and the kinds of performance enhancements that can be achieved.
“Debugger performance matters: The importance of good metrics,” in which Anderson MacKay of Green Hills Software writes about the importance of collecting and assessing debug metrics and properly interpreting and implementing the results.
Some other design articles on Embedded.com that deal with the topic of software metrics and benchmarking include:
“Getting disciplined about embedded software development,” by Jack Ganssle
“Mean time between failure made easy,”
“Functional coverage metrics — the next frontier,”
“Evaluating the performance of multi-core processors,” and
“Choosing and using benchmarks.”
If you want to delve deeper into software metrics methods and techniques, the Resources Around the Network section of this week’s Tech Focus newsletter includes half a dozen recent technical journal and conference papers, among them:
“A three-layer model for software engineering metrics,” in which the authors describe a method for capturing the fundamentals of software metrics within a unifying framework and, via a toolbox, provide a menu of software metrics easily accessible for various applications.
“Finding causal links between software metrics and bugs,” in which the authors analyze some of the more robust methods of determining causal links between various software metrics techniques and the occurrence of code bugs, as well as how these links can be used as reliable predictors.
Finally, I repeat here the same questions Jack asks in his blog: “What do you think? Do you measure anything, and if so, what?” I am sure Jack would like to hear from you. So go to his column and leave a comment or two.
I’d also like to hear from you, especially if you have something more to say and want more room to say it in the form of a blog or a design article on Embedded.com. If so, contact me at the email address below, and I’ll work with you to make it happen.
Embedded.com Site Editor Bernard Cole is also editor of the twice-a-week Embedded.com newsletters, as well as a partner in the TechRite Associates editorial services consultancy. He welcomes your feedback. Send an email to , or call 928-525-9087.