Making embedded software better

As a fan of Jack Ganssle's Break Points column on Embedded.com, I read his columns regularly, and even go back and reread older ones, for perspective and insight not only into the technologies involved but into the thinking process necessary to build embedded hardware and software so that the final system operates safely and reliably.
Particularly interesting to me as an editor (and thus a generalist concerned with the big picture) are his columns relating to 1) bringing more discipline and structure to the software process (Quality is Job One and Deep Agile), and 2) the kinds of metrics available for measuring code quality and complexity (Taming software complexity and Perfect Software).
As a regular reader and re-reader of his contributions, I have become so sensitized to these issues that I have set up various network-based search services, such as Google, along with a variety of more dedicated tools, to monitor the Web for articles and news about these topics.
Recently several of these search queries turned up a reference to the same book: "Making Software: What Really Works, and Why We Believe It," edited by Andy Oram and Greg Wilson. It is a compendium of contributions from a range of software engineering specialists in academia and business, writing about software development as an engineering process.
At about 525 pages with 30 articles, there is a lot to read, but I am slowly working my way through the various contributions. It helps that I purchased an ebook version and downloaded it to my Kindle, where in free moments I can read a bit (or listen to it using the Kindle's text-to-speech feature).
A list of some of the topics covered is indicative of its breadth: Why is it so hard to learn to program? Do we need more complexity metrics? How effective is test-driven development? Open source versus proprietary software. Modern code review. The evidence for design patterns. The art of collecting bug reports. Where do software flaws come from?
No matter what the specific focus of their contributions, all of the writers in this book address in various ways a common set of questions: How do people learn to program? What is the best way to teach programming? Does the choice of programming language affect productivity? Can the quality of code be measured? What are the best ways to predict, let alone find, the location of software bugs? Is it more effective to design code in detail up front or to evolve a design over time in reaction to the accretion of earlier code? Is software developer accreditation practical?
Several of the contributors focus on software complexity and ways to measure it as one avenue for answering some of these questions. The metric most familiar to embedded developers is cyclomatic complexity, which, as noted earlier, is a topic Jack Ganssle revisits from time to time and which has been the subject of a few design articles on Embedded.com.
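For readers who have not worked with the metric, cyclomatic complexity can be approximated as the number of decision points in a routine plus one. The sketch below is my own rough illustration (not code from the book or from Ganssle's columns), using Python's standard ast module to count branching constructs:

```python
import ast

# Node types that create an additional independent path through the code.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                  ast.And, ast.Or, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe cyclomatic complexity: decision points + 1."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

code = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(code))  # two if-branches -> complexity 3
```

Production tools count a few more constructs (comprehension guards, boolean short-circuits in context, and so on), but the idea is the same: more branches mean more paths to test.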
Beyond that, there are many metrics – several hundred, according to contributors to this book – including Maurice Halstead's complexity measures. I am not sure yet how useful some of these approaches would be to embedded software developers, but Halstead's approach is intriguing. It draws on the fundamentals of information theory and generates complexity estimates based on a number of easily measured aspects of the code: the number of distinct operators and operands, and how easy they are to discriminate.
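To make those counts concrete: Halstead's "volume" measure multiplies a program's total length (all operator and operand occurrences) by the base-2 log of its vocabulary (the distinct operators and operands). The following is my own minimal sketch, not an implementation from the book, using Python's standard tokenize module to classify tokens:

```python
import io
import keyword
import math
import tokenize

def halstead_volume(source: str) -> float:
    """Rough Halstead volume: total length N times log2 of vocabulary n."""
    operators, operands = [], []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.OP or keyword.iskeyword(tok.string):
            operators.append(tok.string)   # punctuation and keywords
        elif tok.type in (tokenize.NAME, tokenize.NUMBER, tokenize.STRING):
            operands.append(tok.string)    # identifiers and literals
    n = len(set(operators)) + len(set(operands))   # distinct vocabulary
    N = len(operators) + len(operands)             # total program length
    return N * math.log2(n) if n else 0.0
```

For the expression `x = a + b` this counts two distinct operators (`=`, `+`) and three distinct operands (`x`, `a`, `b`), giving a volume of 5 × log2(5), roughly 11.6. Real Halstead tooling draws the operator/operand boundary more carefully, but even this crude version shows how the metric grows with both size and vocabulary.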
Also addressed in the book is the trade-off between proprietary and open source software such as Linux as it relates to code quality and software security. The common wisdom among proprietary software's adherents is that code developed according to carefully detailed plans and goals is better than open source code developed on a much more freeform and ad hoc basis.
Two contributors conclude from their data analysis that there is no difference, especially where security is concerned. I suspect that much of the debate over which is best is moot at this point, as most hardware and software companies, in the embedded industry at least, do both. So if the discipline and focus that come with a corporate setting are a factor, they would apply to all of a company's software projects, whether open source or proprietary.
If this book has a drawback, it is that most of the contributions are written not by participants in the software development process but by academic and corporate specialists in statistics, data mining, sociology and behavioral psychology, looking at the issues from the outside.
For a complete picture, I think the perspectives of both outsiders and insiders should be considered. Many of you who are on the inside, working in the trenches, have perspectives that might bring clarity to the issues raised by the authors of this book. Any takers? If you want to contribute your thoughts and analysis on these issues, feel free to call me with your ideas for articles or blogs, or leave a brief comment below.
Embedded.com Site Editor Bernard Cole is also editor of the twice-a-week Embedded.com newsletters as well as a partner in the TechRite Associates editorial services consultancy. He welcomes your feedback. Send an email to firstname.lastname@example.org, or call 928-525-9087.