I follow a number of companies that offer tools to embedded developers. We humans are tool builders; we engineers in particular create and use tools ranging from the physical (e.g., a wrench or logic analyzer) to the ethereal (software like compilers). Bereft of tools we'd be utterly unable to do our jobs. Just think of our helplessness when the power fails! We stare dumbly at each other for a minute or two before going out for ice cream, or a drink if it's late afternoon.
SmartBear Software offers a number of tools to all sorts of software developers. Their $489 Code Collaborator is an intriguing program that supports lightweight code reviews. Lightweight, because the traditional code inspection advocated by Michael Fagan and Thomas Gilb doesn't fit the needs of some teams, but the benefits of inspections are so profound that even the smallest outfits must take advantage of the technique.
Now SmartBear has a book on the subject which summarizes the results of 2,500 reviews of 3.2 million lines of code at Cisco. Using Code Collaborator, which automatically generates and preserves metrics, they were able to learn a lot about lightweight reviews in a real-world scenario. Too many studies are set in academic communities using undergraduate students on toy projects.
The authors rightly point out that Fagan, Gilb, et al. promote a highly formalized process that we're not supposed to tune much. Yet a Fagan inspection simply isn't possible in a two-person shop. And sometimes it's hard to match the Fagan process to modern agile methods. Indeed, in eXtreme Programming, for better or for worse, the review takes place in real time as a pair of developers crank code sharing a single machine.
The book refers to a number of studies, some of which are relatively obscure. For instance, did you know that when reading a function, developers repeatedly return to look at variable definitions? The implication is that short-term memory doesn't hold a lot, so wise teams will insist that all functions fit on a single page. Then it's easy to glance up at the declarations without shuffling through paper or screens.
The Cisco study showed a tremendous variability in inspection rates, for a lot of reasons. But engineers achieved the best results when inspecting at about 300 lines of code per hour or less. And after about an hour, review effectiveness plummets. We get tired.
Expect to find about 15 defects per hour.
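Taken together, those two rates imply a defect-detection density. The following back-of-the-envelope calculation is my own illustration, not a figure reported in the book:

```python
# Rates quoted in the article; the derived density is an inference,
# not a number from the Cisco study itself.
REVIEW_RATE_LOC_PER_HOUR = 300   # recommended upper bound on inspection speed
DEFECTS_FOUND_PER_HOUR = 15      # typical defect discovery rate

# At 300 LOC/hour and 15 defects/hour, reviews surface roughly
# 15 / 300 = 0.05 defects per line, i.e. 50 per thousand lines.
defects_per_kloc = DEFECTS_FOUND_PER_HOUR / REVIEW_RATE_LOC_PER_HOUR * 1000
print(f"Implied detection density: {defects_per_kloc:.0f} defects per KLOC")
# → Implied detection density: 50 defects per KLOC
```

Reviewing faster than that upper bound would spread the same hour of attention over more code, which is presumably why effectiveness drops off.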
Authors who “annotate” and explain the code before the review have fewer mistakes. It's not clear from the book what “annotate” means or how it differs from decent commenting. But clear explanations to someone else, presumably in written form, make the author think more deeply and thus find his or her own problems before the review takes place.
It's a very well-written 164-page book that's a fascinating read. Even better, “Best Kept Secrets of Peer Code Review” is free! Order it online from SmartBear. It's a physical volume, not a PDF, which is a pain to read on-screen and annoying to print. Yes, it promotes their product, but the authors wisely relegated the sales hype to the last chapter. And do read that section carefully, too. Any tool that can help improve your processes and reduce defect rates is worth investigating.
What do you think? Have you read the book? Do you do code reviews of any sort?
Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at . His website is .
The book is only free in the USA.
$9 is not much, but it is annoying.
This is not a smart way to encourage non-US customers.
– Colin Smith
I have not read the book (yet), but I always have a lot to say on this topic. I agree that Fagan inspections do not fit all environments well. In fact, I would argue that the fatigue and repetition in such a process actually make reviewers less likely to catch more subtle problems.
However, I don't like the idea of making functions fit on one page. Breaking up complex modules, such as large finite state machines, is a more complex task than keeping the definitions handy, and the resulting code may be nested in such a way that the overall complexity goes up.
I do like “smaller” functions, and in a review I'll always note blocks I think should be pulled out. In C++ I like definitions in blocks they are used in rather than at the top.
In addition to the formal review that captures metrics, I am a huge advocate of pulling in an expert (e.g., in my case a software security expert, or maybe a performance expert) to look at the code for certain characteristics that a general review might not find. This can be a very informal process.
And finally, even more than code reviews I find a great way to cut down on defects is to have someone other than the author create unit tests. I have worked in shops that require the unit test to be completed by the author before the review, and that never made sense to me.
In fact, I like the first review to take place with the comments only.
I ordered the book (a great idea to promote their product by the way) and I'm anxious to read it.
– Eric Uner
I've read/heard: “wise teams will insist that all functions fit on a single page.”
So to improve productivity, we should rotate our screens 90 degrees.
– Tim Flynn
UPDATE TO MY PREVIOUS COMMENT
I contacted SmartBear and found them to be very helpful; they provided prompt answers to my questions and were more than happy to deal with a non-US customer.
I was willing to pay the $9 out of my own pocket. The hassle of processing this type of purchase, even in a small to medium-sized company, is just too much of a pain.
Thanks to the people of SmartBear for their help.
– Colin Smith