Quality is Job One

- January 24, 2006


In his article "Quality is Now Development Job One," Peter Coffee reports that more and more companies are gauging software release dates by quality metrics rather than schedules. Even Microsoft announced that Vista will be held until its quality milestones are met.

Wow.

We know that poor quality – bugs – kills schedules. The article claims that fixing defects consumes 80% of software development costs. Capers Jones showed, in a study of 4,000 projects, that bugs are the biggest cause of missed schedules.

Yet in the embedded world quality is usually an afterthought. Though I often see projects held up while developers try to deal with a hundred-page bug list, it's almost unheard of for management to define quality metrics that will dictate the schedule. The closest too many organizations come to managing quality is to hold a bug-deferral meeting, long after the deadline went swooshing by, to decide the precise level of lousiness they're willing to ship.

It's rare to see management even talk about quantitative quality metrics. Yet we know it's cheaper to build quality in than to retrofit it at the end.

After all, what's the fastest way to fix a bug?

Don't put it in there in the first place!

That, of course, means we need better approaches than the usual heroics. Standards, inspections, disciplined construction of test suites, complexity analysis and all of the usual well-known techniques are essential ingredients of a project that will delight the customer.

December's Crosstalk has an article from the always-interesting Watts Humphrey. In "Acquiring Quality Software" he lists six quality principles. The first is that if a customer (or management) does not demand a quality product, he or she won't get one.

Well, duh. But saying "… and make it bug-free" is a meaningless statement. Bug-free is not a metric and is not measurable (see Humphrey's paper for more rational metrics). That's especially true with the poor testing endemic to this industry. Lame tests just won't find most bugs, so the product appears much better than it is. Admittedly, the IT folks have a much simpler problem than we do, as many embedded systems defy automated overnight testing. Yet crummy unit tests and poorly-thought-out system checks abound.

Contrast this with the operation of a well-run factory. Graphs posted around the floor show productivity, inventory levels, and quality goals versus actual performance. While it is hard to manage the quality of a creative endeavor like firmware engineering, we must. If we don't, we'll be shipping million line programs harboring tens of thousands of defects.

How does your organization manage and measure firmware quality?

Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at jack@ganssle.com. His website is www.ganssle.com.


How about this one? The test group devises a test suite and test plan that look stringent yet reasonable on paper. Said test plan does find and identify the bugs.

The developer team then racks its collective brains to fix the bugs and pass the tests.

Only to find that the bug fixes cause the system to malfunction/crash in the customer use case!

- Ken Wada


Last week I wrote a little something about Test Driven Development and how it could help reduce the risk of taking compiler or third party code releases. Where TDD really shines is keeping bugs out of code from the start. "Don't put it in there in the first place!" Jack says.

I used to work with a programmer, let's call him Joe, who would hammer in a bunch of code, get it working, and release it for others to use. It was pretty well designed and usually worked well. Bugs would inevitably appear, sending Joe debugging for a couple of weeks. Finally, when his subsystem worked again, he would announce that the problem was just a typo, not a real problem. It still cost him two weeks, with many people waiting on the result, so I beg to differ: it was a real problem. Let's give him the benefit of the doubt and say it was a typo that caused the system to have a level 1 defect. Can we prevent those? I say we can prevent a lot of them with TDD.
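To make the "just a typo" scenario concrete, here's a minimal hypothetical sketch in C; the status codes and function names are invented for illustration. A single = where == was intended compiles on most toolchains yet inverts the logic, and one behavior test exposes it in seconds rather than after two weeks of debugging:

    #include <assert.h>

    enum { ERROR_NONE = 0, ERROR_FAULT = 1 };

    static int fault_latched;

    /* Intended behavior: latch a fault only when status == ERROR_FAULT.
       The typo below (= instead of ==) makes the condition always true,
       so every call latches a fault: a one-character level 1 defect. */
    static void check_status(int status)
    {
        if (status = ERROR_FAULT)       /* BUG: should be == */
            fault_latched = 1;
    }

    int main(void)
    {
        check_status(ERROR_NONE);
        assert(fault_latched == 0);     /* fails at once, exposing the typo */
        return 0;
    }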

When practicing TDD, the programmer writes an automated unit test that shows the expected outcome. Joe should be thinking: what behavior do I want? Write a test for that behavior, then make the system behave as the test says. For example, partially fill a circular queue with characters and see if they come out in the same order. Write another test where the queue has to wrap around, and make sure the right characters come out. Then do it again where the queue overflows, then underflows; you get the picture.
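Here's a rough sketch of what those tests might look like in C, using bare assert() rather than a unit test framework; the CircQueue type and its functions are hypothetical names invented for this example:

    #include <assert.h>
    #include <stdbool.h>

    #define QUEUE_SIZE 8

    /* A hypothetical fixed-size circular character queue. */
    typedef struct {
        char buf[QUEUE_SIZE];
        int  head;   /* next slot to write */
        int  tail;   /* next slot to read  */
        int  count;  /* characters currently stored */
    } CircQueue;

    static void queue_init(CircQueue *q) { q->head = q->tail = q->count = 0; }

    static bool queue_put(CircQueue *q, char c)
    {
        if (q->count == QUEUE_SIZE) return false;      /* full: refuse input */
        q->buf[q->head] = c;
        q->head = (q->head + 1) % QUEUE_SIZE;
        q->count++;
        return true;
    }

    static bool queue_get(CircQueue *q, char *c)
    {
        if (q->count == 0) return false;               /* empty: refuse output */
        *c = q->buf[q->tail];
        q->tail = (q->tail + 1) % QUEUE_SIZE;
        q->count--;
        return true;
    }

    /* Test 1: characters come out in the order they went in. */
    static void test_fifo_order(void)
    {
        CircQueue q; char c;
        queue_init(&q);
        queue_put(&q, 'a');
        queue_put(&q, 'b');
        assert(queue_get(&q, &c) && c == 'a');
        assert(queue_get(&q, &c) && c == 'b');
    }

    /* Test 2: head and tail wrap correctly past the end of the buffer. */
    static void test_wraparound(void)
    {
        CircQueue q; char c;
        queue_init(&q);
        for (int i = 0; i < QUEUE_SIZE + 3; i++) {
            assert(queue_put(&q, (char)('0' + i)));
            assert(queue_get(&q, &c) && c == (char)('0' + i));
        }
    }

    /* Test 3: overflow and underflow are refused, not silently corrupted. */
    static void test_overflow_underflow(void)
    {
        CircQueue q; char c;
        queue_init(&q);
        for (int i = 0; i < QUEUE_SIZE; i++)
            assert(queue_put(&q, 'x'));
        assert(!queue_put(&q, 'y'));     /* full queue must reject a put */
        for (int i = 0; i < QUEUE_SIZE; i++)
            assert(queue_get(&q, &c));
        assert(!queue_get(&q, &c));      /* empty queue must reject a get */
    }

    int main(void)
    {
        test_fifo_order();
        test_wraparound();
        test_overflow_underflow();
        return 0;                        /* silence means all tests passed */
    }

In a real TDD flow each test would be written first, watched to fail, and then made to pass; a framework such as Unity or CppUTest would typically replace the bare asserts.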

When programming in this style, the little mistakes are eliminated. Joe's typo is less likely to sneak into the system and take it down. For that matter, a logic error is less likely too. Debug time approaches zero. What do we do with the extra time on our hands? We get more functionality in place. Little bugs can cause big problems. My first TDD project resulted in only a few days of debug time over a six-month period. I know of TDD projects that do not have bug lists.

This activity also produces meaningful data. For example: the number of unit tests passing (this number should go up every week), unit test coverage (this number should continually grow, perhaps approaching 100%), and acceptance tests defined and passing (this measures business value completed).

- James W Grenning


While I was working on a GM powertrain project at Delphi, the control software got retested every week: a complete retest of the whole package, to ensure the weekly changes did not break anything.

My previous car was a 1999 MB C280 Sport sedan. One time the engine warning light came on but the car was fine (engine sound and power were good). The dealer replaced the radiator; it didn't solve the problem. Later they replaced engine control chips and the false alarm was gone. The engine control software was good, but the gas tank sensor hardware/software was at fault. My point is that they had the software for the important main functionality of controlling the engine done perfectly, but the peripheral (unimportant) sensor part was done poorly. I still miss that car, which I couldn't import to Canada because it didn't meet Canadian standards. (It was bought new in Minneapolis and started at -40° without hesitation. MB Canada was just protecting the local market, where prices are higher.)

In comparison, my brother's 318is engine stalled frequently at traffic lights during Winnipeg winters, costing him a lot to fix. Luckily for others, BMW doesn't sell that model anymore, but the point is they didn't test it enough before selling it in North America.

At one of my previous jobs, I was put in charge of bug tracking and firmware testing. While most people appreciated what I did, at least one of them didn't like it (I was too thorough). I was just doing my job, had no intention of putting blame on anyone, and it contributed to better firmware earlier. Overall, that medium-sized company did quite well.

To this day, I trust engines such as the Pontiac Grand Prix's and Mercedes-Benz's; I've worked on one and driven the other a lot. One problem with testing is that the firmware developer's ego might be too big, leading him to see testers as enemies. Another is that good testing is actually quite difficult.

- Patrick Wong


We've found our products may work fine according to a well-written test plan, but fail when tested by a test harness designed to root out anomalies caused not only by software, but also by incorrectly implemented or defective components, adverse customer environments, or unforeseen effects of EMI, for instance. The several prototypes that seemed to work well in the lab should work the same way when multiplied by thousands and deployed. The only way to best ensure this is through careful and thorough implementation of automated testing geared to stress the system to its most extreme theoretical limits.

- Matthew Staben


I like the sound of that TDD scheme. It reminds me of how I work. My brain is far too limited to debug whole systems of things at once, so I have always broken the grand scheme down into pieces I can grasp and worked on them until they function as envisioned. I do this for both software and hardware. I just didn't know it had a name.

- Don McCallum