Software for dependable systems

Jack Ganssle, November 01, 2009

The average developer reads one trade journal a month and just a single technical book a year. This implies that most of us feel the industry isn't changing much, or that we don't care to stay abreast of the latest developments. I shudder to think the latter, and know for a fact the former just isn't true.

Another excuse is that too many tomes are so dry that even a gallon of Starbucks' most potent brew won't keep the eyes propped open. And sometimes wading through hundreds of pages yields very little new insight.

But recently I came across one of the most thought-provoking software books I've read in a long time. At 131 pages, it's not too long, and a PDF is available on the web (www.nap.edu/catalog.php?record_id=11923). Titled Software for Dependable Systems--Sufficient Evidence?, the book was edited by Daniel Jackson, Martyn Thomas, and Lynette Millett and is the product of the Committee on Certifiably Dependable Software Systems. Not only is this volume utterly fascinating, it's incredibly well-written. I had trouble putting it down.

I often rant about poor quality code haunting us and our customers. Yet software is one of the most perfect things humans have created. Firmware, once shipped, averages a few bugs per thousand lines of code. But software is also one of the most complex and fragile of all human creations. In school, and in most aspects of life, a 90% is an A. In software, a 99.9% is, or may be, an utter disaster.

Perfection is not a human trait. Software is made by people. So can software ever be perfect?

Maybe not. But there's no question it has to meet standards never before achieved by Homo sapiens. Software size has followed a trajectory parallel to Moore's Law, and complexity grows faster than size. The projects we'll be cranking out in even a decade will dwarf anything made today and in many cases will be in charge of much more dangerous systems than now. Think steer-by-wire in hundreds of millions of cars, or even autonomous driving down Route 95.

Software for Dependable Systems tackles the question of how we can know whether a system, and in particular its software, is dependable. When we let code loose on an unsuspecting public, how much assurance can we offer that the stuff will run correctly, all of the time?

Currently, engineers building safety-critical applications typically use standards such as DO-178B or IEC 61508 to guide the development process. These are prescriptive approaches that mandate certain aspects of how the software gets built. For instance, at the highest level of DO-178B, MC/DC (Modified Condition/Decision Coverage) testing is required. MC/DC aims to ensure that the code is thoroughly tested. Seems like a great idea, but there's little evidence about how effective it really is.

So why has software for avionics been so successful? Some believe that the safety culture engendered by companies employing very detailed and difficult processes leads to a company-wide intense focus on making things right.

The agile community promotes people over process. Most certification standards take the opposite tack. Software for Dependable Systems stresses the importance of both process and people. But the book goes further and expresses the conviction that software will always be of unknown quality--which is scary for a safety-critical application--unless there's positive proof it is indeed correct.

The book makes a number of suggestions, all of which are valuable. But its most important message is a three-pronged strategy for evaluating the system. Note the use of the word "system": the book continually stresses that software does not exist in isolation; it's part of a larger collection of components, both hardware and human. A program that functions perfectly is utterly flawed if it demands superhuman performance from the user--or even human-level performance in a high-stress situation. I was reminded of David Mindell's Digital Apollo, which describes how the spacecraft, ground controllers, and astronauts were a carefully designed single integrated system, and one of the biggest problems faced by the engineers was balancing the role of each of those components in the larger Apollo structure.
