A window into software QA in the automotive industry.
As we were preparing this month's issue for publication, the U.S. National Highway Traffic Safety Administration released an enormous report on its investigation into unintended acceleration in Toyota cars. It all seems very familiar, if you remember the Audi 5000. New electronic throttle control reaches the market. Reports of failure emerge from a few users. The vendor denies any problem. A few engineers quietly report having reproduced the problem. But an intensive, publicly funded investigation finds nothing.
What makes this report particularly interesting is that the NHTSA called in an evaluation team from NASA to do the heavy lifting. And that team included a software evaluation group. While the hardware folks were shaking, baking, and irradiating cars and car parts, the software team had at the Engine Control Module code for the four-cylinder 2005 Camry—all 280K lines of ANSI C. The team's report (www.nhtsa.gov/UA) could be a case study for Mark Pitchford's cover story.
NASA's team applied static source-code analysis, formal logic model checking, and algorithm analysis through simulation. The report states “The team's experience is that there is no single analysis technique today that can reliably intercept all vulnerabilities, but that it is strongly recommended to deploy a range of different leading tools.”
For code analysis, the team used Coverity, CodeSonar, and Bell Labs' Uno to identify “common coding defects and suspicious coding patterns.” They also used CodeSonar to compare Toyota's code against a Jet Propulsion Lab coding standard.
For model checking, the team used open-source Spin and Swarm. Here the tale gets more interesting. To use a formal model checker, you first have to write formal models. The team decided to build models only for those software modules they believed could be culprits—so the formal analysis depended upon human judgment of possible fault modes.
The algorithm analysis began, once again, with building models, this time in MATLAB. The process started with reading Toyota documentation and talking with Toyota engineers, then progressed to analyzing the source code, and finally to testing the models against actual Camrys. Once the NASA team was satisfied with the models, they explored failure scenarios in Simulink and checked delays with AbsInt aiT.
Some conclusions suggest themselves. First, there are no silver bullets: effective debugging means using everything you've got. Second, even when it's grounded in exhaustive and formal techniques, an evaluation is circumscribed by the evaluators' beliefs about the possible behavior of the system. Third, there is no certainty. Despite Toyota's great care in developing its code, NASA's analysis found significant errors, including serious underestimates of delays in the multiprocessing system. But the investigation could not link those errors to any proposed mechanism for unintended acceleration. Contrary to what you probably read in the papers, the NASA Executive Summary stated “Because proof that the ETCS-i caused the reported UAs [unintended accelerations] was not found does not mean it could not occur.”
Ron Wilson is the editorial director of Embedded.com, EDN, and the Designlines at UBM Electronics. You may reach him at firstname.lastname@example.org.