Remember unintended acceleration? Here's what NASA should have examined in Toyota's software.
Software has become ubiquitous, embedded as it is into the fabric of our lives in literally billions of new (non-computer) products per year, from microwave ovens to electronic throttle controls. When products controlled by software are the subject of litigation, whether for infringement of intellectual property rights or product liability, it's imperative to analyze the embedded software (also known as firmware) properly and thoroughly. This article enumerates five best practices for embedded software source-code discovery and the rationale for each.
In February 2011, the U.S. government's National Highway Traffic Safety Administration (www.nhtsa.gov) and a team from NASA's Engineering and Safety Center (www.nasa.gov/offices/nesc/) published reports of their joint investigation into the causes of unintended acceleration in Toyota vehicles. While NHTSA led the overall effort and examined recall records, accident reports, and complaint statistics, the more technically focused team from NASA performed reviews of the electronics and embedded software at the heart of Toyota's electronic throttle control subsystem (ETCS). Redacted public versions of the official reports from each agency, together with a number of related documents, can be found at www.nhtsa.gov/UA.
These reports are very interesting in what they have to say about the quality of Toyota's firmware and NASA's review of the same. Of greater significance, however, is what they are not able to say about unintended acceleration. It appears that NASA did not follow a number of best practices for reviewing embedded software source code that might have identified useful evidence. In brief, NASA did not find a firmware cause of unintended acceleration, but its review also fails to rule out firmware causes entirely.
This article describes a set of five recommended practices for firmware source-code review, based on my experience as both an embedded software developer and an expert witness. Each recommendation considers what more could have been done to determine whether Toyota's ETCS firmware played a role in any of the unintended acceleration incidents. The five recommended practices are:
(1) ask for the bug list;
(2) insist on an executable;
(3) reproduce the development environment;
(4) try for the version control repository; and
(5) remember the hardware.
The relative value and importance of the individual practices will vary by type of litigation, so the recommendations are presented in the order that is most readable.
Ask for the bug list
Any serious litigation involving embedded software will require an expert review of the source code, which should be requested early in the process of discovery. Owners of source code tend to resist such requests strenuously, but judges routinely order (or the parties agree upon) procedures limiting access to the code: only certain named, pre-approved experts may review it, and only under physical security (often on a non-networked computer with no removable storage in a locked room).
Software development organizations commonly keep additional records that may prove more important or useful than a mere copy of the source code. Any reasonably thorough software team will maintain a bug list (a defect database) describing most or all of the problems observed in the software along with the current status of each (for example “fixed in v2.2” or “still under investigation”). The list of bugs fixed and known–or the company's lack of such a list–is germane to issues of software quality. Thus the bug list should be routinely requested and supplied in discovery.
Very nearly every piece of software ever written has defects, both known and unknown. Thus the bug list provides helpful guidance to a reviewer of the source code. Often, for example, bugs cluster in specific source files in need of major rework. To ignore the company's own records of known bugs, as the NASA reviewers apparently did, is to examine a constitution without considering the historical reasons for the adoption of each section and amendment. Indeed, a simple search of the text in Toyota's bug list for the terms “stuck” and “fuel valve” might yet provide some useful information about unintended acceleration.
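A keyword search of that kind is trivial to automate once the defect database has been produced. The sketch below assumes a hypothetical CSV export of such a bug list; the field names, sample rows, and keywords are invented for illustration and are not Toyota's actual records.

```python
# Minimal sketch: search a (hypothetical) defect-database export for
# keywords relevant to the issue being litigated. The CSV layout and
# field names ("id", "status", "summary") are illustrative assumptions.
import csv
import io

KEYWORDS = ("stuck", "fuel valve", "throttle")

def matching_bugs(csv_text):
    """Yield (bug_id, status, summary) for rows whose summary mentions a keyword."""
    reader = csv.DictReader(io.StringIO(csv_text))
    for row in reader:
        summary = row["summary"].lower()
        if any(keyword in summary for keyword in KEYWORDS):
            yield row["id"], row["status"], row["summary"]

# Invented sample export for demonstration:
sample = """id,status,summary
101,fixed in v2.2,Throttle position reading intermittently stuck at last value
102,open,Display backlight flickers at low temperature
"""

hits = list(matching_bugs(sample))  # only bug 101 matches
```

In a real review the export might be in another format entirely, but the point stands: the company's own defect records can be searched mechanically, and the resulting hits tell the expert where in the source code to concentrate.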
Insist on an executable
In software parlance, the “executable” program is the binary version of the program that's actually executed in the product. The machine-readable executable is constructed from a set of human-readable source code files using software build tools such as compilers and linkers. It's important to recognize that one set of source code files may be capable of producing multiple executables, based on tool configuration and options.
Though not human-readable, an executable program may provide valuable information to an expert reviewer. For example, one common technique is to extract the human-readable "strings" within the executable. The strings in an executable program include information such as on-screen messages to the user (such as "Press the '?' button for help."). In a copyright-infringement case in which I once consulted, several strings in the defendant's executable helpfully contained a phrase similar to "Copyright Plaintiff"! You may not be so lucky, but isn't it worth a try?
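The extraction itself is straightforward. Here is a minimal sketch of the technique, modeled on the Unix strings(1) utility: scan the binary for runs of printable ASCII characters of at least a minimum length. The sample blob is invented for illustration.

```python
# Minimal "strings" extraction: find runs of printable ASCII bytes
# (0x20-0x7e) of at least min_len characters in a binary image.
import re

def extract_strings(data, min_len=4):
    """Return printable-ASCII runs of at least min_len bytes, decoded to str."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [match.decode("ascii") for match in re.findall(pattern, data)]

# Invented binary blob with embedded text between non-printable bytes:
blob = b"\x00\x01Copyright Plaintiff\xff\x00Press '?' for help.\x02ok\x00"
found = extract_strings(blob)
# "ok" is shorter than min_len, so only the two longer strings are found
```

Real tools handle wide-character encodings and other subtleties, but even this simple pass over an executable can surface copyright notices, version strings, and diagnostic messages of evidentiary value.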
It may also be possible to reverse engineer or disassemble an executable file into a more human-readable form. Disassembly could be important in cases of alleged patent infringement, for example, where what looks like an infringement of a method claim in the source code might be unused code or not actually part of the executable in the product as used by customers.
Sometimes it's easy to extract the executable directly from the product for expert examination–in which case the expert should engage in this step. For instance, software running on Microsoft Windows consists of an executable file with the extension .EXE, which is easily extracted. However, the executable programs in most embedded systems are difficult, at best, to extract. Extraction of Toyota's ETCS firmware might not be physically possible. Thus the legal team should insist on production of the executable(s) actually used by the relevant customers.
Reproduce the development environment
The dichotomy between source code and executable code and the inability of even most software experts to make much sense of binary code can create problems in the factual landscape of litigation. For example, suppose that the source code produced by Toyota was inadvertently incomplete in that it was missing two or three source-code files. Even an expert reviewer looking at the source code might not know about the absent files. For example, if the bug the expert is looking for is related to fuel valve control and the code related to that subject doesn't reference the missing files, the reviewer may not notice their absence. No expert can spot a bug in a missing file.
Fortunately, there is a reliable way for an expert to confirm that she has been provided with all of the source code. The objective is simply stated: reproduce the software build environment and compile the produced source code. To do this it's necessary to have a copy of the development team's detailed build settings, such as makefiles, preprocessor defines, and linker control files. If the build process completes and produces an executable, it's certain the other party has provided a complete copy of the source code.
Furthermore, if the executable as built matches the executable as produced (actually, ideally, the executable as extracted from the product) bit by binary bit, it's certain that the other party has provided a true and correct version of the source code. Unfortunately, trying to prove this part may take longer than just completing a build; the build could fail to produce the desired proof for a variety of reasons. The details here get complicated. To get exactly the same output executable, it's necessary to use all of the following: precisely the same version of the compiler, linker, and each other build tool as the original developers used; precisely the same configuration of each of those tools; and precisely the same set of build instructions. Even a slight variation in just one of these details will generally produce an executable that doesn't match the other binary image at all–just as the wrong version of the source code would.
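The bit-for-bit comparison at the end of this process can be sketched simply. In practice the comparison runs over the rebuilt binary file and the produced (or extracted) binary file; the byte strings below are invented stand-ins for those images.

```python
# Minimal sketch: confirm that a rebuilt executable matches the produced
# executable bit for bit, by comparing lengths and cryptographic hashes.
import hashlib

def sha256_hex(data):
    """Hex digest of a binary image."""
    return hashlib.sha256(data).hexdigest()

def images_match(rebuilt, produced):
    """True only if the two binary images are identical, bit by binary bit."""
    return len(rebuilt) == len(produced) and sha256_hex(rebuilt) == sha256_hex(produced)

# Invented firmware images for demonstration:
original = b"\x7fELF\x01firmware-v2.1\x00" * 4
rebuilt_ok = bytes(original)                 # faithful rebuild
rebuilt_bad = original[:-1] + b"\x01"        # a single differing byte
```

As the article notes, a mismatch does not by itself prove the source code is wrong; a different compiler version or build flag will also change the output. A match, however, is very strong evidence that the produced source code is true and complete.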
Try for the version control repository
Embedded software source code is never created in an instant. All software is developed one layer at a time over a period of months or years in the same way that a bridge and the attached roadways exist in numerous interim configurations during their construction. The version control repository for a software program is like a series of time-lapse photos tracking the day-by-day changes in the construction of the bridge. But there is one considerable difference: It's possible to go back to one of those source code snapshots and rebuild the executable of that particular version. This becomes critically important when multiple software versions will be deployed over a number of years. In the automotive industry, for example, it must be possible to give one customer a bug fix for his v2.1 firmware while also working on the new v3.0 firmware to be released the following model year.
Consider, for the sake of discussion, that the executable version of Toyota's ETCS v2.1 firmware that was installed in the factory in one million cars around the world had an undiscovered bug that could result in unintended acceleration under certain rare operating conditions. Now further suppose that this bug was (perhaps unintentionally) eliminated in the v2.2 source code, from which a subsequent executable was created and installed at the factory into millions more cars with the same model names–and also as an upgrade into some of the original one million cars as they visited dealers for scheduled maintenance. In this scenario, an examination of the v2.2 source code proves nothing about the safety of the hundreds of thousands of cars still with v2.1 under the hood.
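An expert with access to both snapshots could mechanically identify exactly which files changed between the two versions, and so focus review where the fix (intentional or not) was made. The following sketch treats each snapshot as a mapping from file name to contents; the file names and code fragments are invented for illustration.

```python
# Minimal sketch: compare two source-code snapshots (e.g. v2.1 and v2.2)
# and report which files were added, removed, or modified.
import hashlib

def snapshot_diff(old, new):
    """Return (added, removed, modified) file-name lists between two snapshots."""
    old_names, new_names = set(old), set(new)
    digest = lambda text: hashlib.sha256(text.encode()).hexdigest()
    added = sorted(new_names - old_names)
    removed = sorted(old_names - new_names)
    modified = sorted(name for name in old_names & new_names
                      if digest(old[name]) != digest(new[name]))
    return added, removed, modified

# Invented snapshots for demonstration:
v2_1 = {"throttle.c": "int target = read_pedal();\n",
        "display.c": "draw();\n"}
v2_2 = {"throttle.c": "int target = clamp(read_pedal());\n",  # quietly changed
        "display.c": "draw();\n",
        "diag.c": "log();\n"}

added, removed, modified = snapshot_diff(v2_1, v2_2)
```

Modern version control systems produce exactly this kind of report between any two revisions, which is one more reason the repository, and not a single snapshot, is the more valuable discovery target.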
Gaining access through discovery to the entire version control repository containing all past versions of a company's firmware source code may be out of the question. For example, a judge in a source-code copyright and trade-secrets case in which I consulted would only allow the plaintiff to choose one calendar date and then receive a snapshot of the defendant's source code from that specific date. If the plaintiff was lucky, it would find evidence of its proprietary code in that snapshot. But the absence of that code from one specific snapshot doesn't prove the alleged theft didn't happen earlier or later in time.
There are some practical problems with examining an entire version control repository. It may be difficult to make sense of the repository's structure. And even if the structure can be understood, a thorough review of the major and minor versions of the various source-code files might take many times as long as reviewing a single snapshot in time. At first glance, many of those files would appear the same or similar in every version, but subtle differences could be important to making a case. To be productive with that volume of code, it may be necessary to obtain a chronological roadmap, such as a bug list or other produced documents describing the source code at various points in time.
Remember the hardware
Embedded software is always written with the hardware platform in mind and should be reviewed in the same manner. For example, it's only possible to properly reverse engineer or disassemble an executable program once the specific microprocessor (such as Pentium, PowerPC, or ARM) is known. But knowing the processor is just the beginning, because the hardware and software are intertwined in complex ways in such embedded systems.
Some features of the hardware may be enabled or active only when the hardware is in a particular configuration. For instance, consider an embedded system with a network interface, such as an Ethernet jack that is only powered when a cable is mechanically inserted. Some or all of the software required to send and receive messages over this network may not be executed until a cable is inserted. A proper analysis of the software needs to keep hardware-software interactions like this in perspective. Ideally, testing of the firmware should be done on the hardware as configured in exemplars of the units at issue, so it is useful to ask for hardware during discovery if you are not able to acquire exemplars in other ways. It's not clear from the redacted reports whether NHTSA's testing of certain Toyota Camrys was done using the same firmware version on exactly the same hardware as the owners who experienced unintended acceleration. Hardware interactions can be one of the most important considerations of all when analyzing embedded software.
Sometimes a bug is not visible in the software itself. Such a bug may result from a combination of hardware and software behaviors or multiprocessor interactions. For example, one motor-control system I'm familiar with had a dangerous race condition. The bug, though, was the result of an unforeseen mismatch between the hardware reaction time and the software reaction time around a sequence of commands to the motor.
Additional analysis required
As you can see, the review of embedded software can be complicated. This is partly because the hardware of each embedded system is unique. In addition, the system as a whole generally involves complex interactions between hardware, software, and user. An expert in embedded software should typically have a degree in electrical engineering, computer engineering, or computer science plus years of relevant experience designing embedded systems and programming in the relevant language(s).
The five best practices I've presented here are meant to establish the critical importance of making certain specific requests early in the legal discovery process. They are by no means the only types of analysis that should be performed on the source code. For example, in any case involving the quality or reliability of embedded software, the source code should be analyzed with static-analysis tools. This and other types of technical analysis should be well understood by any expert witness or litigation consultant with the proper background.
In the case of Toyota's unintended acceleration issues, I hope that expert review in the class-action litigation against Toyota will include these and other additional types of analysis to identify all of the potential causes and determine if embedded software played any role. Though government funds for analysis by NASA are understandably limited, transportation-safety regulators such as NHTSA should establish rules ensuring that future investigations are more thorough and that safety-related technical findings in litigation cannot be hidden behind the veil of secrecy of a settlement agreement.
Michael Barr is the author of three books and over fifty articles about embedded systems design, as well as a former editor-in-chief of this magazine. Michael is also a popular speaker at the Embedded Systems Conference, a former adjunct professor at the University of Maryland, and the president of Netrino. He has assisted in the design and implementation of products ranging from safety-critical medical devices to satellite TV receivers. You can reach him via e-mail or read more of what he has to say at his blog.