Verifying embedded software functionality: Why it’s necessary - Embedded.com

Verifying embedded software functionality: Why it’s necessary

Editor’s Note: In this four-part series, Abhik Roychoudhury, author of Embedded Systems and Software Validation, explains why it is important for embedded developers to learn about new techniques such as dynamic slicing, metric-based fault localization, and directed testing for assessing software functionality. In this Part 1: what must be done and how to achieve it.

Embedded software and systems have come to dominate the way we interact with computers and computation in our everyday lives. Computers are no longer isolated entities sitting on our desks. Instead, they are nicely woven and integrated into our everyday lives via the gadgets we directly or indirectly use—mobile phones, washing machines, microwaves, automotive control, and flight control.

Indeed, embedded systems are so pervasive that they perform the bulk of the computation today—putting forward “embedded computing” as a new paradigm to study. In this series, we focus on the validation of embedded software and systems, with the goal of developing embedded systems that have reliable functionality and timing behavior.

Not all embedded systems are safety-critical. On one hand, there are the safety-critical embedded systems such as automobiles, transportation (train) control, flight control, nuclear power plants, and medical devices. On the other hand, there are the more vanilla, or less safety-critical, embedded systems such as mobile phones, HDTV, controllers for household devices (such as washing machines, microwaves, and air conditioners), smart shirts, and so on.

Irrespective of whether an embedded system is safety-critical or not, the need for integrating validation into every stage of the design flow is clearly paramount. Of course, for safety-critical embedded systems, there is need for more stringent validation—so much so that formal analysis methods, which give mathematical guarantees about functionality/timing properties of the system, may be called for at least in certain stages of the design.

Our focus in this series is on what has been learned about software validation methods, and how they can be woven into the embedded system design process. Before proceeding further, let us intuitively explain some common terms that arise in validation—testing, simulation, verification, and performance analysis.

Testing refers to checking that a system behaves as expected for a given input. Here the system being checked can be the actual system that will be executed. However, note that it is checked only for a given input, not for all inputs. Simulation refers to running a system on a given input. However, simulation differs from actual system execution in one (or both) of the following ways:

• The system being simulated might only be a model of the actual system to be executed. This is useful for functionality simulation—checking the functionality of a system model on selected inputs before constructing the actual system.

• The execution platform on which the system is being simulated is different from the actual execution platform. This situation is very common for performance simulations. The execution platform on which the actual system will be executed may not be available, or it might be getting decided through the process of performance simulations. Typically, a software model of the execution platform might be used for performance simulations.
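
To make the second kind of simulation concrete, here is a deliberately tiny sketch of a performance simulation in C. It is our own illustration: the operation classes, cycle costs, and instruction stream are invented, not taken from any real processor. A software model of the execution platform assigns a cycle cost to each operation class, and "running" a recorded instruction stream through the model yields an execution-time estimate before any real hardware exists.

#include <stdio.h>

/* Toy software model of an execution platform: each operation class
   has an assumed cycle cost (numbers are illustrative only). */
enum Op { OP_ALU, OP_LOAD, OP_STORE, OP_BRANCH };

static const int cycle_cost[] = {
    [OP_ALU]    = 1,
    [OP_LOAD]   = 3,
    [OP_STORE]  = 2,
    [OP_BRANCH] = 2,
};

int main(void) {
    /* A recorded instruction stream for one selected input. */
    enum Op program[] = { OP_LOAD, OP_ALU, OP_ALU, OP_STORE,
                          OP_BRANCH, OP_LOAD, OP_ALU, OP_STORE };
    long cycles = 0;

    /* "Run" the program on the platform model, accumulating an
       estimate of the execution time in cycles. */
    for (unsigned i = 0; i < sizeof program / sizeof program[0]; i++)
        cycles += cycle_cost[program[i]];

    printf("estimated execution time: %ld cycles\n", cycles);
    return 0;
}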

Formal verification refers to checking that a system behaves as expected for all possible inputs. Because exhaustive testing is inefficient or even infeasible, verification may be achieved by statically analyzing a system model (which may be represented by a structure such as a finite-state machine).
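
To make the testing/verification distinction concrete, here is a minimal C sketch; the saturating_add8 function and its test values are our own illustration, not drawn from the text. Testing checks expected behavior on a few selected inputs, while the exhaustive loop checks all inputs—feasible here only because the input space has just 65,536 pairs. Realistic systems rarely admit such enumeration, which is why formal verification analyzes a model statically instead.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Saturating 8-bit addition: the result is clamped at 255. */
static uint8_t saturating_add8(uint8_t a, uint8_t b) {
    unsigned sum = (unsigned)a + (unsigned)b;
    return (uint8_t)(sum > 255u ? 255u : sum);
}

int main(void) {
    /* Testing: check behavior for a few selected inputs only. */
    assert(saturating_add8(100, 100) == 200);
    assert(saturating_add8(200, 100) == 255);   /* saturates */

    /* Exhaustive check: all 65,536 input pairs. Feasible only
       because the input space is tiny; in general this blows up,
       so verification must analyze the system statically. */
    for (unsigned a = 0; a <= 255; a++)
        for (unsigned b = 0; b <= 255; b++) {
            unsigned expect = a + b > 255u ? 255u : a + b;
            assert(saturating_add8((uint8_t)a, (uint8_t)b) == expect);
        }

    printf("all checks passed\n");
    return 0;
}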

Finally, we note that formal verification methods have conventionally been used for giving strict mathematical guarantees about the functionality of a system. However, to give strict guarantees about performance (for example, an upper bound on the execution time of a given piece of software), one needs to employ mathematical techniques for estimating performance. Such techniques often go by the name of performance analysis.

Formal verification in the auto industry
In order to see the possibilities and opportunities for integrating validation into embedded system design flows, we can look at the automobile industry. It is widely recognized that automotive electronics is a vast market, with more and more functionality in modern-day cars being software-controlled. Indeed, innovations in automotive software can bring about new designs, a point often articulated by car manufacturers themselves. By-now famous quotes such as “more than 90% of the innovation in a modern-day car is from the software” stand testimony to the importance of embedded software/systems in the design of a modern-day car.

Naturally, because of the importance of the various car components (brakes, airbags, etc.) functioning “correctly” during the driving of a car, rigorous validation of the hardware/software controlling these components is crucial. In other words, reliable and robust embedded system design flows that integrate extensive debugging/validation are a must.

To further appreciate the importance of validation in embedded systems for automobiles, we can delve deeper into the various components of a car that can be computer-controlled. Roughly speaking, these can be divided into three categories—engine features, cabin features, and entertainment.

Clearly, the engine features are the most safety-critical and the features related to in-vehicle entertainment are the least safety-critical. The engine features include critical features such as the brake and steering wheel; usually these features involve hard real-time constraints.

The cabin features include less critical (but important) features such as power windows and air conditioning. The entertainment or infotainment features include control of in-car devices such as GPS navigation systems, CD player, and in-car television, as well as communication between these devices.

Clearly, the computing component controlling the engine features (such as brakes) needs very rigorous validation—to the extent that the behavior of these computing components could be subjected to formal modeling and verification. For the cabin features, we at least need modeling and extensive testing of the computing components controlling the cabin features. For the infotainment features, we need performance analysis methods to ensure that the soft real-time constraints are satisfied.

Thus, as we can see from the discussion of the specific domain of automotive software, different kinds of validation are required for a complex embedded system. For the more safety-critical parts of the system, rigorous modeling and formal verification may be needed. For the less safety-critical parts, more extensive testing may be sufficient. Moreover, for the parts of the system controlling or ensuring real-time responses to/from the environment, detailed performance validation needs to be carried out.

Thus, the validation methods we employ can range from formal methods (such as model checking) to informal ones (such as testing). Moreover, the level of abstraction at which we employ the validation may vary—model-level validation; high-level implementation validation (where we consider only the inter-component behavior without looking inside the components); or low-level implementation validation (where we also look inside the system components). Finally, the criteria for validation may also vary—we may perform validation at different levels to check for functionality errors, timing errors, and so on.

Figure 5-1 below visually depicts the intricacies of embedded system validation. In particular, Figure 5-1a shows the different levels (model/implementation) and criteria (performance/functionality) of system validation.


Figure 5-1. Issues in functionality and timing validation of embedded systems.

Figure 5-1b illustrates the complications in functionality validation. For an embedded system that we seek to construct, we may design and elaborate it at different levels of detail (or different levels of abstraction). If we are seeking functionality validation, then the higher the level of detail, the lower the formality of the validation method.

Thus, for system design at higher levels of abstraction, we may try out fully formal validation methods. On the other hand, as we start fleshing out the implementation details of the system under construction, we may settle for more informal validation methods such as extensive testing. As opposed to functionality validation, the picture appears somewhat different for timing validation—see Figure 5-1c.

As is well understood, embedded systems often incorporate hard or soft real-time constraints on interaction of the system with its physical environment—or, for that matter, interactions between the different components of the system. Hence, timing validation involves developing accurate estimates of the “system response time” (in response to some event from the environment).
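
As a concrete illustration of measurement-based timing estimation, consider the sketch below. It is our own construction, assuming a POSIX platform; handle_event is a hypothetical stand-in for the system's reaction to an environment event. The harness measures the observed response time over many runs and reports the worst case seen. Note that such a measured worst case is only an estimate, not a guaranteed upper bound, which is precisely why static performance analysis methods are needed for hard real-time constraints.

#include <stdio.h>
#include <time.h>

/* Placeholder for the system's reaction to an environment event;
   a real system would do sensor reads, control computation, etc. */
static void handle_event(void) {
    volatile long x = 0;
    for (long i = 0; i < 100000; i++) x += i;
}

int main(void) {
    struct timespec t0, t1;
    double worst_ms = 0.0;

    /* Measure the handler many times and record the worst observed
       response time. This is an estimate, not a guaranteed bound. */
    for (int run = 0; run < 1000; run++) {
        clock_gettime(CLOCK_MONOTONIC, &t0);
        handle_event();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
        if (ms > worst_ms) worst_ms = ms;
    }
    printf("worst observed response time: %.3f ms\n", worst_ms);
    return 0;
}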

Clearly, as the details of the embedded system are fleshed out, we can develop more accurate timing estimates and, in that sense, perform more detailed timing validation. Thus, Figure 5-1 shows the issues in validating functionality versus validating timing properties—both of which are of great importance in embedded system design flows. Two different aspects are being highlighted here:

1 – Formal verification of functionality is better conducted at higher levels of abstraction. As we start considering lower level details, formal approaches do not scale up, and informal validation methods such as testing come into play.

2 – For performance validation, as we consider lower level details, our performance estimates are more accurate.

The reader should note that other criteria along which embedded system validation may proceed, such as estimating the energy or area requirements of a system, also have certain basic similarities with timing validation. As the system design is elaborated in more detail, we can form a better idea about its timing, energy, and area requirements.

Economics: the basic software quality driver
Let us illustrate the economic issues that drive interest in software testing and debugging. A report, “The Economic Impacts of Inadequate Infrastructure for Software Testing,” published in 2002 by the Research Triangle Institute and the National Institute of Standards and Technology (USA), estimates that the annual cost of an inadequate software testing infrastructure in the United States amounts to $59.5 billion—0.6% of the $10 trillion U.S. GDP.

Industrial studies on quality control of software have indicated high defect densities. An ACM Crosstalk article reports case studies that found, on average, 13 major errors per 1000 lines of code. These errors were found via slow human code inspection (at 195 lines per hour).

So, in reality, we can expect many more major errors. Nevertheless, let us conservatively fix the defect density at 13 major errors per 1000 lines of code. Now consider a software project with 5 million lines of code (the Windows Vista operating system is 50 million lines of code, so 5 million lines of code is by no means an astronomical figure).

Even assuming a linear scaling up of defect counts, this amounts to at least 65,000 major errors. Even if we assume that the average time saved in fixing one error with an automated debugging tool, as opposed to manual debugging, is 1 hour (a very modest estimate; often, fixing a bug takes a day or two), the time saved is 65,000 man-hours (about 1,477 work weeks, or roughly 30 man-years).

Clearly, this is a huge amount of time that a company can save, leading to more productive use of its manpower and savings of precious dollar value. Assuming an employee salary of $40,000 per year, the foregoing translates to $1.2 million in salary savings simply by using better debugging tools.
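
The back-of-the-envelope arithmetic above can be reproduced directly. The short program below is our own sketch; the 44-hour work week and 50-week work year are assumptions chosen so that the numbers line up with the figures quoted above.

#include <stdio.h>

int main(void) {
    /* Assumptions (ours, chosen to match the article's figures):
       44-hour work week, 50-week work year. */
    const double loc            = 5e6;     /* lines of code          */
    const double defects_per_k  = 13.0;    /* major errors per KLOC  */
    const double hours_per_fix  = 1.0;     /* hours saved per error  */
    const double hours_per_week = 44.0;
    const double weeks_per_year = 50.0;
    const double salary         = 40000.0; /* USD per man-year       */

    double errors    = loc / 1000.0 * defects_per_k;   /*  65,000   */
    double hours     = errors * hours_per_fix;         /*  65,000 h */
    double weeks     = hours / hours_per_week;         /* ~1,477 wk */
    double man_years = weeks / weeks_per_year;         /* ~30 yr    */
    double savings   = man_years * salary;             /* ~$1.2 M   */

    printf("%.0f errors, %.0f work weeks, %.1f man-years, $%.0f saved\n",
           errors, weeks, man_years, savings);
    return 0;
}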

A much bigger savings, moreover, comes from customer satisfaction. By using automated debugging tools, a software development team can find more bugs than via manual debugging, leading to increased customer confidence and an enhanced reputation for the company’s products. Finally, manual approaches are error-prone, and bugs left behind can have catastrophic effects in safety-critical systems.

Related Terminology
To clarify the terminology related to dynamic checking methods, let us start with the “folklore” definition of a software bug from Wikipedia:

A software bug (or just “bug”) is an error, flaw, mistake, “undocumented feature,” failure, or fault in a computer program that prevents it from behaving as intended (e.g., producing an incorrect result). Most bugs arise from mistakes and errors made by people in either a program’s source code or its design, and a few are caused by compilers producing incorrect code. A program that contains a large number of bugs, and/or bugs that seriously interfere with its functionality, is said to be buggy. Reports detailing bugs in a program are commonly known as bug reports, fault reports, problem reports, trouble reports, change requests, and so forth.

The conventional notion of a software bug is an error in the program that gets introduced during software construction. It is worthwhile to note that the manifestation of a bug may be very different from the bug itself. Thus, the main task in software debugging is to trace back to the software bug from its manifestation. A good debugging method will take in the manifestation of a bug and locate the bug itself. In case this sounds unclear, let us consider the following program fragment, marked with line numbers and written in Java style:

1. void setRunningVersion(boolean runningVersion) {
2.     if (runningVersion) {
3.         savedValue = value;
       } else {
4.         savedValue = "";
       }
5.     this.runningVersion = runningVersion;
6.     System.out.println(savedValue);
   }

Suppose this program is “buggy,” the bug being that the variable savedValue is set to a wrong value in line 4. However, the manifestation of the bug is different—the variable savedValue is printed in line 6, and that is where the bug is manifested. So, naturally, there is a “distance” between where the software error is and where it is observed (possibly via an output or a program crash).

As another example, consider the following program fragment, written in C style:

1.   a = 1;
2.   b = a;
3.   c = b;
4.   if (c) { v = 10; }
5.   else   { v = 20; }
6.   printf("%d", v);

Suppose the bug is in line 1, where variable a is set to a wrong value. Let us see how this bug will be manifested. The wrong value of variable a will be propagated to variable b—thereby “infecting” variable b. This wrong value will then be passed from variable b to variable c. Based on the wrong value passed to variable c, a branch or a decision will be made in line 4 and, in this case, the decision for the branch evaluation is wrong as a result.

Because of the wrong branch evaluation, the variable v is set wrong, and this wrong value is printed in line 6—the manifestation of the “bug” in line 1! So, as we can see, the bug in a program is usually quite different from its manifestation during program execution.

Now, what should a debugging method do? Of course, while testing the software, that is, running it against selected test cases, the programmer can see only the manifestation of the bug and not the bug itself! The task of a debugging method is to start from the manifestation of the bug and trace back to the bug itself.

So, in the preceding C program fragment, the observable error will be an unexpected value of variable v being printed. From here, the debugging method has to reason that (i) variable v was set in line 4 or line 5, (ii) the setting of variable v depends on a branch that is evaluated based on the value of c, (iii) the value of c depends on the value of b, and (iv) the value of b depends on the value of a.

Thus, the reasoning here uncovers a chain of dependencies starting from the observable error (line 6) in order to locate the error cause (in line 1). We now discuss the dynamic slicing method, which traverses an execution trace to uncover the program dependency chains behind an observable error. The program lines captured in these dependency chains are highlighted in a bug report, which is also called the “slice.” The programmer can then inspect the bug report to locate the probable error causes.
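
To preview the idea before Part 2, here is a deliberately simplified sketch of such backward reasoning; it is entirely our own construction, not the actual dynamic slicing algorithm. It records the execution of the C fragment above as a list of (line, defined variable, used variables) events and walks the list backward from the observed output, collecting lines 1, 2, 3, 4, and 6 as the slice.

#include <stdio.h>
#include <string.h>

/* One executed statement: source line, variable it defines
   (0 if none), and the variables it reads. Control dependence is
   folded into "uses" here to keep the sketch short. */
typedef struct {
    int line;
    char def;              /* variable defined, or 0 */
    const char *uses;      /* variables read         */
} Event;

int main(void) {
    /* Recorded trace of the C fragment above (our simplification:
       line 4 folds the branch on c and the assignment to v together). */
    Event trace[] = {
        {1, 'a', ""},      /* a = 1;            */
        {2, 'b', "a"},     /* b = a;            */
        {3, 'c', "b"},     /* c = b;            */
        {4, 'v', "c"},     /* if (c) v = 10;    */
        {6,  0,  "v"},     /* printf("%d", v);  */
    };
    int n = sizeof trace / sizeof trace[0];

    char relevant[26] = "";   /* variables whose values need explaining */
    int  in_slice[7]  = {0};

    /* Start from the observable error: the value printed at line 6. */
    for (int i = n - 1; i >= 0; i--) {
        Event *e = &trace[i];
        int matters = (i == n - 1) ||
                      (e->def && strchr(relevant, e->def));
        if (!matters) continue;
        in_slice[e->line] = 1;
        /* The statement's inputs now need explaining too. */
        strncat(relevant, e->uses, sizeof relevant - strlen(relevant) - 1);
    }

    printf("slice (lines in the bug report):");
    for (int l = 1; l <= 6; l++)
        if (in_slice[l]) printf(" %d", l);
    printf("\n");   /* prints: 1 2 3 4 6 */
    return 0;
}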

Manual versus Automated
To understand the power of the dynamic slicing methodology, it is important to compare it with conventional software debugging tools (Figure 5-2 below), such as gdb for C, jdb for Java, or VBwatch for Visual Basic.

All of these tools essentially track the program execution for a given input. The programmer can set “breakpoints,” guiding the tool to freeze the program execution at specific control locations, and then observe values of specific variables at these locations. However, note that the entire debugging process is still manual.

Figure 5-2. Software engineering without a model: possible validation mechanisms

The programmer has to instruct the debugging tool about where to stop (i.e., where to set the breakpoint), and then manually observe selected variables at these breakpoints. The tool is only keeping track of the program execution, but not analyzing the program execution in any way!

Thus, existing debugging tools do not employ any analysis of the execution trace—they only record or profile the execution trace and display the trace information. The real issue at hand is not the visualization of the trace information—many of the existing debuggers have detailed graphical user interfaces (GUIs) for this purpose.


Figure 5-3. Snapshot of a conventional debugger (gdb for C)

Figure 5-3 above shows a snapshot of a conventional debugger, in this case the well-known gdb debugger for C. It collects, and lets the user visualize, relevant information about the program execution—the figure shows the user inquiring about the value of a program variable h at a specific control location of the program.

What is missing is an analysis of the execution trace to explain a possibly unexpected value of the variable h—this has to be done manually by the user. As we shall see in Part 2, the dynamic slicing method provides such an analysis.

Next in Part 2 : The importance of dynamic code slicing.

Abhik Roychoudhury is associate professor at the National University of Singapore. He received his Ph.D. in computer science from the State University of New York at Stony Brook. His research interests are system modeling and validation, with a specific focus on embedded software and systems.

Used with permission from Morgan Kaufmann, a division of Elsevier. Copyright 2009, from “Embedded Systems and Software Validation” by Abhik Roychoudhury. For more information about this title and other similar books, visit www.elsevierdirect.com.
