Integrate static analysis into a software development process
These tools will give you higher reliability and improved quality for your embedded software.
Software reliability is an increasing risk to overall system reliability. As systems have grown larger and more complex, functionality in mission- and safety-critical systems is more often exclusively controlled through software. This change, coupled with improving reliability in hardware modules, has shifted the root cause of systems failure from hardware to software.
Static analysis is a technique that can improve the quality and reliability of embedded systems software. Integrating static-analysis tools and techniques into the development process can yield significant reductions in development testing and field failures. However, integrating static analysis into a development process can be daunting, especially if a large amount of legacy code is used in the development projects.
Static code analysis is a broad term for a set of techniques used to aid in the verification of computer software without actually executing the programs. The sophistication of the analysis varies greatly depending on the tool employed. The simplest tools often only search source code for text pattern matches or calculate basic program metrics (such as cyclomatic complexity or Halstead complexity) to determine the likelihood of problems arising from a given code segment. More advanced static-analysis tools act as an advanced compiler for the source code, deeply analyzing both execution and data flow for faults that may lead to a field failure. Some of the most advanced tools will also include link information in their analysis to determine higher-level problems.
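As an illustration of the simplest class of tool, cyclomatic complexity can be approximated by counting decision points in the source text. The sketch below (a hypothetical illustration in Python, not any shipping tool) counts branch keywords and operators lexically; a real metric tool parses the code rather than pattern-matching it, and would skip comments and strings.

```python
import re

# Decision-point keywords/operators that add a branch to the control-flow graph.
# This lexical approximation ignores comments and strings; a real tool parses the code.
_DECISION_RE = re.compile(r"\b(if|for|while|case)\b|&&|\|\||\?")

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe cyclomatic complexity: decision points + 1."""
    return len(_DECISION_RE.findall(source)) + 1

code = """
int classify(int x) {
    if (x < 0 && x > -10)
        return -1;
    for (int i = 0; i < x; i++) {
        if (i % 2) x--;
    }
    return x;
}
"""
print(cyclomatic_complexity(code))  # 4 decision points + 1 = 5
```

A threshold on this number (10 is a common rule of thumb) is then used to flag code segments likely to harbor problems.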
Static analysis of source code doesn't represent new technology. Commonly used during implementation and review to detect software implementation errors, static analysis has been shown to reduce software defects by a factor of six,1 as well as detect 60% of post-release failures.2
Static analysis has many uses during the development process. Safety-critical software developers have long been proponents of using static-analysis tools for critical applications. However, static-analysis tools offer many advantages to those working in less critical areas. One report claims that static analysis can remove upwards of 91% of errors within source code.3
Depending upon the specific tool employed, static-analysis techniques can detect buffer overflows, security vulnerabilities, memory leaks, timing anomalies (such as race conditions, deadlocks, and livelocks), dead or unused source code segments, and other common programming mistakes. Recent research has shown a relationship between the faults detected during automated inspection and the actual number of field failures occurring in a specific product.
Currently more than three dozen static-analysis tools are readily available for software development, some of which are shown in Table 1. Static-analysis tools exist for most common programming languages, though the majority of tools support C, C++, or Java.
Static-analysis tools can be classified in many different ways. The most common classification is by development intent, either academic- or research-based versus commercial tools. Academic tools are often released under an open-source agreement and thus are readily available and inexpensive. However, because these tools are often offshoots from a research project, they may not be fully documented or supported. Beyond this basic set of classifications, static-analysis tools can be classified by the types of faults they're intended to detect or by the methodology they employ to detect those faults.
The simplest static-analysis tools operate simply by searching the source-code text for textual patterns matching known faults. A simple conceptual example can be provided using the UNIX grep tool and a segment of C code. In this example, the code shown in Listing 1 is to be searched for locations in which the assignment operator (=) was inadvertently typed instead of the equality operator (==). The commands issued and results are shown in Listing 2.
While these simple textual searches can detect faults, they suffer from many inaccuracies. Because the lexical rules of the language aren't recognized, false positives can easily be tagged; in this example, a harmless comment was tagged. Other problems with simple textual matching occur based upon variable and function names used by a programmer. If, for example, a security scanner is looking for potential misuses of the getc routine, a customized getc developed by the programmer may be flagged as well. Nevertheless, simple textual searches can be employed for rudimentary purposes.
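The weakness described above is easy to reproduce. The sketch below (a hypothetical stand-in for the grep commands of Listing 2, written in Python) flags a single `=` inside an `if` condition purely lexically. Because it knows nothing of the language's lexical rules, it correctly flags the real fault but also tags a harmless comment.

```python
import re

# Flag lines where a single `=` appears inside an if-condition -- the classic
# "assignment instead of equality" fault. Purely lexical: no knowledge of
# comments or strings, so false positives are expected.
SUSPECT = re.compile(r"if\s*\(.*[^=!<>]=[^=].*\)")

source = [
    'if (status = ERROR) {            /* real fault: should be == */',
    'if (status == ERROR) {           /* correct, not flagged */',
    '/* set flag with: if (x = 1) */  /* harmless comment, false positive */',
]

hits = [line for line in source if SUSPECT.search(line)]
for h in hits:
    print(h)   # prints the real fault AND the comment
```

Two of the three lines are flagged: the genuine fault and the comment, which is exactly the kind of false positive the text describes.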
Style-checking tools are static-analysis tools intended to verify the compliance of a given source-code module with existing coding style guidelines. Style-checking tools are often used to detect ill-formed variable and method names, comment problems, indentation inconsistencies, and other issues related to coding style guidelines. As such, style-checking tools often don't find the faults that lead to software failure.
Semantic-analysis tools extend the compiler parse tree by adding semantic information. This additional information is then validated against a set of rules, looking for violations that represent a statically detectable fault. Typical faults that can be detected through semantic analysis include data-type problems, uninitialized variables, and unused methods.
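A toy illustration of one such semantic check: given a straight-line program represented as (target, sources) assignments, flag any variable used before it has been assigned. The representation here is invented for illustration; real tools derive this information from the compiler's parse tree and must also handle branches and loops.

```python
# Toy semantic check: flag uses of variables before any assignment.
# Statements are (target, sources) pairs for a straight-line program;
# a real tool builds this from the parse tree and handles control flow.
def find_uninitialized(statements):
    defined = set()
    faults = []
    for lineno, (target, sources) in enumerate(statements, start=1):
        for src in sources:
            if src not in defined:
                faults.append((lineno, src))
        defined.add(target)
    return faults

program = [
    ("a", []),        # a = constant
    ("b", ["a"]),     # b = a : a is defined, OK
    ("c", ["d"]),     # c = d : d never assigned -> fault
]
print(find_uninitialized(program))  # → [(3, 'd')]
```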
Deep-flow static-analysis tools extend semantic analysis to include control-flow graph generation and data-flow analysis. Depending on the specific tool employed, deep-flow static-analysis tools can capture faults related to race conditions and deadlocks, pointer misuses, and other faults. Other static-analysis tools employ meta compilation and abstract interpretation to further improve their analysis capabilities.
Current static-analysis tools
Lint is one of the first and most widely used static-analysis tools for C and C++. Lint, initially released for UNIX environments by Bell Labs, checks programs for a large set of syntax and semantic errors. However, Gimpel Software has released a version that operates in DOS, Windows, and OS/2 environments, as well as UNIX. Newer versions of Lint include value tracking, which can detect subtle initialization problems; inter-function value tracking, which tracks values across function calls during analysis; and strong type checking. Lint can also validate source code against common safer programming subsets, such as the MISRA C standards and Scott Meyers' Effective C++ series of standards.
Lint also supports code-portability checks, which can verify that there are no known portability issues with a given set of source code. Several add-on companion programs can aid in the execution of the Lint program. One such tool, ALOA, automatically collects a set of metrics from the Lint execution that can help with a source-code QA check. ALOA provides an overall lint score, as well as breakdowns by source-code module of the number and severity of faults discovered. ALOA is available under the GPL and can be embedded into several integrated development environments.
QAC, and its companion tools QA C++, QAJ, and QA Fortran, have been developed by Programming Research. Each tool is a deep-flow static analyzer tailored to the given languages. This tool can detect language implementation errors, inconsistencies, obsolescent features, and programming standard transgressions through code analysis. Version 2.0 of the tool issues over 800 warning and error messages, including warnings regarding non-portable code constructs, overly complex code, or code that violates the ISO/IEC 14882:2003 C++ standard. In addition to providing deep-flow static analysis of the source code, QAC generates a set of metrics and displays to help the software engineer understand the source code. The QAC and QAC++ tools can validate several different safer coding standards, including the MISRA C coding standard, and the High Integrity C++ coding standard.
The Polyspace Ada Verifier was developed as a result of the Ariane 501 launch failure and can analyze large Ada programs and reliably detect run-time errors. Polyspace C++ and Polyspace C verifiers have subsequently been developed to analyze these languages. The Polyspace tools rely on a technique referred to as abstract interpretation. The Polyspace Verifier tools have been heavily used in the transportation sectors, namely the automotive, aerospace, and railway transport areas.
CodeSonar from GrammaTech is a deep-flow static-analysis tool for C and C++ source code. The tool can detect many common programming errors, including null-pointer dereferences, divide-by-zeros, buffer overruns, buffer underruns, double-frees, use-after-frees, and frees of non-heap memory. CodeSonar has multiple methods for viewing analysis results. Output can be directed into an HTML-formatted file, which can then be viewed using a standard web browser. Data can also be output in an XML format, which can be post-processed by a user-developed application into the desired format. Lastly, CodeSonar has an integrated graphical user interface, CodeSurfer, which lets the user examine the analysis output using a paradigm similar to a graphical debugger.
The NASA C Global Surveyor Project (CGS) was intended to develop an efficient static-analysis tool. The tool was brought about to overcome scalability issues present in current tools. CGS analyzes each instruction within the source code of a C program, determining if the instruction sequences can lead to a run-time error. CGS uses abstract interpretation techniques as the basis for its static analysis of C source code. To improve performance, CGS was designed to allow distributed processing for larger projects in which the analysis can run on multiple computers.
CGS results are reported to a centralized SQL database. While CGS can analyze any ISO C program, its analysis algorithms have been specifically tuned for the Mars Pathfinder programs. This tuning results in fewer than 10% false warnings being issued; other C projects may have differing results. CGS operates only in the Linux environment.
Static analysis for Java
The Extended Static Checker for Java (ESC/Java) was developed at Compaq's Systems Research Center (SRC) as a tool for detecting common errors in Java programs. The tool analyzes annotated source code to detect common programming errors, such as null dereference errors, array bounds errors, type cast errors, and race conditions that can result in a run-time exception and program failure. ESC/Java uses program-verification technology and includes an annotation language that programmers can use to express design decisions as lightweight specifications. ESC/Java checks each class and each routine separately, so the tool can be applied to code that references libraries without the need for library source code. This also enables programmers to analyze modules whose library code hasn't yet been developed, as well as improving the tool's scalability.
JLint is a static-analysis program for Java initially written by Konstantin Knizhnik and extended by Cyrille Artho. JLint checks the Java code using data-flow analysis, abstract interpretation, and the construction of lock graphs. JLint is architected as two separate programs that interact during analysis: the AntiC syntax analyzer and the JLint semantic analyzer. Because Java inherits the majority of the C/C++ syntax, it inherits many of the failure modes present in C and C++.
The first part of JLint analysis consists of running the source code through the AntiC syntax verifier, which detects bugs related to token definition, operator priorities, and statement bodies. The Semantic verifier portion of JLint extracts information from Java class files. JLint performs local and global data-flow analysis. Local-flow analysis catches redundant and suspicious calculations. Global-method invocation flow detects the invocation of methods with possible null parameter values. JLint detects deadlock scenarios in multithreaded programs by generating a lock-dependency graph. This is also used by JLint to detect potential race conditions within source code.
JiveLint by Sureshot Software is a static-analysis tool for Java. It has three fundamental goals: to improve source-code quality by pointing out dangerous source-code constructs; to improve readability, maintainability, and debugging through enforced coding and naming conventions; and to communicate knowledge about how to write high-quality code. JiveLint is a standalone Windows application that doesn't require Java to be installed and is Windows 95/98/ME/2000/NT/XP compatible.
Lint4j (Lint for Java) is a static analyzer that detects locking and threading issues, performance and scalability problems, and checks complex contracts such as Java serialization by performing type, data flow, and lock graph analysis. In many regards, Lint4j is similar to JLint. The checks within Lint4j represent the most common problems encountered while implementing products designed for performance and scalability. Lint4j is written in pure Java and will therefore execute on any platform that has Java JDK or JRE 1.4 installed.
The Java PathFinder (JPF) program is a static-analysis model-checking tool developed by the Robust Software Engineering Group at the NASA Ames Research Center and is available under open-source licensing. This software is an explicit-state model checker that analyzes Java bytecode classes for deadlocks, assertion violations, and general linear-time temporal logic properties. The user can provide custom property classes and write listener extensions to implement other property checks, such as race conditions. JPF uses a custom Java Virtual Machine to simulate program execution during the analysis phase. JPF can check nearly any Java program that doesn't depend on unsupported native methods. JPF has been shown to be effective for verifying concurrent Java programs.
FindBugs is a lightweight static-analysis tool for Java with a reputation for uncovering common errors. It automatically detects common mistakes using "bug patterns," which are code idioms that represent common software mistakes. Bug patterns arise in Java for various reasons, like difficult language features, misunderstood API methods, misunderstood invariants when code is modified during maintenance, typos, and the use of the wrong Boolean operator. FindBugs is available as open-source software and includes support for several different invocation methods.
ITS4 is a static vulnerability scanner for C and C++, developed by Cigital. The tool was developed as a replacement for a series of grep scans on source code used to detect security vulnerabilities as part of Cigital's consulting practice. ITS4 is a simple, command-line tool for UNIX and Windows platforms. It's available as open-source software, though limitations are placed on commercial use of the tool. ITS4 scans the source code, looking for function calls like sprintf, strcpy, strcat, system, and popen that could be exploited by those with malicious intent. ITS4 attempts to analyze the ramifications of the vulnerability, resulting in a risk assessment for each vulnerability. The tool's output includes a complete report of results as well as suggested fixes for each vulnerability.
LCLint is a product of the Massachusetts Institute of Technology's Computer Science Lab and the Digital Equipment Corporation (DEC) Systems Research Center. In addition to detecting many of the standard syntactical issues, LCLint detects violations of abstraction boundaries, undocumented uses of global variables, undocumented modifications of states visible to clients, and missing initializations for an actual parameter or use of an uninitialized formal parameter. Splint is the successor to LCLint; the focus was changed to include secure programs, and the name derives from SPecification Lint and Secure Programming Lint.
Splint is a lightweight static-analysis tool. It extends LCLint to include checks for dereferencing a possibly null pointer; using possibly undefined storage or returning storage that's not properly defined; type mismatches; violations of information hiding; memory-management errors (including uses of dangling references and memory leaks); and dangerous aliasing. It also detects modifications and global-variable uses that are inconsistent with specified interfaces, problematic control flow (such as infinite loops), fall-through cases or incomplete switches, suspicious statements, buffer-overflow vulnerabilities, dangerous macro implementations or invocations, and violations of customized naming conventions.
Flawfinder was developed by David A. Wheeler to analyze C and C++ source code for potential security flaws. It operates in a similar manner to ITS4. An internal database known as the ruleset stores functions that exhibit potential security flaws. The standard ruleset includes both general issues that can have an impact on any C or C++ program, as well as specific UNIX-like and Windows functions that can be exploited. Flawfinder searches the source code for calls to the risky methods and outputs each match (hit). Listings of hits can be stored for later analysis or processing. Processing generates a list of potential security flaws sorted by their risk, which varies between 0 (smallest risk) and 5. Risk is calculated based on both the method being invoked, as well as the type and value of the parameters passed into the function.
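The ruleset idea can be sketched in a few lines. The table of risky functions and risk levels below is hypothetical and only loosely modeled on tools such as ITS4 and Flawfinder; the real rulesets are far richer and, as noted above, weigh the type and value of parameters as well as the function name.

```python
import re

# Hypothetical miniature "ruleset": risky C functions mapped to a 0-5 risk
# level, loosely modeled on ITS4/Flawfinder. Real rulesets are far larger
# and also consider how the function's parameters are formed.
RULESET = {"gets": 5, "strcpy": 4, "sprintf": 4, "system": 4, "strncpy": 1}

def scan(source):
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for func, risk in RULESET.items():
            if re.search(r"\b%s\s*\(" % func, line):
                hits.append((risk, lineno, func))
    return sorted(hits, reverse=True)   # highest risk first

code = """\
char buf[16];
gets(buf);
strncpy(buf, src, sizeof buf);
system(cmd);
"""
for risk, lineno, func in scan(code):
    print(f"line {lineno}: {func} (risk {risk})")
```

Sorting hits by risk lets a reviewer triage the most dangerous calls (here `gets`) before the merely questionable ones (`strncpy`).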
The Rough Auditing Tool for Security (RATS) is another basic lexical analysis tool for C and C++, similar to ITS4 and Flawfinder. As implied by its name, RATS only performs a rough analysis of source code for security vulnerabilities, and it won't find all errors.
The Eau Claire static-analysis tool finds errors in C programs that could result in security breaches. Example errors include buffer overflows, file-access race conditions, and string-formatting errors. Eau Claire works by translating a program's source code into a series of verification conditions and presenting the verification conditions to an automatic theorem prover, Simplify.
MOPS (MOdelchecking Programs for Security properties) was developed by Hao Chen in collaboration with David Wagner to find security bugs in C programs and to verify compliance with rules of defensive programming. MOPS targets developers of security-critical programs and those tasked with reviewing the security of existing C code. MOPS checks for violations of temporal safety properties that dictate the order of operations in a sequence. For example, a setuid-root C program should never execute an untrusted program without first dropping its root privilege.
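A temporal safety property of this kind reduces to a small state machine run over the program's sequence of security-relevant operations. The sketch below is a conceptual illustration of that idea, not MOPS's actual model-checking algorithm, and the operation names are invented.

```python
# Minimal temporal-safety check in the spirit of MOPS: a trace of
# security-relevant operations is run through a small state machine
# encoding "a setuid-root program must drop privilege before exec".
# Operation names here are illustrative, not MOPS syntax.
def check_trace(trace):
    privileged = True          # program starts with root privilege
    violations = []
    for i, op in enumerate(trace):
        if op == "drop_privilege":
            privileged = False
        elif op == "exec_untrusted" and privileged:
            violations.append(i)
    return violations

good = ["open_config", "drop_privilege", "exec_untrusted"]
bad  = ["open_config", "exec_untrusted", "drop_privilege"]
print(check_trace(good), check_trace(bad))  # → [] [1]
```

MOPS itself explores all paths through the program's control-flow graph rather than a single concrete trace, which is what makes it a model checker rather than a runtime monitor.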
Integrating static analysis
Integrating static-analysis tools into the software-development process offers many significant advantages. By requiring the tools' use, many faults that would otherwise lie dormant until a code inspection is conducted will be caught earlier in the process, while the developer is writing the source code. Bugs found during this phase can be 5 to 10 times cheaper to repair than those left for the testing phase.
Developing a coding standard
Integrating static-analysis tools into the development process begins by establishing a standardized rule set for source code. This is best handled by establishing two segments, a coding standard and a style guidelines document. The latter provides stylistic guidance for developing source-code modules. Items that should be covered in the style guidelines include appropriate copyright notices, requisite method commenting, indentation, naming conventions, and other stylistic guidelines.
The coding style guideline will typically draw objections from programmers who dislike the defined style. Because most style guidelines don't directly result in software failure, they should be treated as guidance and not enforced as rigidly as the rules in the coding standards document. A style-checking tool can greatly assist in this area, and compliance can be greatly enhanced if appropriate templates are made available for modern code editors. Most modern source-code editors (such as Eclipse, JEdit, or CodeWright) include tools that automatically format source code to comply with a given coding style.
The coding standard document defines which coding constructs shouldn't be used in project development. Coding standard rules should be strictly enforced. For example, the rule "all variables shall be assigned a value before being used in any operation" is one which can be statically detected and can be easily understood by the typical embedded systems programmer. In addition to defining the rules for the coding standard, it's also important to provide justification for the rules. In the variable rule example, for instance, the justification for the rule might be "uninitialized variables can lead to random program operation and resulting field failures." This rationale helps a software engineer understand why the given rule is important and guides the engineer when he or she is assessing if a deviation is warranted.
Constructing a thorough and complete coding standard may seem like a daunting task at first. However, several coding standards are currently available for C and C++ that can be used as a starting point. One of the best known C standards is MISRA C. When starting from an available standard, the local standard may simply document which rules are given a blanket exception and provide guidance for deviating from the other rules.
As part of the coding standard, it's important to have a documented and detailed method for handling deviations from that standard. Although the individual circumstances will differ for each development process, several engineers familiar with the standard should review the deviations before they are approved. It's important to have a rigorous but usable deviation process for the coding standard. If deviations are allowed to occur frequently and easily, the purpose for the standard will be defeated. Yet, if it's too difficult to obtain a deviation, the development process may become unmanageable.
Once a coding-standard document has been developed, it's important to automate checking as much as possible. Note that the checking must be repeatable and foolproof. Repeatability is important because static-analysis results would have little value if one engineer obtained completely different results than another. Being foolproof requires that the process for running the static-analysis tool be both intuitive and time efficient.
One of the best methods to obtain this performance is to integrate the static-analysis tools into an automated build script. Each organization will have a slightly different configuration management system, depending on the capabilities of the version management system employed and the philosophy of the system administrator. The best use of a configuration management system comes from full automation of the build process. With full automation, the user simply selects which version of code is to be built and the build script will automatically validate that the correct compiler is installed, obtain and compile the source code, and create a link image.
Assuming that such a system exists, integration of the static-analysis tool into the development process can be achieved by simply adding a stage to the build process. A general flowchart for such a build script is shown in Figure 1. This additional step would validate that the correct static-analysis tool version has been installed, check out the configuration file(s) for the given static-analysis tool, and execute the analysis tool on each source code file as compilation occurs. By doing this, it's possible to achieve highly repeatable and valid analysis results.
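Such a build stage might look like the following sketch. The real analyzer invocation (normally a subprocess call to the installed tool) is injected here as a callable so the flow can be shown end to end; the function and file names are illustrative, not part of any particular build system.

```python
# Sketch of adding a static-analysis stage to an automated build script.
# `run_analyzer` stands in for invoking the real tool (e.g. via subprocess);
# it is injected as a callable so the flow can be demonstrated end to end.
def analysis_stage(sources, run_analyzer, inhibit=False):
    """Run the analyzer on every source file; return (passed, warnings)."""
    if inhibit:                      # development builds may skip analysis,
        return True, []              # but release builds must not set this
    warnings = []
    for path in sources:
        warnings.extend((path, w) for w in run_analyzer(path))
    return len(warnings) == 0, warnings

# Dummy analyzer for illustration: pretend one file has a warning.
fake_results = {"main.c": ["uninitialized variable 'x'"], "util.c": []}
passed, warnings = analysis_stage(["main.c", "util.c"], fake_results.get)
print(passed, warnings)
```

Because the stage runs the same tool version with the same configuration files checked out of configuration management, any two engineers building the same revision get identical analysis results.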
While integrating the static-analysis tool into the build script has its merits in achieving repeatability, there may be a slight performance penalty when executing the build. Because of this, it might be worthwhile to include an option in the build script to inhibit tool execution during development builds. This option will enable development engineers to build their code efficiently during routine development, yet have an easy way to statically verify their code before archiving any revisions. In any case, if an inhibit option is included, execution of the static-analysis tool on a build should be required before the final release occurs.
Having static-analysis tools available doesn't necessarily mean that they'll be used to their fullest potential. Given schedule pressures and other tasks, the temptation to not run static-analysis tools can be great. One way to combat this problem is to require that the tool's output be included in the software formal review package. This model results in the general process flow shown in Figure 2.
This model has several advantages. First, it forces engineers to use the static-analysis tools before their code is submitted for formal review. Second, it provides a convenient way for multiple engineers to review and discuss any deviations from the coding standard, as well as document the rationale behind the deviations. Third, this mechanism hopefully imparts pride of craftsmanship into the engineers developing code, in that the static-analysis results will be reviewed by their peers before a module is released.
Retaining the results from static analysis is an often overlooked but important step when integrating static analysis into the development process. Retaining the results provides for future reviews of the module as well as an audit trail proving that static analysis was performed on the given source code. With the exact results archived in the configuration management system, it's easy to prove that the tool was executed during development per the process. In a contract environment, a customer may request the static-analysis output for review.
Storing results for future review represents one of the most important reasons that static-analysis results should be archived. In the event that a fault is later found in the software, there may be a need to search other areas of the source code for potential occurrences or to trace back to a particular revision. Storing results in the archive is best handled by storing an XML file with the output from the static-analysis tool. By using XML, many database tools can easily import and process the data, allowing the generation of reports, trends, and other useful metrics.
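One possible shape for such an archive file is sketched below. The element and attribute names are invented for illustration; each tool defines its own XML output format, and a site's archive format would normally mirror its chosen tool's.

```python
import xml.etree.ElementTree as ET

# One way to archive analysis results: an XML file per module revision.
# Element and attribute names are made up for illustration; real tools
# define their own XML output formats.
def results_to_xml(module, revision, warnings):
    root = ET.Element("analysis", module=module, revision=revision)
    for lineno, rule, message in warnings:
        ET.SubElement(root, "warning", line=str(lineno), rule=rule).text = message
    return ET.tostring(root, encoding="unicode")

xml_doc = results_to_xml("motor_ctl.c", "1.7",
                         [(42, "MISRA-9.1", "variable 'speed' used before set")])
print(xml_doc)

# Reading it back later for reports and trending:
count = len(ET.fromstring(xml_doc).findall("warning"))
```

Once the results are structured this way, importing them into a database for trend reports is a straightforward transformation.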
Reports and trend information are often important to use when managing the static-analysis portion of the development process. If trend data indicates a significant number of projects are being forced to deviate from a given coding standard, it may be advisable to revisit the given rule. It may be too strict or not adequately defined given the domain for the software development. Other trends may reflect a need to update the training of engineers on the coding standards and re-emphasize the rationale for various coding rules.
Incorporating static analysis into the development of new source code is an easy way to begin using the tools. By starting at the beginning, engineers can minimize rule violations and tailor the source code to comply with the analysis toolset. For new code, it's straightforward to enforce compliance through version-management practices. As previously stated, by bringing the output from a static-analysis tool to a code inspection, it's possible to review the static-analysis outputs and log any changes that must be made before the code is released.
Other mechanisms that can easily be employed with new code include a "source-code librarian" who is responsible for the release and integration of all elements into a final project. This person, acting as part of the quality assurance tasks for a project, merges individual engineering branches into the final release code. If a given module isn't clean enough for release, the module won't be merged into the release candidate.
Applying static analysis to existing code that's not intended for obsolescence can be challenging. Depending on the age of the code, the engineer's programming style, and the paradigms used, applying static analysis to existing code can range from difficult to nearly impossible if a disciplined approach isn't followed. Many legacy projects have approached static analysis only to abandon it when the first run of the tool generates 100,000 or more warnings. With legacy code, it's often not practical to remove all statically detectable faults.
Treat the repair of statically detectable faults in the same domain as bug fixes. Each time a fault is removed, there's the possibility of injecting a more serious fault into the module. The worst outcome would be to "repair" a false positive that was flagged as a fault and inject a failure in the process. However, this analysis must also be conducted carefully and diligently; after all, each statically detectable fault has the potential to cause a failure. This has manifested itself many times; one of the most blatant examples is the failure of Ariane 5, documented by Hatton.4 With legacy code, the most important information to track isn't necessarily the presence of statically detectable faults, but the change in the number of faults as revisions are made to that code. The concepts behind this are shown in Figure 3.
Applying static analysis to an existing project begins in much the same manner as developing a coding standard for the organization. Initially, without any concern for the number of statically detectable faults that will be detected, the static-analysis tool should be run on the existing code base. From this initial baseline of results, a set of reports can be generated to determine the appropriate path for the existing software.
One report that's necessary is a profile of which warnings are present within the source code. There may be frivolous warnings that under normal circumstances would be fixed but for a legacy project are going to be left alone. One report would characterize only those warnings that are deemed to be most severe within the existing code base. Based on a risk assessment, these may be immediately fixed by the development team.
One last report would profile the statically detectable fault rates versus the size of the associated modules. Research has shown a correlation between statically detectable fault density and the propensity of a module to fail in the future. This can be used as a guide for future project planning; for example, such a report may justify expediting the rewrite of a module fraught with statically detectable faults.
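Such a report can be produced directly from the archived warning counts. The sketch below computes faults per thousand lines of code (KLOC) for each module and sorts worst-first; the module names and counts are invented for illustration.

```python
# Sketch of a fault-density report: statically detectable faults per
# thousand lines of code (KLOC) per module, sorted worst-first.
# Modules at the top of the list are candidates for rewriting.
def density_report(modules):
    """modules: {name: (fault_count, lines_of_code)}"""
    report = [(1000.0 * faults / loc, name)
              for name, (faults, loc) in modules.items()]
    return sorted(report, reverse=True)

legacy = {"protocol.c": (120, 4000), "display.c": (5, 2500), "motor.c": (60, 1500)}
for density, name in density_report(legacy):
    print(f"{name}: {density:.1f} faults/KLOC")
```

In this invented example, `motor.c` tops the list at 40 faults/KLOC despite having fewer total faults than `protocol.c`, which is exactly the kind of prioritization raw counts alone would miss.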
At this point in development, an XML report of statically detectable faults should be archived for the module, establishing its baseline. As future changes are made to the module, the statically detectable faults output from a given analysis tool are compared against this dataset. Future patches should ensure that, in fixing bugs or adding features, the number and locations of statically detectable faults left over from the original code stay the same or decrease versus the initial baseline.
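The baseline comparison itself can be as simple as a set difference. In the sketch below, faults are identified by a (rule, function) pair rather than a line number so that unrelated edits, which shift line numbers, don't raise false alarms; the identifiers are illustrative.

```python
# Baseline comparison for legacy code: a revision passes if it introduces no
# statically detectable faults beyond those in the archived baseline.
# Faults are keyed by (rule, function) rather than line number so that
# unrelated edits that shift lines don't disturb the comparison.
def regressions(baseline, current):
    return sorted(set(current) - set(baseline))

baseline = {("MISRA-9.1", "init_motor"), ("null-deref", "read_packet")}
current  = {("null-deref", "read_packet"), ("buffer-overrun", "log_event")}

new_faults = regressions(baseline, current)
print(new_faults)        # any new fault blocks the release
```

Note that a fault disappearing from the baseline (here the `init_motor` warning) is fine; only newly introduced faults are flagged.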
The education component of integrating static-analysis tools into the software development process can't be overlooked. The appropriate use of static-analysis tools requires that engineers be familiar with their capabilities as well as their limitations. It's not enough to see a fault and edit the source code until the warning disappears. The engineer must understand the rationale for the fault and be able to develop an appropriate plan of attack to remove it. It's important to allow enough time to familiarize engineers with the tools, the coding standard, and the software-development process. Most tool vendors offer some form of training, either on site, on the Web, or at a remote location. There are also numerous consulting companies that offer such training. Furthermore, most static-analysis tools come with extensive online documentation, including tutorials.
Training engineers in the software coding standard is something that must be conducted at each location. Each company will have its own customized software-development process, coding standard, and other rules and regulations that have an impact on the use of static analysis. Training courses are available for given coding standards if the local standard is based on an external standard. For example, many courses are available for the MISRA C standard.
One of the most critical steps for integrating static analysis is the development of the coding standard. From this standard, the next important step is to automate the compliance verification. This is most easily done by integrating the analysis tool into the configuration management system and build process, which enables teams to conduct a repeat analysis quickly and deterministically.
Although using a new coding standard is easier with new code developments, it can be applied to legacy code if appropriate care is taken. Legacy packages are often developed without a consistent style and have undergone numerous patch modifications since initially being developed. Because of this, it's often difficult to apply existing coding standards to legacy code.
In conclusion, it's important to remember that static analysis is not a silver bullet for solving all software development problems. It's a powerful concept and can significantly aid in development of higher quality software. However, it's only one of the many tools necessary to develop quality software. Quality development still requires attention to detail during the requirements analysis phase, the design phase, and the testing phases as well as the commitment of the software engineers, the software managers, and others involved with the project.
Walter Schilling is a doctoral candidate at the University of Toledo, studying software reliability. He has worked in the automotive industry as an embedded software engineer. He can be reached at email@example.com.
Mansoor Alam is a professor of electrical engineering and computer science at the University of Toledo, Ohio. He received his BScEng from Aligarh Muslim University, and ME and PhD from Indian Institute of Science, Bangalore, in electrical engineering. His research includes fault-tolerant systems and reliability, MPLS networks, scheduling algorithms in multiservice routing switches, and performance analysis of high-speed networks. He has published in IEEE Transactions, refereed international conferences and journals, and is director of the Ohio Communications and Computing Advanced Research Network (OCARNet). Contact him at firstname.lastname@example.org.
1. Xiao, S. and C. H. Pham, "Performing high efficiency source code static analysis with intelligent extensions," APSEC 2004, pp. 346-355.
2. QA Systems, "Overview large Java project code quality analysis," QA Systems, Tech. Rep., 2002.
3. Glass, R. L., "Inspections--some surprise findings," Commun. ACM, vol. 42, no. 4, pp. 17-19, 1999. [Online]. Available: http://portal.acm.org/citation.cfm?id=293411.293481#
4. Hatton, L., "Ariane 5: A smashing success," Software Testing and Quality Engineering, vol. 1, no. 2, 1999.