Think static analysis cures all ills? Think again.

Static code analysis has been around as long as software itself, but you'd swear from current trade shows that it was just invented. Here's how to choose the right code-analysis tools for your project.

Static analysis (or static code analysis) is a field full of contradictions and misconceptions. It's been around as long as software itself, but you'd swear from current trade shows that it was just invented. Static analysis checks the syntactic quality of high-level source code, and yet, as you can tell from listening to the recent buzz, its findings can be used to predict dynamic behavior. It is a precision tool in some contexts and yet in others, it harbors approximations.

Given such contradictions, it's hard to believe that all of these statements can be accurate. Static analysis is a generic term: it indicates only that the analysis of the software is performed without executing the code. So a simple peer review of source code fits the definition just as surely as the latest tools, with their white papers full of various incantations of technobabble.

There isn't much point in any such analysis existing in isolation: even perfectly written code is only correct if it meets project requirements. It's therefore also important to understand how well any such analysis fits within the development lifecycle.

No analysis is good or bad simply by virtue of being static or dynamic in nature. It follows that no analysis tool is good or bad (or, perhaps more pertinently, appropriate or inappropriate) just because it's statically or dynamically based. It's important, then, to look past the subtle advertising and self-congratulatory white-paper proclamations to consider the relative merits and demerits of static analysis and its ability to predict dynamic behavior. Can a solid static-analysis engine bypass the need for dynamic analysis? In this article, I explore current technologies, explain how static analysis predicts dynamic behavior, and help developers understand which method to use under which circumstances.

We'll look specifically at five key attributes of analysis tools, shown in the sidebars.

Key attributes of static-analysis tools
1. Automated code review automates the peer-review process to enforce coding rules that dictate coding style and naming conventions, and to restrict the commands available to developers to a safe subset. Code review doesn't predict dynamic behavior, except to the extent that code written in accordance with coding standards can be expected to include fewer flaws that might lead to dynamic failure.

2. A formally defined language (such as SPARK Ada) defines desired component behavior and individual run-time requirements. These may take the form of specially formatted comments in the native language that are ignored by a standard compiler but can be statically analyzed to show that the program is “well-formed,” consistent with the design information included in its annotations, and has certain properties specified in those annotations. The annotations therefore make it possible to precisely predict dynamic behavior via static analysis. The Larch/C++ approach is similar in concept and uses a predicate-oriented interface language.

3. Prediction of dynamic behavior through static analysis models the high-level code to predict the probable behavior of the executable that would be generated from it. This approach builds an approximate mathematical model of the code and then simulates all possible execution paths through that model, mapping the flow of logic on those paths coupled with how and where data objects are created, used, and destroyed. This approximation is used to predict anomalous dynamic behavior that could possibly result in vulnerabilities, execution failure, or data corruption at run time.

The first three are attributes of static-analysis tools. Notably, these attributes don't comprehensively describe the categories of static-analysis tools, and many tools include more than one of these attributes.

The distinction between static and dynamic analysis is further confused when there is a requirement to predict dynamic behavior. At that point, dynamic analysis of code that has been compiled, linked, and executed offers an alternative to the prediction of dynamic behavior through static analysis.

Dynamic-analysis tools involve the compilation and execution of the source code either in its entirety or on a piecemeal basis. Again, while many different approaches can be included, these characteristics complete the list of the five key attributes that form the fundamental “toolbox of techniques.”

Key attributes of dynamic-analysis tools
4. Execution tracing (or code coverage analysis) details which parts of compiled and linked code have been executed, often by means of software instrumentation probes that are automatically added to the high-level source code before compilation (a sketch of such instrumentation follows this list).

5. Unit testing compiles, links, and builds snippets of software code so that test data (also called “vectors”) can be specified and checked against expectations. Unit testing can be extended to include the automatic definition of test vectors by the unit-test tool itself.
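
To give a concrete sense of attribute 4, here is a minimal sketch of what instrumented C code might look like. The probe macro and its identifiers are invented for illustration; every coverage tool generates its own instrumentation and reporting format.

    #include <stdio.h>

    /* Hypothetical probes: a real coverage tool inserts its own
       instrumentation automatically before compilation. */
    static unsigned char probe_hit[4];
    #define PROBE(id) (probe_hit[(id)] = 1u)

    /* The function under test, with a probe added on each branch. */
    int classify_reading(int reading, int low, int high)
    {
        PROBE(0);                      /* function entry     */
        if (reading < low) {
            PROBE(1);                  /* "too low" branch   */
            return -1;
        }
        if (reading > high) {
            PROBE(2);                  /* "too high" branch  */
            return 1;
        }
        PROBE(3);                      /* "in range" branch  */
        return 0;
    }

    int main(void)
    {
        (void)classify_reading(5, 0, 10);   /* exercises probes 0 and 3 only */
        for (unsigned int i = 0u; i < 4u; i++) {
            printf("probe %u: %s\n", i, probe_hit[i] ? "hit" : "NOT hit");
        }
        return 0;
    }

Running it shows that two of the four probes were never hit, which is precisely the information a coverage report presents, in far greater detail, for a complete test campaign.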

Test-tool vendors offer a plethora of combinations of these key static and dynamic attributes and claim their particular combination to be invaluable to the efficient and effective development of software.

Despite their lofty claims, no single vendor touts an offering that embraces all of these attributes. And, attempting to apply every one of the five techniques through a combination of tools would usually be prohibitively expensive both in terms of capital investment for the tools and labor costs for software testing.

Considering the alternatives
Given that none of the vendors are keen to highlight where their own offering falls short, some insight into how to reach such a decision on your own would surely be useful.

By considering which attributes are most appropriate for a particular situation, you can decide which product best fits it.

Although vendors make presentations assuming developers are to work on a virgin project where they can pick and choose what they like, that's often not the case. Many development projects enhance legacy code, interface to existing applications, are subject to the development methods of client organizations and their contractual obligations, or are restricted by time and budget.

The underlying direction of the organization for future projects also influences choices: 

  • Is this a quick fix for a problem project in the field? Is the search for a software-test tool driven by a mysterious, occasional run-time crash during final test?
  • Maybe there is a development on the order books that involves legacy code requiring a one-off change for a long-standing client, but which is unlikely to be used beyond that.
  • Perhaps you have existing legacy code and want to raise the quality of software development on an ongoing basis for new developments and/or the existing code base.

Or perhaps there is a new project to consider, but the lessons learned from past problems suggest that ongoing enhancement of the software development process would be beneficial.

To address your particular situation, it's initially useful to consider how each of the five key attributes fits into the development process.

The diagram in Figure 1 superimposes the different analysis techniques on a traditional “V” development model. Obviously, your particular project may use another development model; in truth, the analysis is model-agnostic. A similar representation could be conceived for any other development process model: waterfall, iterative, agile, and so forth.


Figure 1. The five key analysis techniques superimposed on a traditional “V” development model.

The extent to which it is desirable to cover all elements of the development cycle depends very much on the initial state of development and the desired outcome.

During the coding phase, the application of coding standards and hence the use of automated code review is the least controversial of the five attributes. Many tools help with this phase of development, so product selection hinges on:

  • Support of specific standards you need to comply with (such as MISRA, IEC 62304, CERT C, or internal company or project standards). Note that where tools claim to cover the same standard, the number of rules checked for that standard will vary from one tool to another.
  • How easily you can adopt the tool into your development process.

How the five are used
I describe the tools slightly out of the order in which they appear in the sidebars:

Automated code review can be applied whether the code under development is for a new project, an enhancement, or a new application using existing code. With legacy applications, automated code review is particularly strong for presenting the logic and layout of such code in order to establish an understanding of how it works with a view to further development. On the other hand, with new development the analysis can begin as soon as any code is written—no need to wait for a compilable code set, let alone a complete system.
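
By way of illustration, the fragment below shows the sort of constructs that an automated code review tool configured for a MISRA-style rule set would typically flag, followed by a version that satisfies such rules. The function and the rule descriptions in the comments are invented and paraphrased for illustration, not quoted from any particular standard.

    #include <stdint.h>

    /* As written: likely to be flagged by automated code review. */
    uint16_t average_raw(uint16_t a, uint16_t b)
    {
        int sum = a + b;        /* unsigned operands, signed result type: implicit conversion */
        if (sum = 0)            /* assignment where a comparison was intended                 */
        {
            return 0;           /* signed literal returned from an unsigned function          */
        }
        return sum / 2;         /* implicit narrowing conversion on return                    */
    }

    /* Reworked to satisfy typical coding rules. */
    uint16_t average_reviewed(uint16_t a, uint16_t b)
    {
        uint32_t sum = (uint32_t)a + (uint32_t)b;   /* widen explicitly  */
        uint16_t result = 0u;

        if (sum != 0u)
        {
            result = (uint16_t)(sum / 2u);          /* narrow explicitly */
        }
        return result;
    }

Neither version changes what the function is for; the review simply removes constructs that experience shows to be frequent sources of latent defects.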

Formally defined languages (or formal methods) are labor intensive and, although they have their benefits, tend to be limited to highly safety-critical applications where functional integrity is absolutely paramount over any financial consideration (such as flight-control systems). Even then, the alternatives outlined here often prove to be more financially prudent and offer similar levels of quality.

Unlike the prediction of dynamic behavior through static analysis, the use of Design by Contract principles, often in the form of specially formatted comments in the high-level code, can accurately formalize and validate the expected run-time behavior of source code.

Such an approach requires a formal and structured development process, textbook style, and uncompromising precision. What's more, applying it to legacy code would involve a complete rewrite.
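
For readers more at home in C than SPARK Ada, the flavor of the approach can be sketched using ACSL-style annotations of the kind consumed by tools such as Frama-C. This is only an illustrative fragment: the contract lives in specially formatted comments that an ordinary compiler ignores, while a static verifier checks that the body honors them and that every caller respects the precondition.

    #include <limits.h>

    /*@ requires x > INT_MIN;
      @ ensures \result >= 0;
      @ ensures \result == x || \result == -x;
      @ assigns \nothing;
      @*/
    int absolute_value(int x)
    {
        return (x < 0) ? -x : x;
    }

The precondition excludes the single input (INT_MIN) for which negation would overflow, and the verifier requires every caller to be shown to respect it.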

Unit testing and execution tracing focus on the behavior of an executing application and are therefore aspects of dynamic analysis. Unit, integration, and system analyses use code compiled and executed in an environment similar to the one in which the application under development will ultimately run.

Unit testing traditionally employs a bottom-up testing strategy in which units are tested and then integrated with other tested units. In the course of such testing, individual execution paths can be examined (execution tracing) to establish the most comprehensive coverage analysis. Clearly, it's not necessary to have a complete code set available in order to initiate tests such as these.
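
A minimal, self-contained sketch of the idea in C follows; the unit, its expected values, and the harness are all invented for illustration, whereas a unit-test tool would generate and manage the harness and capture the pass/fail evidence automatically.

    #include <stdio.h>

    /* Unit under test: converts a raw sensor count to tenths of a degree. */
    static int counts_to_tenths(int counts)
    {
        return (counts * 5) / 8 - 400;
    }

    /* A test vector pairs an input with its expected output. */
    struct vector { int input; int expected; };

    int main(void)
    {
        static const struct vector vectors[] = {
            {    0, -400 },   /* lower boundary            */
            {  640,    0 },   /* zero-crossing             */
            { 1600,  600 },   /* nominal mid-range reading */
            { 4095, 2159 },   /* upper boundary            */
        };
        int failures = 0;

        for (size_t i = 0; i < sizeof vectors / sizeof vectors[0]; i++) {
            int actual = counts_to_tenths(vectors[i].input);
            if (actual != vectors[i].expected) {
                printf("FAIL: input %d gave %d, expected %d\n",
                       vectors[i].input, actual, vectors[i].expected);
                failures++;
            }
        }
        printf("%d failure(s)\n", failures);
        return failures;
    }

Note that boundary values sit alongside nominal ones; extending the table as the unit evolves keeps a regression check in place.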

Unit testing is complemented by functional testing, a form of top-down testing. Functional testing executes functional test cases, perhaps in a simulator or in a target environment, at system or subsystem level.

Clearly, these dynamic approaches test not only the source code, but also the compiler, linker, development environment, and potentially even target hardware. When the functionality of the code is the primary concern, you have little alternative but to deploy dynamic analysis. Unit test or system test must deploy dynamic analysis to prove that the software actually does what it is meant to do.

Alternatives do exist when robustness testing is the key concern, however.

The static prediction of dynamic behavior works well for existing code or less rigorously developed applications. It doesn't rely on a formal development approach and can simply be applied to the source code as it stands, even when you have no in-depth knowledge of the code. That ability makes this methodology very appealing for a development team in a fix, perhaps when timescales are short but catastrophic and unpredictable run-time errors keep coming up during system test.

Prediction of dynamic behavior via static analysis can take a number of forms, but it commonly uses the software source code as a model to predict the behavior of that code when it's executed. Each operation is statically evaluated against a superset of the whole range of operating conditions that can occur during program execution. The developer can then analyze the whole data set applicable to the code under test, rather than discrete data points. Because this approach works with a model, developers can potentially use this technique before any code can be built and run. Such an advantage makes it appear to offer a universal solution.
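
A short example illustrates the kind of finding this approach produces. The function below is invented for illustration; the point is that an analysis considering the full range of each parameter can report a potential division by zero that a handful of hand-picked test values might never expose.

    #include <stdint.h>

    /* Average of a window of samples. A static predictor of dynamic
       behavior evaluates 'count' across its whole range (0..255) and can
       therefore warn that the division below may be a division by zero,
       a condition that discrete test vectors may never happen to hit. */
    int16_t window_average(const int16_t *samples, uint8_t count)
    {
        int32_t total = 0;

        for (uint8_t i = 0u; i < count; i++)
        {
            total += samples[i];
        }
        return (int16_t)(total / count);   /* flagged when count == 0 */
    }

Whether that warning reflects a genuine defect depends on whether any caller can actually pass a count of zero, which is exactly where the approximations and “false positive” reports discussed below come in.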

There is, however, a major downside. The code itself is not executing; instead, it is being used as the basis for a mathematical model. As the work of Church, Gödel, and Turing in the 1930s proved, the resulting mathematical model is always an approximation, and no future enhancement or entirely new development will change that for the static prediction of dynamic behavior.

It will never be possible for the model to be a precise representation of the code, because such a representation is mathematically insoluble for all but the most trivial examples. In other words, the goal of finding every defect in a nontrivial program is unreachable unless approximations are included, which by definition will lead to “false positive” warnings.

The complexity of the mathematical model also increases disproportionately to the size of the code sample under analysis. This is often addressed by the application of simpler mathematical modeling for larger code samples in order to keep the processing time for the analysis within reasonable bounds. But the simplifications can increase the number of these “false positives,” which has a significant impact on the time required to interpret results. This trade-off can make the whole static-prediction approach unusable for complex applications.

Dynamic approach
Another alternative to the static prediction of dynamic behavior involves the automatic definition of test vectors using unit-test tools and hence performing dynamic analysis.

Unlike the static prediction of dynamic behavior, this dynamic approach does not and can never analyze the whole data set applicable to the code under test. It does, however, involve the intelligent analysis of likely problem values, such as boundary and inflection conditions, and specifically includes these and other target values as it automatically generates the test vectors.
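
A hand-rolled sketch of that process is shown below; the unit, the candidate values, and the checking logic are all chosen for illustration, whereas a unit-test tool derives the unit's interface and its interesting values from the code itself.

    #include <stdio.h>
    #include <limits.h>

    /* The unit whose robustness is being examined. */
    static int saturating_add(int a, int b)
    {
        if ((a > 0) && (b > INT_MAX - a)) { return INT_MAX; }
        if ((a < 0) && (b < INT_MIN - a)) { return INT_MIN; }
        return a + b;
    }

    int main(void)
    {
        /* Values a generator typically selects for an int parameter:
           the extremes, their neighbors, and values around zero. */
        static const int interesting[] = { INT_MIN, INT_MIN + 1, -1, 0, 1,
                                           INT_MAX - 1, INT_MAX };
        const size_t n = sizeof interesting / sizeof interesting[0];
        int failures = 0;

        /* Cross the interesting values to form test vectors, then check
           each result against an independently computed expectation. */
        for (size_t i = 0; i < n; i++) {
            for (size_t j = 0; j < n; j++) {
                long long exact = (long long)interesting[i] + (long long)interesting[j];
                long long expected = exact;
                if (expected > INT_MAX) { expected = INT_MAX; }
                if (expected < INT_MIN) { expected = INT_MIN; }

                int actual = saturating_add(interesting[i], interesting[j]);
                if (actual != (int)expected) {
                    printf("FAIL: %d + %d gave %d, expected %lld\n",
                           interesting[i], interesting[j], actual, expected);
                    failures++;
                }
            }
        }
        printf("%d failure(s) in %zu vectors\n", failures, n * n);
        return failures;
    }

The point is not the arithmetic itself but the selection of values: extremes, near-extremes, and zero-crossings are exactly where robustness problems tend to hide.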

It's significant that this technique deploys most (or potentially all) of the development environment and hence tests something that reflects the finished product much more accurately than the static prediction of dynamic behavior does. Despite that advantage, it too can be deployed very early in the development cycle.

What are we testing here?
Perhaps the most telling point with regards to the testing of dynamic behavior—whether by static or dynamic analysis—is precisely what is being tested. Intuitively, a mathematical model with inherent approximations suggests far more room for uncertainty compared with code being compiled and executed in its native target environment.

If the requirement is for a quick-fix solution for some legacy code that will find most problems without involving a deep understanding of the code, the prediction of dynamic behavior via static analysis has merit. Similarly, this approach offers quick results for completed code that is subject to occasional dynamic failure in the field.

However, if you need to prove not only the functionality and robustness of the code but also provide a logical and coherent development environment along with an integrated and progressive development process, it makes more sense to use dynamic unit and system testing. Dynamic unit and system testing enable you to prove that the code is robust and does what it should do in the environment where it will ultimately operate.

As soon as the process becomes a critical factor, the case for an extensive dynamic element to test is compelling.

Requirements management and traceability
Most test tools ignore the requirements element of software development. That is reflected in Figure 1, in that none of the five key attributes directly covers requirements traceability at all. The fact is that even the best static and dynamic analyses will not prove that the software fulfills its requirements.

Maintain the bidirectional traceability of requirements
The intent of this specific practice is to maintain the bidirectional traceability of requirements for each level of product decomposition.

When the requirements are managed well, traceability can be established from the source requirement to its lower-level requirements and from the lower-level requirements back to their source. Such bidirectional traceability helps determine that all source requirements have been completely addressed and that all lower level requirements can be traced to a valid source.

Requirements traceability can also cover the relationships to other entities such as intermediate and final work products, changes in design documentation, and test plans. (ISO 26262 standard)

Widely accepted as a development best practice, requirements traceability ensures that all requirements are implemented and that all development artifacts can be traced back to one or more requirements. Most static- and dynamic-analysis vendors fail to provide what is needed by modern standards such as the automotive industry's draft standard ISO/DIS 26262 or the medical industry's IEC 62304, which require bidirectional traceability. These standards place constant emphasis on the need for each development tier to be derived from the one above it.

Such an approach lends itself to the continuous and progressive use first of automated code review, followed by unit test and subsequently by system test with its execution-tracing capability, to ensure that all code functions just as the requirements dictate, even on the target hardware itself (a requirement at the more stringent levels of most such standards).

While this is and always has been a laudable principle, last minute changes of requirements or code made to correct problems identified during test tend to put such ideals in disarray.

Despite good intentions, many projects fall into a pattern of disjointed software development in which requirements, design, implementation, and testing artifacts are produced from isolated development phases. Such isolation results in tenuous links among requirements, the development stages, and the development teams.

The traditional view of software development shows each phase flowing into the next, perhaps with feedback to earlier phases, and a surrounding framework of configuration management and process (such as Agile and the Rational Unified Process). Traceability is assumed to be part of the relationships between phases. However, the reality is that while each individual phase may be conducted efficiently, the links between development tiers become increasingly poorly maintained over the duration of projects.

The answer to this conundrum lies in the requirements traceability matrix (RTM), shown in Figure 2, which sits at the heart of any project even if it's not identified as such. Whether or not the links are physically recorded and managed, they still exist. For example, a developer creates a link simply by reading a design specification and using it to drive the implementation.


Figure 2. The requirements traceability matrix (RTM) at the heart of the project.

This alternative view of the development landscape illustrates the importance that should be attached to the RTM. Due to this fundamental centrality, it's vital that project managers place sufficient priority on investing in tooling for RTM construction. The RTM must also be represented explicitly in any lifecycle model to emphasize its importance as Figure 3 illustrates. With this elevated focus, the RTM is constructed and maintained efficiently and accurately.


Figure 3. The RTM represented explicitly within the development lifecycle model.

When the RTM becomes the center of the development process, it has an impact on all stages of design from high-level requirements through to target-based deployment.

  • The Tier 1 high-level requirements might consist of a definitive statement of the system to be developed. This tier may be subdivided depending on the scale and complexity of the system.
  • Tier 2 describes the design of the system level defined by Tier 1. Above all, this tier must establish links, or traceability, with Tier 1 and begin the process of constructing the RTM. It involves the capture of low-level requirements that are specific to the design and implementation and have no impact on the functional criteria of the system.
  • Tier 3's implementation refers to the source/assembly code developed in accordance with Tier 2. Verification activities include code-rule checking and quality analysis. Maintenance of the RTM presents many challenges at this level, because tracing requirements to source-code files may not be specific enough; developers may need to link to individual functions.
         In many cases, a single requirement is likely to involve several functions. Traceability of those functions back to Tier 2 requirements involves many-to-few relationships. It's very easy to overlook one or more of these relationships in a manually managed matrix.
  • In Tier 4 host-based verification, formal verification begins. Once the code has been shown to meet the relevant coding standards through automated code review, unit, integration, and system tests may be included in a test strategy that may be top-down, bottom-up, or a combination of both. Software-simulation techniques help create automated test harnesses and test-case generators as necessary, and execution histories provide evidence of the “testedness” of the code.
         Such testing could be supplemented with robustness testing if required, perhaps by means of the automatic definition of unit test vectors, or through the use of the static prediction of dynamic behavior.
         Test cases from Tier 4 should be repeatable at Tier 5 if required.
         At this stage, we confirm that the software functions as intended within its development environment, even though there is no guarantee that it will work in its target environment. However, testing in the host environment first allows the time-consuming target testing to be reduced to confirming that the tests remain sound in the target environment.
  • Tier 5's target-based verification represents the on-target testing element of formal verification. This frequently consists of a simple confirmation that the host-based verification performed previously can be duplicated in the target environment, although some tests may only be applicable in that environment itself.
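
To make the tiers concrete, a single thread through a hypothetical RTM might read as follows; every identifier here is invented purely for illustration.

    Tier 1 requirement:   REQ-042    "The alarm shall sound within 2 s of an over-temperature condition."
    Tier 2 design item:   DES-017    Temperature-monitor task, 100-ms polling cycle
    Tier 3 source code:   monitor.c  check_over_temperature()
    Tier 4 host test:     TC-103     Unit-test vectors at, above, and below the threshold
    Tier 5 target test:   TC-103-T   TC-103 repeated on target hardware with timing verified

Multiply that single thread by every requirement in the system and the case for tool support in constructing and maintaining the matrix becomes obvious.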

Where reliability is paramount and budgets permit, the static prediction of dynamic behavior with its “full range” data sets would undoubtedly provide a complementary tool for such an approach. However, dynamic analysis would remain key to the process.

Which approaches should you choose?
Each of the five key test tool attributes has merit.

There is a sound argument that supports traditional formal methods, but the development overheads of such an approach and the difficulty involved in applying it retrospectively to existing code limit its usefulness to the highly safety-critical market.

Automated code review checks for adherence to coding standards and is likely to be useful in almost all development environments.

Of the remaining approaches, dynamic-analysis techniques provide a test environment much more representative of the final application than the static prediction of dynamic behavior does, and they offer the means to provide functional testing.

Where requirements traceability is key within a managed and controlled development environment, the progressive nature of automated code review followed by unit, integration, and system test aligns well within the overall tiered concept of most modern standards. It also fulfills the frequent requirement or recommendation to exercise the code in its target environment.

Where robustness testing is considered desirable and justified, it can be provided by means of the automatic definition of unit-test vectors, or through the use of the static prediction of dynamic behavior. Each of these techniques has its own merits, with the former exercising code in its target environment, and the latter providing a means to exercise the full data set rather than discrete test vectors. Where budgetary constraints permit, these mutually exclusive benefits could justify the application of both techniques. Otherwise, the multifunctional nature of the unit-test tool makes it a cost effective approach.

If there is a secondary desire to evolve corporate processes toward current best practice, both automated code review and dynamic-analysis techniques have a key role to play in requirements management and traceability, with the latter being essential to show that the code meets its functional objectives.

If the aim is to find a pragmatic solution to cut down on the number of issues displayed by a problem application in the field, each of the robustness techniques (that is, the static prediction of dynamic behavior and the automatic definition of unit-test vectors) has the potential to isolate tricky problems in an efficient manner.

Mark Pitchford is a field applications engineer specializing in software test with LDRA. He has over 25 years' experience in software development for engineering applications, the majority of which have involved the extension of existing code bases.
