
Automated tools streamline software test and certification

In this Product How-To design article, Jared Fry and Shan Bhattacharya of LDRA go into detail on how to use the company’s tool suite to analyze code, trace requirements, identify and fix standards violations, and monitor code coverage, helping teams bring reliable products to market rapidly and effectively.

As software becomes increasingly important in safety-critical medical, automotive, aviation, and industrial systems, the risks presented by coding errors have intensified from merely squandering time to endangering lives, destroying property, and costing millions of dollars. In response, a variety of standards such as IEC 62304 (medical), ISO 26262 (automotive), DO-178C (aviation), EN 50128 (railway) and IEC 61508 (industrial) have been developed to put the focus on code quality and risk mitigation to promote the development of robust, reliable, error-free software.

Requirements fulfillment and bug-free integration of collaborating systems are of paramount importance to the multipart development teams common with complex safety-, mission- and security-critical products. These teams require best-of-breed tools that automate code analysis and software testing.

The LDRA tool suite offers a comprehensive set of capabilities, from static and dynamic analysis to requirements traceability, unit testing, and verification. It automates all stages of the development process, helping vendors verify their software from requirements through model, code, and test to final verification. By focusing on the development process as well as accurate coding, LDRA helps clients ensure a sound process while identifying and eliminating errors early, dramatically reducing platform risk and development cost.

Requirements Traceability
The types of standards discussed above break code development into a methodical, well-controlled process that starts with key elements like certification plans, verification plans, validation plans, and so forth. These documents typically establish objectives, or requirements, that specify the production of various assets and artifacts.

By assets, we mean items principally generated during the design phase, such as system requirements, software requirements, risk and safety documents, source code, etc. By artifacts, we mean items principally generated during the verification and validation of the design, such as test cases, code coverage reports, code analysis reports, etc. These assets and artifacts in turn need to be traced back to the requirements, both to satisfy those objectives and to provide the traceability necessary to confirm the process.

True requirements traceability is a multistep effort that involves linking system requirements to software requirements, software requirements to design requirements, and then tying those design requirements to source code and the associated test cases. Tracing requirements demonstrates compliance with standards, but more importantly, it guards against missing features and bloated software that includes unnecessary (“dead”) code and/or unnecessary complexity. Requirements traceability ensures that the final system does exactly what is specified—nothing more and nothing less.

Traditionally, traceability between requirements and downstream artifacts has been assumed to emerge as a by-product of the development process. In reality, explicit trace links are seldom recorded, and even when requirements are referenced, they may not be traced in a formal manner that proves useful at a later date. Whether or not the links are physically recorded and managed, they still exist, which puts the establishment of a Requirements Traceability Matrix (RTM) at the heart of any project (Figure 1, below). That’s where automated tools can help, both in building and tracking the RTM.



Figure 1: The Requirements Traceability Matrix (RTM) links system requirements through to source code and testing.

The right tools for the job
The LDRA tool suite consists of a collection of point tools designed to simplify the process of setting and tracing requirements, developing code, finding and correcting errors, and achieving certification. The tool suite mitigates risk by creating:

* Visibility between groups (e.g., developers and quality assurance) with established project priorities

* The possibility of testing early in the development process to identify performance issues, security bugs, or inconsistencies before system integration, and to reduce the impact of development flaws on the overall program

* Traceability from source code to high-level requirements in order to verify compliance and ensure coverage (Figure 2, below).

Figure 2: LDRA’s TBmanager allows developers to demonstrate how assets/artifacts satisfy objectives.

Different tools in the suite perform different tasks, such as importing requirements (TBreq), running analysis (TBvision), and unit testing (TBrun). Although the tools can be run individually, they are more easily and economically accessed through the TBmanager interface and its GUI.

Designed specifically for safety-critical applications, the LDRA tool suite provides a comprehensive set of solutions to automate and simplify the development and certification of software and products.

The tool suite allows users to start with source documents to extract requirements, create traceability down to the code level, then create verification tasks and test cases that identify errors and demonstrate that the code satisfies the requirement. A discussion of the full capabilities of the tool suite is far beyond the scope of this article, but let’s look at a couple of examples that highlight the possibilities.

The process starts by establishing requirements through TBmanager. TBmanager leverages the TBreq product to capture high-level, derived, and low-level requirements from any management tool and source, while providing an interface for traceability, test-case generation, and requirements verification. Whether users store requirements in text documents, spreadsheets, or requirements management tools, they can use the software to parse requirements and build the RTM, then simply drag a file or function onto a requirement to create a linkage between these two artifacts.

The user interface allows developers to pull in their code base simply by pointing manually to the files or by importing project or make files. The software can perform static and dynamic analysis based on over 800 rules sorted into industry-specific coding standards.

Users can select and apply those rules to their code or create a customized standard that fits their internal process simply by checking boxes. For 10-year-old legacy code, for example, developers can establish minimal guidelines designed merely to clean up the code, while holding new projects to a stricter interpretation.

Once the standard is set up, users can run static analysis on the code at the system level to generate a list of violations against the standard. The code review report highlights all violations of the coding standard on a line-by-line basis, identifying the exact cause of each error. The screen allows users to view the text of the standard to better understand a particular error (Figure 3, below). Clicking on the specific violation (circled) takes the user directly to the code, where they can fix the source of the error and then rerun the analysis to verify that the changes didn't introduce additional problems or violations.



Figure 3: Analysis displayed in TBvision reveals the source of the error for a given line of code. Users can view the standard to understand the problems or click through to the code itself to correct the error. 

The static analysis report also provides a useful artifact for demonstrating compliance in peer review meetings, allowing the discussion to focus on what the code is doing rather than standards.
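To make this concrete, here is a generic sketch (illustrative code, not actual LDRA output, and the flagged rules are representative rather than LDRA rule numbers) of the kind of construct a coding-standard check typically reports, together with a compliant rework:

/* Hypothetical example of code a coding-standard check would flag. */
#include <stdint.h>

int32_t scale(int32_t input)
{
    int32_t result;
    if (input > 0)
        result = input * 2;   /* flagged: body not enclosed in braces        */
    return result;            /* flagged: 'result' may be read before it is
                                 assigned when input <= 0                    */
}

/* Compliant rework: every path assigns 'result', and braces are explicit. */
int32_t scale_fixed(int32_t input)
{
    int32_t result = 0;
    if (input > 0)
    {
        result = input * 2;
    }
    return result;
}

After a fix like this, rerunning the analysis confirms that the violations have cleared without introducing new ones.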

Static analysis also generates complexity metrics, which provide a means of identifying, and hence reducing, the inherent complexity and risk in the system. A flow graph, for example, shows the branches between statements, allowing software engineers to easily evaluate the functional complexity of the code (Figure 4, below).



Figure 4: In the tool suite flow graph, nodes represent blocks of code and lines represent the branch points between them. Such graphical representations are very useful for providing a quick overview of function complexity.
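As a rough illustration of what drives such graphs (a generic sketch, not taken from any LDRA report), each decision point in a function adds a branching node to its flow graph, and cyclomatic complexity can be estimated as the number of decisions plus one:

#include <stddef.h>

/* Three decision points (the loop condition and two 'if' tests) give this
   function a cyclomatic complexity of 4 (decisions + 1). Each decision
   appears as a branching node in a flow graph of the kind in Figure 4. */
int classify(const int *values, size_t count)
{
    int score = 0;
    for (size_t i = 0; i < count; i++)   /* decision 1 */
    {
        if (values[i] > 100)             /* decision 2 */
        {
            score += 2;
        }
        else if (values[i] > 0)          /* decision 3 */
        {
            score += 1;
        }
    }
    return score;
}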

Static analysis also produces quality reports that show where the software exceeded the quality standard (Figure 5, below). These results assist companies in monitoring code quality and maintaining consistent structure-based coding practices.



Figure 5: Quality report generated by static analysis shows areas in which the software exceeded the quality standard. These results can be used to establish and maintain consistent structure-based coding practices.

Quality metrics can be presented in different forms. A user could, for example, specify lower and upper limits of complexity. These kinds of tools can also be used in combination with language subsets and style guides to deliver code that is more readily understood, tested, and maintained.

The LDRA tool suite also performs control flow analysis, both on the program calling hierarchy and on individual procedures. The rules of structured programming are applied and defects reported. The output of the control flow analysis is a call graph, showing exactly which functions are invoked by which others (Figure 6, below).



Figure 6: Colorized call graph reveals the degree to which statement calls are executed for a given procedure.

Static data flow analysis follows variables through the source code and reports any anomalous use. This is performed at the procedure level and also as part of the system-wide analysis. This powerful technique can detect a number of serious problems, such as variables that are used before they are initialized or an array that is accessed outside of its bounds.
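As a simple illustration (hypothetical code, not drawn from any LDRA report), both kinds of anomaly can hide in just a few lines:

#define BUF_LEN 8

/* Two classic data-flow anomalies of the kind described above. */
int sum_readings(const int readings[BUF_LEN])
{
    int total;                          /* anomaly 1: 'total' is read below
                                           before it is ever assigned       */
    for (int i = 0; i <= BUF_LEN; i++)  /* anomaly 2: '<=' indexes one
                                           element past the end of the array */
    {
        total += readings[i];
    }
    return total;
}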

Static analysis by itself is useful but not sufficient. Developers need to be able to perform black-box and functional testing, as well as dynamic analysis. Of course, this testing needs to be performed against the requirements with test outcomes being fed back through the requirements traceability techniques discussed earlier.

Using the LDRA tool suite, developers can perform dynamic analysis, system testing, and even unit testing. The term “unit” can refer to a single function, a number of functions, a whole file, or even several files. Unit tests can be executed on the host, but preferably also on the target, to ensure that each unit functions as expected. During unit testing, any missing functions need to be stubbed and a harness created in order to run the tests. Manually creating stubs and harnesses, and downloading and executing tests on the target, can be very tedious, but with the right unit testing tool, all of these tasks can be seamlessly automated.
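The sketch below shows, in plain C, roughly what such generated artifacts boil down to. The function names are hypothetical, and a unit testing tool would emit the equivalent stub and harness automatically:

#include <assert.h>

/* Unit under test: depends on read_sensor(), which is not linked in. */
extern int read_sensor(void);

int sensor_ok(void)
{
    int raw = read_sensor();
    return (raw >= 0) && (raw <= 1023);
}

/* Stub standing in for the missing dependency. */
static int stub_value;
int read_sensor(void) { return stub_value; }

/* Minimal harness driving the unit with three test cases. */
int main(void)
{
    stub_value = 512;  assert(sensor_ok() == 1);
    stub_value = -1;   assert(sensor_ok() == 0);
    stub_value = 2000; assert(sensor_ok() == 0);
    return 0;
}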

Structural Coverage Analysis (SCA)
The problem with testing is finding a way to ensure that it is sufficient. The solution is to measure the effectiveness of the testing using Structural Coverage Analysis (SCA). SCA, which uses code coverage metrics, measures the degree to which the source code of a system has been executed during requirements-based testing (Figure 7, below). Through these practices, developers can ensure that code has been implemented to address every system requirement and that the implemented code has been fully tested.



Figure 7: Structural Coverage Analysis (SCA) reporting reveals which parts of the source have been executed and which have not. For cashregister.cpp, for example, analysis shows 100% statement coverage but only 51% branch/decision coverage.

Clicking on a given line in the report allows the user to drill down to the source code itself, which is colorized to show code coverage. The tool suite utilizes the static analysis information to find the branching points and monitor them to determine when a particular block of code has been executed. This can be run on the host, a simulator, or the target itself. This code coverage information can then be mapped to requirements, demonstrating that testing has completely covered a given requirement.
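The gap between the two metrics in Figure 7 is easy to reproduce in a few lines. In this generic sketch (not taken from cashregister.cpp), a single test executes every statement yet leaves one branch of the decision untested:

/* One call, apply_discount(100, 1), yields 100% statement coverage:
   every line runs. But the 'is_member == 0' branch is never taken, so
   branch/decision coverage remains incomplete until a second test adds
   apply_discount(100, 0). */
int apply_discount(int total, int is_member)
{
    if (is_member)
    {
        total -= total / 10;   /* 10% member discount */
    }
    return total;
}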

Selling into safety-critical market sectors increasingly requires certifying software to the appropriate standards. Automated tools like the LDRA tool suite streamline the process, simplifying requirements traceability, structural coverage analysis, and adherence to coding standards while mitigating risk. With the LDRA tool suite’s support for analysis of C, C++, Java, Ada, and assembly languages, design teams can get their products to market faster and more economically while ensuring that their customers will be satisfied and that the product will deliver reliable performance over the long haul.

Jared Fry is a Field Application Engineer for LDRA Ltd. He graduated from Western New Mexico University with degrees in Mathematics and Computer Science. His career began in the defense industry working for Lockheed Martin, where he served as a software engineer on projects ranging from missile and radar systems to training simulations and software testing. With LDRA he leverages this experience as a consultant, assisting clients throughout the development process to produce a quality, certifiable product.

Shan Bhattacharya is a Field Application Engineer for LDRA Ltd. He graduated from Cameron University and began his career in factory automation and robotics. He continued his career with various defense contractors, including Lockheed Martin, where he served as a Lead Engineer and finished his time as a Deputy IPT Lead. Shan has been with LDRA since 2007 and provides consultation for clients in various industries, focusing on requirements management, software certification, and development best practices.
