Automated tools streamline software test and certification
The right tools for the job

The LDRA tool suite is a collection of point tools designed to simplify setting and tracing requirements, developing code, finding and correcting errors, and achieving certification. The tool suite mitigates risk by creating:
* Visibility between groups (e.g., developers and quality assurance) with established project priorities
* The ability to test early in the development process, identifying performance issues, security bugs, or inconsistencies before system integration and reducing the impact of development flaws on the overall program
* Traceability from source code to high-level requirements in order to verify compliance and ensure coverage (Figure 2 below).
Figure 2: LDRA’s TBmanager allows developers to demonstrate how assets/artifacts satisfy objectives.
Different tools in the suite perform different tasks, such as importing requirements (TBreq), running analysis (TBvision), and unit testing (TBrun). Although the tools can be run individually, they are more easily and economically accessed through the TBmanager GUI.
Designed specifically for safety-critical applications, the LDRA tool suite provides a comprehensive set of solutions to automate and simplify the development and certification of software and products.
The tool suite allows users to start with source documents to extract requirements, create traceability down to the code level, then create verification tasks and test cases that identify errors and demonstrate that the code satisfies the requirements. A discussion of the full capabilities of the tool suite is far beyond the scope of this article, but let's look at a couple of examples that highlight the possibilities.
The process starts by establishing requirements through TBmanager. TBmanager leverages the TBreq product to capture high-level, derived, and low-level requirements from any management tool and source, while providing an interface for traceability, test-case generation, and requirements verification. Whether users store requirements in text documents, spreadsheets, or requirements management tools, they can use the software to parse requirements and build the RTM, then simply drag a file or function onto a requirement to create a linkage between these two artifacts.
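The drag-and-drop linkage described above amounts to building a requirements traceability matrix (RTM) that maps each requirement to the code artifacts satisfying it. The sketch below illustrates the idea with invented requirement IDs and function names; the real RTM is built and maintained inside TBmanager, not by hand.

```python
# Minimal sketch of a requirements traceability matrix (RTM).
# All requirement IDs and code artifacts here are hypothetical examples.

rtm = {
    "HLR-001": [],                         # high-level requirement, not yet linked
    "LLR-014": ["motor_ctl.c:set_speed"],  # low-level requirement linked to a function
}

def link(rtm, req_id, artifact):
    """Record that a code artifact (file:function) satisfies a requirement."""
    rtm.setdefault(req_id, []).append(artifact)

def unlinked(rtm):
    """Requirements with no linked artifact -- a coverage gap to resolve."""
    return [req for req, artifacts in rtm.items() if not artifacts]

link(rtm, "HLR-001", "motor_ctl.c:init_motor")
print(unlinked(rtm))  # an empty list means every requirement is traced
```

A query like `unlinked` is what makes the RTM useful for coverage: any requirement with no linked artifact is a visible gap before certification, not a surprise afterward.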
The user interface allows developers to pull in their code base simply by manually pointing to the files or importing using project or make files. The software can perform static and dynamic analysis based on over 800 rules sorted into industry-specific coding standards.
Users can select and apply those rules to their code, or create a customized standard that fits their internal process simply by checking boxes. For 10-year-old legacy code, for example, developers can establish minimal guidelines designed to incrementally clean up the code, while holding new projects to a stricter interpretation.
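Tailoring a standard per project can be pictured as selecting subsets of a rule catalogue. The rule names and project profiles below are invented for illustration; in the tool suite this selection is done by checking boxes in the GUI against industry standards such as MISRA.

```python
# Hypothetical rule catalogue; real suites ship hundreds of rules
# grouped by industry coding standard.
ALL_RULES = {"no-goto", "no-recursion", "single-return", "bounded-loops"}

# A legacy project enforces only a minimal cleanup subset...
legacy_profile = {"no-goto"}
# ...while a new project is held to the full standard.
new_profile = set(ALL_RULES)

def active_rules(profile):
    """Rules actually applied: the profile intersected with the known catalogue."""
    return profile & ALL_RULES

# The legacy profile is, by construction, a subset of the strict one.
assert active_rules(legacy_profile) <= active_rules(new_profile)
```

The design point is that both profiles draw from one catalogue, so a legacy project can be ratcheted toward the full standard over time simply by enabling more rules.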
Once the customer sets up the standard, they can run static analysis on the code at the system level to generate a list of violations against the standard. The resulting code review report highlights every violation on a line-by-line basis, identifying the exact cause of each error. The screen allows users to view the text of the standard to better understand a particular error (Figure 3 below). Clicking on the specific violation (circled) takes the user directly to the code, where they can fix the source of the error and then rerun the analysis to verify that the changes did not introduce additional problems or violations.
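Conceptually, the code review report is the output of a line-by-line scan of the source against each enabled rule. The sketch below checks a single invented rule (a ban on `goto`) and reports line number, rule, and offending text in the spirit of such a report; it is in no way LDRA's analysis engine.

```python
def check_source(lines, banned=("goto",)):
    """Scan source lines; return (line_number, token, text) for each violation."""
    violations = []
    for n, line in enumerate(lines, start=1):
        for token in banned:
            if token in line.split():
                violations.append((n, token, line.strip()))
    return violations

# A small C fragment as input; line 2 violates the hypothetical no-goto rule.
source = [
    "int f(int x) {",
    "    if (x < 0) goto fail;",
    "    return x;",
    "fail:",
    "    return -1;",
    "}",
]
for n, rule, text in check_source(source):
    print(f"line {n}: banned token '{rule}' in: {text}")
```

Because each finding carries its line number, the report maps directly back to the source, which is what makes the click-through-to-code workflow described above possible.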
Figure 3: Analysis displayed in TBvision reveals the source of the error for a given line of code. Users can view the standard to understand the problems or click through to the code itself to correct the error.
The static analysis report also provides a useful artifact for demonstrating compliance in peer review meetings, allowing the discussion to focus on what the code is doing rather than on the standard itself.
Static analysis also generates complexity metrics that provide a means of identifying, and hence reducing, the inherent complexity and risk in the system. A flow graph, for example, shows the branches between statements, allowing software engineers to easily evaluate the functional complexity of the code (Figure 4 below).
Figure 4: In the tool suite flow graph, nodes represent blocks of code and lines represent the branch points between them. Such graphical representations are very useful for providing a quick overview of function complexity.
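A flow graph of this kind also yields the classic McCabe cyclomatic complexity metric, V(G) = E − N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. A minimal computation, with an invented graph standing in for a real function:

```python
def cyclomatic_complexity(nodes, edges, components=1):
    """McCabe metric: V(G) = E - N + 2P."""
    return len(edges) - len(nodes) + 2 * components

# Flow graph of a hypothetical function with a single if/else branch:
# entry -> cond, cond -> then, cond -> else, then -> exit, else -> exit
nodes = ["entry", "cond", "then", "else", "exit"]
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(nodes, edges))  # → 2
```

A straight-line function scores 1; each decision point adds one, so the single branch above yields 2. The metric thus gives a quick numeric proxy for the visual complexity the flow graph displays.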
Static analysis also produces quality reports that show where the software exceeded the limits set by the standard. These results help companies monitor code quality and maintain consistent structure-based coding practices.
Figure 5: Quality report generated by static analysis shows areas in which the software exceeded the quality standard. These results can be used to establish and maintain consistent structure-based coding practices.
Quality metrics can be presented in different forms. A user could, for example, specify lower and upper limits of complexity. These kinds of tools can also be used in combination with language subsets and style guides to deliver code that is more readily understood, tested, and maintained.
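Applying lower and upper complexity limits can be expressed as a simple gate over per-function metrics. The thresholds (2 and 10) and function names below are illustrative choices, not values prescribed by any standard or by the tool suite:

```python
def within_limits(complexity, lower=2, upper=10):
    """True if a function's cyclomatic complexity falls in the accepted band.

    A value below `lower` may signal over-fragmented code; a value above
    `upper`, code that is hard to test and maintain. Both bounds are
    project-specific choices.
    """
    return lower <= complexity <= upper

# Hypothetical per-function metrics from a static analysis run:
metrics = {"init_motor": 3, "set_speed": 14, "get_status": 1}
flagged = [f for f, c in metrics.items() if not within_limits(c)]
print(flagged)  # → ['set_speed', 'get_status']
```

Flagging both ends of the range reflects the point made above: the goal is not minimal complexity per function but a consistent, testable structure across the code base.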

