Integrating Quality Models and Static Analysis for comprehensive assessment

To manage software costs effectively, it is essential to assess the quality of software. For this, two major ingredients are available today:

(1) Code analysis tools, which provide a large range of metrics that can be used as indicators of the quality of a software product. However, these metrics focus on very specific aspects of the source code, and it is therefore difficult to use them for obtaining a comprehensive overview of the overall quality of a software system.

(2) Quality models such as ISO 25010 and others, which define high-level quality attributes that are commonly used to characterize the quality of software. However, these quality models are too abstract to be operationally useful for the quality assessment of a software system.

Aggregating the results of heterogeneous static code analysis tools into an overall quality assessment of a software product remains a challenge. For tools that address this challenge, adequate user assistance is required for defining meaningful and comprehensible aggregation specifications.

Especially for rule-based static code analysis tools such as FindBugs and PMD, which produce rule violation messages associated with code locations (which we call findings) rather than metric numbers, little work exists on how to aggregate the results.
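To make the notion of a finding concrete, the following minimal Java sketch shows the kind of information such a rule violation typically carries. The class and field names are illustrative assumptions only; they are not taken from the paper or from either tool's API.

// Hypothetical sketch of a "finding": a rule violation tied to a code location,
// as produced by tools such as FindBugs or PMD. All names are illustrative only.
public final class Finding {
    private final String ruleId;     // e.g. "PMD:EmptyCatchBlock"
    private final String filePath;   // source file the violation was reported in
    private final int line;          // line number of the violation
    private final String message;    // human-readable description from the tool

    public Finding(String ruleId, String filePath, int line, String message) {
        this.ruleId = ruleId;
        this.filePath = filePath;
        this.line = line;
        this.message = message;
    }

    public String getRuleId()   { return ruleId; }
    public String getFilePath() { return filePath; }
    public int getLine()        { return line; }
    public String getMessage()  { return message; }
}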

Besides the simplistic defect density (the number of findings per lines of code), approaches exist that allow arbitrary mathematical expressions to be specified for aggregating the results. However, there is no systematic approach for obtaining meaningful and comprehensible aggregation specifications for rule-based static code analysis tools.
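As an illustration of the simplistic defect density mentioned above, the sketch below computes findings per thousand lines of code. The normalization to kLOC and the method name are assumptions made for the example, not definitions from the paper.

// Illustrative sketch of "defect density": findings per thousand lines of code (kLOC).
public final class DefectDensity {
    public static double perKloc(int findingCount, int linesOfCode) {
        if (linesOfCode <= 0) {
            throw new IllegalArgumentException("linesOfCode must be positive");
        }
        return (findingCount * 1000.0) / linesOfCode;
    }

    public static void main(String[] args) {
        // Example: 42 findings in a 12,500-line system -> 3.36 findings per kLOC
        System.out.println(perKloc(42, 12_500));
    }
}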

An additional challenge is the large number of rules provided by static code analysis tools. Organizing and working with them is not possible without a mechanism to structure and classify them in a comprehensible manner.
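One common way to make a large rule set manageable is to group rule identifiers under named categories, as in the following illustrative sketch. The categories and groupings shown are made up for the example; the paper's actual classification mechanism is defined by its quality meta-model.

import java.util.List;
import java.util.Map;

// Illustrative sketch: grouping analysis rule identifiers under named categories.
public final class RuleCatalog {
    public static final Map<String, List<String>> RULES_BY_CATEGORY = Map.of(
            "Error handling", List.of("PMD:EmptyCatchBlock", "FB:DE_MIGHT_IGNORE"),
            "Naming",         List.of("PMD:ShortVariable", "PMD:ShortMethodName"),
            "Performance",    List.of("FB:SBSC_USE_STRINGBUFFER_CONCATENATION"));

    public static void main(String[] args) {
        RULES_BY_CATEGORY.forEach((category, rules) ->
                System.out.println(category + " -> " + rules));
    }
}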

In this paper we address these problems by defining a quality model based on an explicit meta-model. It makes the modeling information operationally useful by defining how the metrics calculated by tools are aggregated.
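The following hedged sketch illustrates the general idea of such an aggregation: normalized metric values are combined into a score for a higher-level quality attribute. The weighted-sum scheme and all names are assumptions chosen for illustration; the paper's meta-model defines its own aggregation.

import java.util.LinkedHashMap;
import java.util.Map;

// Hedged sketch: rolling up normalized tool metrics into a quality-attribute score.
public final class QualityAttribute {
    private final String name;
    private final Map<String, Double> weightedMetrics = new LinkedHashMap<>();

    public QualityAttribute(String name) { this.name = name; }

    // Attach a normalized metric value (0..1, higher is better) with a weight.
    public void addMetric(String metricName, double normalizedValue, double weight) {
        weightedMetrics.put(metricName, normalizedValue * weight);
    }

    // Aggregate to a single score; here simply the sum of the weighted contributions.
    public double score() {
        return weightedMetrics.values().stream().mapToDouble(Double::doubleValue).sum();
    }

    public static void main(String[] args) {
        QualityAttribute maintainability = new QualityAttribute("Maintainability");
        maintainability.addMetric("comment ratio", 0.8, 0.3);
        maintainability.addMetric("inverse defect density", 0.6, 0.7);
        System.out.println(maintainability.name + ": " + maintainability.score()); // 0.66
    }
}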

Furthermore, we propose a new approach to normalizing the results of rule-based code analysis tools, which uses the information about the structure of the source code contained in the quality model. We evaluated the quality model by providing tool support both for developing quality models and for conducting automatic quality assessments.
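As a rough illustration of what structure-aware normalization can mean, the sketch below relates a rule's findings to the size of the source entities they occur in, rather than counting raw findings. This is an assumption-laden example of the general idea, not the paper's normalization scheme.

// Hedged sketch: normalize a rule's findings by the share of code they affect.
public final class FindingNormalizer {
    // Fraction of the system's lines of code affected by findings of one rule,
    // given the LOC of each entity that contains at least one such finding.
    public static double affectedFraction(int[] affectedEntityLoc, int totalLoc) {
        int affected = 0;
        for (int loc : affectedEntityLoc) {
            affected += loc;
        }
        return totalLoc > 0 ? (double) affected / totalLoc : 0.0;
    }

    public static void main(String[] args) {
        // Two classes of 200 and 350 LOC violate a rule in a 10,000 LOC system.
        System.out.println(affectedFraction(new int[] {200, 350}, 10_000)); // 0.055
    }
}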

Our results indicate that large quality models can be built based on our meta-model. The automatic assessment shows a high correlation with an expert-based ranking.

To read more of this external content, download the complete paper from the author archives at Tuebingen University, Germany.
