Coverity: open source & proprietary code better than average

Coverity has just made its 2012 Coverity Scan Open Source Report available online, detailing its analysis of more than 450 million lines of software code through the Coverity Scan service, the largest sample size the report has studied to date.

The service began as the largest public/private sector research project focused on open source software integrity; it was initiated by Coverity and the U.S. Department of Homeland Security in 2006 and is now managed by Coverity alone.

Among the key findings in the report is that code quality for open source software continues to mirror that of proprietary software, and both continue to surpass the accepted industry standard for good software quality.

Coverity’s analysis found an average defect density of .69 for open source software projects that leverage the Coverity Scan service, and an average defect density of .68 for proprietary code developed by Coverity enterprise customers. (Defect density is defined as the number of defects per 1,000 lines of software code and is a commonly used measurement for software quality.)
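The arithmetic behind that metric can be sketched in a few lines; the function name here is ours, and the example figures echo the report's 2012 Linux scan (roughly 7.4 million lines at a density of .66).

```python
# Minimal sketch of the defect-density metric defined above:
# defects per 1,000 lines of software code.

def defect_density(defects: int, lines_of_code: int) -> float:
    """Return defects per 1,000 lines of code."""
    return defects / (lines_of_code / 1000)

# ~7.4 million lines at a density of .66 implies roughly 4,884 defects:
print(round(defect_density(4884, 7_400_000), 2))  # → 0.66
```

A project sitting exactly at the industry benchmark would score 1.0, i.e. one defect per thousand lines.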

Both figures beat the accepted industry standard defect density for good-quality software, which is 1.0.

According to Jennifer Johnson, Chief Marketing Officer for Coverity, this marks the second consecutive year that both open source code and proprietary code scanned by Coverity have achieved a defect density below 1.0.

As projects surpass one million lines of code, she said, there’s a direct correlation between size and quality for proprietary projects, and an inverse correlation for open source projects.

Proprietary code analyzed had an average defect density of .98 for projects between 500,000 and 1,000,000 lines of code. For projects with more than one million lines of code, defect density decreased to .66, suggesting that proprietary projects generally see software quality improve as they exceed that size.

Open source projects between 500,000 and 1,000,000 lines of code, however, had an average defect density of .44, a figure that rose to .75 for open source projects with more than one million lines of code, marking a decline in software quality as projects grow larger.

“This discrepancy can be attributed to differing dynamics within open source and proprietary development teams,” she said, “as well as the point at which these teams implement formalized development testing processes.”

Linux remains a benchmark for quality. Since the original Coverity Scan report in 2008, scanned versions of Linux have consistently achieved a defect density of less than 1.0, and versions scanned in 2011 and 2012 demonstrated a defect density below .7.

In 2011, Coverity scanned more than 6.8 million lines of Linux code and found a defect density of .62. In 2012, Coverity scanned more than 7.4 million lines of Linux code and found a defect density of .66. At the time of this report, Coverity scanned 7.6 million lines of code in Linux 3.8 and found a defect density of .59.

But, said Johnson, high-risk defects persist: 36 percent of the defects fixed in the 2012 scan were classified as “high-risk,” meaning they could pose a considerable threat to overall software quality and security if left undetected.

Resource leaks, memory corruption and illegal memory access, all of which are considered difficult to detect without automated code analysis, were the most common high-risk defects identified in the report.
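To make the resource-leak category concrete, here is a hypothetical sketch (our own, not from the report) of the kind of defect a static analyzer flags: a handle acquired on one path but never released on every path.

```python
# Hypothetical illustration of a resource leak, one of the high-risk
# defect classes named in the report.

def count_lines_leaky(path: str) -> int:
    f = open(path)              # handle acquired...
    return sum(1 for _ in f)    # ...but never explicitly closed:
                                # the file object leaks if garbage
                                # collection is delayed or disabled

def count_lines(path: str) -> int:
    # The fix: a context manager guarantees the handle is closed
    # on every exit path, including exceptions.
    with open(path) as f:
        return sum(1 for _ in f)
```

Leaks like this are easy to miss in code review, especially when early returns or exceptions multiply the exit paths, which is why the report calls them difficult to detect without automated analysis.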

“This year’s report had one overarching conclusion that transcended all others: development testing is no longer a nice-to-have, it’s a must-have,” said Johnson. “The increasing number of open source and commercial projects that have embraced static analysis have raised the bar for the entire industry.

“As we see year-in and year-out, high-risk defects continue to plague organizations; simply put, if you are not doing development testing, you’re at a competitive disadvantage.”

While static analysis has long been cited for its potential to improve code quality, there have been two significant barriers to its adoption by development organizations: high false positive rates and a lack of actionable guidance to help developers easily fix defects.

The 2012 report noted that more than 21,000 defects were fixed in open source code: more than the combined total of defects fixed from 2008 through 2011.

To participate in the survey, developers can register open source projects with Coverity Scan online.
