
Changing hardware requires more aggressive testing

Today, the advance of multicore hardware has presented developers with a new conundrum: writing code specifically for the multithreaded applications that populate the latest gadgets. Multithreaded development has grown steadily over the years, and some experienced software developers, accustomed to working in an object-oriented and structured manner, assume that going parallel shouldn't be too big a problem. In practice, it requires a careful examination of where the problems actually come from, both in individual applications and in general.

The focus on multithreaded development is yet another contributor to the growing complexity that developers face today, along with outsourced code, legacy code, standards requirements, and basic market pressures for more features in less time. In combination, these factors create an extremely challenging software development environment for companies dependent on software to run their businesses.

The core problem with multithreaded programming is the sharing of data between concurrently running threads. Even the most structured, well-designed object-oriented program, if it was designed to run single-threaded, will encounter issues when it goes parallel. Coordinating the running threads, and the data they share, introduces a whole class of problems that simply didn't exist in the single-threaded world.
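To make the data-sharing problem concrete, here is a minimal C sketch (the pthread-based worker and the iteration count are illustrative, not drawn from this article) of the classic lost-update race: two threads increment a shared counter without synchronization, so the final total usually comes up short.

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

static long counter = 0;                 /* shared data, no protection */

/* counter++ is a read-modify-write sequence; two threads can interleave
 * it and silently lose updates. */
static void *racy_worker(void *arg)
{
    for (int i = 0; i < ITERATIONS; i++)
        counter++;                       /* data race */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, racy_worker, NULL);
    pthread_create(&t2, NULL, racy_worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Expected 2000000; typically prints less because updates are lost. */
    printf("total = %ld\n", counter);
    return 0;
}

Wrapping the increment in a pthread_mutex_lock/pthread_mutex_unlock pair restores correctness, but only if every access to the shared data is guarded consistently, a discipline a single-threaded design never had to enforce.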

According to an IDC study, “Improving Software Quality for Business Agility,” eliminating all defects before software goes into operation would yield a 32% cost savings. Companies must weigh the cost and effort such a feat would require when it comes to multithreaded application defects.

The “zero-defect strategy” that companies aim for sets ambitious goals for defect detection, even though the goal is never reached completely. It is well suited to multithreaded application testing, where the added complexity is best addressed by tools with viable concurrency-defect solutions. Tying deadlock and other concurrency issues aggressively to a zero-defect strategy helps recover the time lost when a company has to adapt to the new issues that multicore brings.
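As an illustration of the deadlock class of defect mentioned above, the sketch below (again in C, with illustrative names) shows the classic lock-ordering mistake that concurrency-analysis tools look for: two threads take the same pair of mutexes in opposite order, so each can end up waiting forever for the lock the other holds.

#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Thread 1 acquires A, then B. */
static void *thread1(void *arg)
{
    pthread_mutex_lock(&lock_a);
    /* If thread2 takes lock_b right here, neither thread can proceed. */
    pthread_mutex_lock(&lock_b);

    /* ... work on both shared resources ... */

    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

/* Thread 2 acquires B, then A (the opposite order). */
static void *thread2(void *arg)
{
    pthread_mutex_lock(&lock_b);
    pthread_mutex_lock(&lock_a);         /* potential deadlock */

    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

The fix is a consistent lock-acquisition order across the codebase, and that kind of global ordering invariant is exactly what automated concurrency checking can verify more reliably than ad hoc testing.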

Some companies are better than others at internalizing their own quality initiatives. Nevertheless, embedded software managers, and in turn their developers, need to rely on more aggressive testing as hardware continues to change. Multicore is only the beginning of a series of such changes, and the time developers have to adapt to each one is shrinking rapidly. That means companies can expect more coding defects from developers of the same caliber, leaving more aggressive testing as the tactic that keeps software integrity in balance.

Ben Chelf is the CTO and co-founder of Coverity. Previously, Chelf was a founding member of the Stanford Computer Science Laboratory team that architected and developed Coverity's underlying technology. Ben is one of the world's leading experts on the commercial use of static source code analysis and works with organizations such as the U.S. Department of Homeland Security, Cisco, Symantec, and IBM to improve the security and quality of software. He holds MS and BS degrees in Computer Science from Stanford University. Chelf can be reached at bchelf@coverity.com.
