
Code complexity for embedded software makers sure has changed

Today, maintaining high software integrity requires breaking down the new software quality problems companies are facing and determining where the additional complexity stems from. The goal for software management is to assess what new software quality measures are being taken to combat the increasingly prohibitive cost of software defects.

According to an IDC study on Improving Software Quality for Business Agility, the problems with software quality are caused by growing code complexity, off-shoring, outsourcing, obsolete code, reliance on open-source code, and the increased use of multi-threading in applications. Code complexity has been on the rise since the days of Charles Babbage, or at least of Konrad Zuse, which prompts a common question: why has no one found an answer to such an old problem?
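The multi-threading point in that list is worth making concrete. The sketch below is a hypothetical illustration, not an example from the IDC study: two threads increment a shared counter without synchronization, the kind of silent defect that multiplies as applications take on more concurrency.

```c
/* Hypothetical illustration: a classic data race that is easy to
 * introduce as applications add more threads. Two workers increment a
 * shared counter with no lock, so increments are lost.
 * Build with: gcc -pthread race.c */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                 /* shared, unprotected state */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                       /* unsynchronized read-modify-write */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Often prints less than 2000000 because updates collide. */
    printf("counter = %ld\n", counter);
    return 0;
}
```

The program compiles and runs cleanly, which is precisely the problem: the defect only shows up as intermittently wrong results.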

The reality is that “finding an answer” to software complexity can't be done, simply because software complexity is not a finite problem. Several factors differentiate how code complexity is increasing now (in 2008-2010) versus earlier challenges. Today, the technology industry has embraced numerous technological innovations that speed up applications and increase the performance of the World Wide Web. Riding shotgun to these advancements, however, have been a number of changes that intensify complexity: increased compliance regulations, stricter language standards, and new development paradigms such as SOA, Web 2.0, SaaS models, and cloud computing. Each of these changes brings its own design considerations and broadens both the complexity challenges and the security concerns.

Consider the fact that software is so much more prevalent now than it was even 10 years ago: mobile computing, the Internet, airport kiosks, and the list goes on. Demand for software is skyrocketing, so vendors respond by outsourcing, internationalizing, and leveraging existing frameworks in their development to stay competitive in the constant time-to-market wars. Almost every business must now be concerned with software development in some way. This was much less true in the 20th century.

Likewise, geographically distributed development is not new; it has operated on a large scale since the 1970s. The same is true for outsourcing. Why, then, do these practices now create such big problems, even though they benefit the industry?

Adoption of complex software sourcing has scaled exponentially as globalization has swept through technology and business organizations. New development sourcing opportunities became so attractive to the bottom line that organizations were apt to overlook how complexity would affect the software development process in the long run. Methods of intra-team communication have not scaled with these organizations, making coordination among the different silos of the software production cycle more difficult and driving complexity to levels not seen previously.

We must also consider the volume of code being produced. In the 1970s, development projects contained thousands to hundreds of thousands of lines of code, with the exception of extreme software development projects (e.g., AT&T's phone switch). These days, it is not unusual for projects to contain millions to tens of millions of lines of code; today's luxury car, for example, runs on about 10 million lines of code. Coordinating development for code bases of this size is simply a different problem by virtue of its scale.

Changes in the development process, in resources, and in the increasing density of embedded code now demand a broader understanding and skill set from developers. They also require a reshuffling of development approaches to combat expanding complexity. Fortunately, emerging development technologies such as static analysis, application architecture analysis, and software readiness management are helping developers manage the complexity they face and increasing the efficiency of embedded software development.
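As a rough illustration of what static analysis contributes (a generic sketch, not tied to the article or to any particular tool), the short C function below packs in the kinds of path-sensitive defects such tools typically report: a pointer used before its NULL check, an unchecked malloc return, and a memory leak on an early-return path.

```c
/* Illustrative only: defects of the sort a static analyzer flags without
 * ever running the code. */
#include <stdlib.h>
#include <string.h>

char *copy_message(const char *src)
{
    char *buf = malloc(strlen(src) + 1); /* src dereferenced before the check below */
    if (src == NULL)
        return NULL;                     /* leaks buf on this path */
    strcpy(buf, src);                    /* buf may be NULL if malloc failed */
    return buf;
}
```

Defects like these rarely surface in routine testing because they sit on error paths, which is exactly why analyzing every path in source code scales better than trying to exercise each one by hand.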

Ben Chelf is the CTO and co-founder of Coverity. Previously, Chelf was a founding member of the Stanford Computer Science Laboratory team that architected and developed Coverity's underlying technology. Ben is an expert on the commercial use of static source-code analysis and works with organizations such as the U.S. Department of Homeland Security, Cisco, Symantec, and IBM to improve the security and quality of software. He holds MS and BS degrees in Computer Science from Stanford University. Chelf can be reached at .
