Verifying embedded software supply chains


Software typically comes from diverse sources and is growing in complexity. Having a plan helps ensure that the pieces fit together.

Industrial manufacturers depend on streamlined supply chains to build products. Retailers live and die by how well they run theirs. Likewise, manufacturers of electronic products, such as mobile handsets, consumer electronics, communications equipment, and other intelligent devices, rely heavily on distributed developers, outsourcers, software vendors, and even open source code. So why not treat this software ecosystem as an embedded software supply chain and try to optimize it?

As software content increases in embedded devices, companies employ more engineering resources across the globe, license more third-party software, leverage open source code if possible, and outsource more software development and testing. With all the available resources of the software supply chain, why are engineering organizations still struggling to deliver high-quality embedded software on time?

One reason is the complexity of managing, integrating, and verifying software from so many diverse sources, each with its own methods and practices. When software comes from many different sources, dependencies and interaction issues multiply rapidly, resulting in unforeseen incompatibilities and delays.

Another major factor is a fundamental shift occurring in embedded software. The software within many embedded devices is no longer a static, fixed-function environment, but rather a complex software platform upon which third parties develop and deliver additional content and applications. This shift has significant long-term implications for all participants in the software supply chain as software engineering teams attempt to test and verify the quality and integrity of integrated software.

Clearly, current methods and practices aren't scaling in the face of these new complexities. In response, software organizations must implement new processes, best practices, and technologies to optimize the software supply chain and more effectively support large-scale development of embedded software platforms.

Software complexity rises
It should come as no surprise that embedded software is getting more complex. For example, our cell phones not only make calls, but send and receive text messages, surf the Internet, take photographs, and play music. While these devices pack more hardware into a given space to enable these features, they are fundamentally driven and operated by software. We've gone from a few thousand lines of code in mobile phones 10 years ago to millions of lines of code today, running on full-fledged, 32-bit, multitasking embedded operating systems with memory protection.

Another factor is that software functionality must often be ported and integrated across many target platforms and products. Software components no longer run on just one device, or even a single family of closely related devices. For example, the same multimedia messaging service component may be integrated into all of a company's mobile devices, even those that use different embedded operating systems or underlying software platforms. As a result, software teams staff more engineers, leverage development resources in more locations around the world, outsource more parts of the development effort, integrate more major functional components from third-party software vendors, and take advantage of open-source software.

In other words, code comes from many different sources and is used on many different targets. The processes and standards under which that code was developed and verified can be radically different. The various software components must be integrated and tested to ensure a consistent quality level before the product can ship.

Old practices break down
Assembling and integrating software from disparate sources adds substantial complexity to development, so it's no wonder that engineering organizations are continually challenged with delivering high-quality embedded software on time. The problem is compounded by old engineering processes and techniques that worked well enough when software projects were smaller and simpler, but don't scale sufficiently for today's bigger and more complex efforts.

From numerous customer engagements, we've observed some consistent patterns in the embedded software development and verification processes, whether the team has 30 engineers or 300+. Embedded software projects generally progress smoothly through the implementation phase, which is relatively brief. However, projects then enter long and unpredictable integration and system test phases, with disproportionate time and resources allocated to these late stages.

There's a lot of “churn” during these stages where, over numerous cycles, engineers triage, fix, and verify large numbers of defects and resubmit code to be retested, only to uncover other errors and newly broken functionality. Because there's little visibility into software quality until that time, when the software is first extensively tested as an integrated whole, system test can drag on seemingly forever. Amazingly, we've seen integration and system test take five to ten times as long as implementation.

Development teams also use different, often nonrepeatable methods for testing software. By building their own ad hoc test harnesses, they create tests that can't be reused outside of that functional group and that specific software component. Further, tests from multiple development teams across the software organization can't be aggregated into a comprehensive regression suite. Instead, subsystem- and component-level tests are rewritten, or not run at all, once the system is integrated.

This has a number of implications for product quality and availability. First, product delivery dates often aren't met, with the delays coming at the eleventh hour. Second, building and integrating software components becomes a black art, not well understood by anyone not directly involved in the process. If the process depends on the specialized knowledge of one or two people, it's prone to nonrepeatability and breakdown.

The end result is poor quality, or worse, unknown quality for the final product. There's simply poor visibility and predictability of software health and completeness. If tests are run manually and can't be easily repeated, in-depth testing isn't possible. Too many defects are found in the later phases of testing and verification, as illustrated by Figure 1. This invariably causes costly rework late in the engineering lifecycle, and often pushes back the product release date.


The paradigm shifts
Why is this happening? Today, an embedded software organization isn't just overseeing design and development internally, but also managing a software ecosystem that includes its own distributed teams of engineers along with external suppliers. Throw in open-source and legacy code, and you have software components coming from different directions and sources that need to work together, often across multiple hardware and operating-system platforms.

Today, getting a code base developed and integrated into an embedded device is only one among many critical tasks for the engineering team. Other key activities include managing dispersed technical teams, integrating code developed using different standards and tools, managing partner and vendor relations, and overseeing multiple product releases.

In effect, this ecosystem has become a software supply chain where different software components are “sourced” and then assembled, integrated, and verified to build a product. This introduces challenges that are typical of manufacturing and supply chain management, such as the dependencies on deliverables from vendors and their impact on the development schedule, different or unknown quality standards and practices, and cultural differences, not only between engineers in different countries, but also between commercial and open-source philosophies. These challenges mean that new processes and best practices are needed to effectively manage the software suppliers, both internal and external.

Complexity is compounded still further because an embedded system is now more than an application or two; it's a platform upon which both you and others build custom software and even hardware to target much more specific uses than in the past. Embedded devices no longer have static fixed-function software environments; they're now software application platforms upon which third parties build additional applications and content, often with extreme time-to-market pressures.

But working with a platform is substantially different from just writing applications. With an application platform, you never know exactly what applications, content, or combinations may be present. If you want to control and deliver a high-quality user experience, how do you validate the software when it's a sophisticated, rich application platform?

All of these changes in engineering and product development mean that software verification strategies must evolve to keep up. Relying on a product test team late in the development cycle, deploying traditional “black box” testing techniques, isn't good enough anymore.

Verifying the supply chain
A new model is required to significantly optimize the delivery of highly integrated, complex embedded software. To address the urgent challenges observed in the integration and system test phases, key objectives of the new model would be to validate software components earlier, reuse and integrate code more effectively, automate more test processes, and increase visibility into software quality earlier in the development cycle.

Using principles of supply-chain management, it's our contention that all software suppliers, both internal and external, should create and deliver test assets with their software. Test assets are fully automated and reusable tests that have lasting value because they can be leveraged throughout the development lifecycle, in derivative projects, and by other functional teams to integrate, triage, and verify software. Test assets also provide detailed and ongoing visibility into the health of the software deliverables.

To support the creation and leveraging of test assets, a software verification framework and new processes are required. The verification framework provides the common infrastructure to effectively reuse, automate and execute tests from all developers, and support test management and reporting. It also must facilitate different types of testing, such as unit level and API-level testing. Further, the verification framework must scale well beyond a single engineering group and be able to aggregate test assets from both internal engineering teams as well as external software suppliers.

New processes would then leverage, reuse, and apply automated test assets as much as possible to continually verify software at various stages and levels—by individual developers, by functional teams, on integrated engineering builds, hourly, daily, and so forth. With automated reporting, the processes would also provide comprehensive, quantitative metrics and visibility into quality and completeness at various stages of development.

This new approach also requires a cultural change. Engineering teams (suppliers) must change the way they approach embedded software development by treating applications and even individual software components as products in and of themselves. In essence, developers need to ask what they can do to maximize the quality, reusability, portability, and ease of integration for their software being supplied. Engineers tend to create tests with no expectation that these tests would or should be reused by other teams or downstream processes. Developers' test code is generally “throw-away” code, rarely reused by others, and not easily automated as part of an aggregated portfolio of tests.

With a unified verification approach, however, engineers should consider up front how a component needs to be tested and integrated. More importantly, developers should now create tests that are reusable, automated, and have lasting value.

To begin implementing this new verification strategy, start with the most important suppliers, the internal development teams. Starting with one functional team, developers can work together to define the tools, coding standards, test requirements, test asset management, gating conditions, policies, and other aspects of defining a common test framework and process. The new test framework and processes help enable developers to reuse, share, and automate tests, and leverage opportunities to aggregate tests and automatically execute them for integration or regression testing.

With the initial deployment, keep in mind that metrics are an essential part of establishing and working with new processes. Much of the success or failure of processes depends on the culture of the engineering organization, and no single prescription works for all. Only by measuring can you identify what can be improved.

What should you measure? The best approach is to look for quantifiable factors that have a large impact on the ability to deliver high-quality on-time software. These include rates of defects found and fixed, when defects are identified, how many test cases are run, code coverage statistics, and the time and resources allocated to testing or defect resolution activities. Based on these results, you can adjust the process and address specific areas of concern.

Once internal processes are producing satisfactory results, the next step is to align external suppliers who deliver major software functionality that needs to be integrated and verified. By requiring suppliers to use the same verification platform, standards, and processes, they can deliver reusable test assets that can be automated along with your own. This lets your engineering teams readily accept and integrate suppliers' deliverables and updates, because both sides are using the same automated tools and techniques.

Unifying the verification strategy for the software supply chain within your own engineering organization, as well as aligning external suppliers around the world, is a large undertaking. However, the operational benefits can be substantial. Engineering organizations can eliminate most manual testing, automate integration, efficiently support multiple product releases, optimize software reuse, and effectively manage their software suppliers.

The value in doing so can be observed in the software development cycle itself. As shown in Figure 2, the integration and verification phases using a unified and automated verification approach can find defects that typically aren't detected until later in the development cycle.


While a unified, automated verification and test strategy should be implemented over time in well-planned phases, the benefits can be immediate. By finding software defects earlier, when they can be addressed at lower cost, engineering teams can regain control of the product schedule and ship a higher-quality product on time. And that improves customer satisfaction, adding still more to the value of adopting software supply-chain verification processes.

Mark Underseth has over 20 years of experience in the design of embedded communication devices, including mobile satellite, automotive, and digital cellular products. Mark has a BA and MS in computer science, has been awarded three patents, and has five patents pending in the area of embedded systems. He is CEO of S2 Technologies and can be reached at

Reader Response

While the article addresses the problem of diverse software origins, it fails to show a possible solution. The “unified verification framework” can hardly be imposed upon open-source, or most 3rd party, vendors of software components. Could a standard for testability of software components do the trick?

-Bernhard Kockoth
Karlsruhe, Germany
