Is massive parallel regression testing for embedded software a dream or reality?

Victor Reyes, October 18, 2013

In my previous blog, I discussed how, in my view, automotive testing is evolving from a half-physical, half-virtual testing environment, such as hardware-in-the-loop (HIL), to a fully simulation-based environment that uses virtual prototypes to replace the engine control unit (ECU) hardware. The goal is to front-load the creation and validation of the tests, freeing the scarcer and more costly HIL equipment purely to execute them. However, this might be just the "tip of the iceberg" of what a fully virtual HIL (vHIL) environment makes possible. Let me explain why.

Today, automotive companies rely on HIL to perform the bulk of their embedded software testing. HIL testing is a sequential process in which hundreds of thousands of tests run one after another. Running all the tests required for a software "variant" takes days or even weeks to complete. A variant is a particular configuration instance of a software platform that an organization shares across different end products (for example, high/mid/low-end models and country-specific regulations). Hardware/software integration and testing take place for a specific variant, and any bug discovered and fixed during this phase has implications for the entire software platform: a bug fixed in one variant can have negative effects in multiple other configurations. Moreover, once a bug is fixed and the software modified, the testing process has to start again from the beginning.
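To make the "variant" idea concrete, here is a minimal sketch in Python. The platform version, feature names, and values are all hypothetical; the point is only that each variant is the same shared platform code built with a different configuration:

    # Hypothetical illustration of software "variants": one shared
    # platform, many configuration instances. Every name and value
    # below is made up for illustration.
    SHARED_PLATFORM_VERSION = "4.2.0"

    VARIANTS = {
        "high_end_eu":  {"cylinders": 8, "start_stop": True,  "region": "EU"},
        "mid_range_eu": {"cylinders": 6, "start_stop": True,  "region": "EU"},
        "low_end_eu":   {"cylinders": 4, "start_stop": False, "region": "EU"},
        "low_end_us":   {"cylinders": 4, "start_stop": False, "region": "US"},
    }

    # Because the platform code is shared, a fix validated against one
    # variant still has to be re-tested against every other configuration.
    for name, config in VARIANTS.items():
        print(f"variant {name}: platform {SHARED_PLATFORM_VERSION}, config {config}")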

This testing process is obviously becoming a major bottleneck in the automotive software development cycle. Scaling the approach to reduce testing time is not easy, since HIL infrastructure is costly: not just the HIL boxes themselves, but also the maintenance costs (lab space, energy consumption, etc.) associated with hardware labs.

But what if you could massively parallelize this process and cut the time to fully test a software variant from weeks to hours? This is not a dream, as you may think; it is possible today with vHIL technology. Being simulation-based, a vHIL testing environment can be replicated as many times as the computing infrastructure supports. And you don't even need to touch your own computing infrastructure: cloud computing has driven the cost of computational power down to incredible levels while offering high security standards. If you still don't believe me, check out the work done at Hitachi [1, 2]. Hitachi was able to run approximately 700,000 tests in one night (12 hours), using a parallel vHIL environment consisting of 600 simulations hosted in a public cloud, which proves that the technology is ready for this type of use case.
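As a back-of-envelope check on those figures, the short sketch below works out the implied average test time and what the same workload would take sequentially. Only the 700,000 tests, 600 simulations, and 12 hours come from the Hitachi reports; the rest is plain arithmetic, assuming a single sequential bench runs tests at the same speed as one simulator:

    # Back-of-envelope arithmetic on the Hitachi figures: 700,000 tests
    # finished in one 12-hour night on 600 parallel vHIL simulations.
    NUM_TESTS = 700_000
    NUM_SIMULATIONS = 600
    WALL_CLOCK_HOURS = 12

    # Total simulator time spent across the whole farm.
    total_sim_hours = NUM_SIMULATIONS * WALL_CLOCK_HOURS        # 7,200 hours
    avg_seconds_per_test = total_sim_hours * 3600 / NUM_TESTS   # ~37 s per test

    # The same workload run sequentially (assuming one bench matches
    # the speed of one simulator).
    sequential_days = total_sim_hours / 24                      # ~300 days

    print(f"Implied average test duration: {avg_seconds_per_test:.0f} s")
    print(f"Sequential equivalent: ~{sequential_days:.0f} days vs. "
          f"{WALL_CLOCK_HOURS} hours of parallel wall-clock time")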

Thinking outside the box, another opportunity with great potential is to link this type of massively parallel testing with software version-control tools. Today, adding or modifying code in a stable software platform may take days until all dependencies are sorted out. Even then, there is a high risk that a code modification will introduce bugs in another "variant" that are only detected later, during HIL testing, at which point it is hard to trace back to the modification that triggered them. All of these problems would disappear if, before new code were checked in, all tests for all variants could be run overnight, as sketched below. And this applies not just to the high-level application software, but to the complete embedded software stack compiled for the target microcontroller: the exact same binary that will eventually execute in the car.
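Here is a minimal sketch of what such a version-control gate could look like, with the test fan-out parallelized across local processes. The variant names and the run_vhil_suite() helper are hypothetical placeholders; in a real flow that helper would compile the target binary for the variant and dispatch its regression suite to a cloud-hosted simulator farm:

    # Minimal sketch of a pre-check-in regression gate: before new code
    # lands in version control, every variant's full suite runs in
    # parallel. All names here are hypothetical placeholders.
    from concurrent.futures import ProcessPoolExecutor, as_completed

    VARIANTS = ["high_end_eu", "mid_range_eu", "low_end_eu", "low_end_us"]

    def run_vhil_suite(variant: str) -> tuple[str, bool]:
        """Placeholder: build the target binary for this variant and run
        its regression suite on a vHIL simulator; return pass/fail."""
        passed = True  # stand-in result; a real flow would query the farm
        return variant, passed

    def gate_checkin() -> bool:
        """Run all variant suites overnight; accept the check-in only if
        every variant passes."""
        with ProcessPoolExecutor() as pool:
            futures = [pool.submit(run_vhil_suite, v) for v in VARIANTS]
            results = dict(f.result() for f in as_completed(futures))
        failed = [v for v, ok in results.items() if not ok]
        if failed:
            print(f"Check-in rejected; regressions in variants: {failed}")
            return False
        print("All variants green; check-in accepted.")
        return True

    if __name__ == "__main__":
        gate_checkin()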

Parallel regression testing for automotive embedded software offers massive opportunities, in my opinion. But what do you think? What other applications for this technology come to mind?

Victor Reyes is currently a technical marketing manager in the System-Level Solutions group at Synopsys. His responsibilities are in the area of virtual prototyping technology and tools, with a special focus on automotive. He received his MSc and PhD in electronics and telecommunication from the University of Las Palmas, Spain, in 2002 and 2008, respectively. Before joining Synopsys, he held positions at CoWare, NXP Semiconductors, and Philips Research.

Endnotes:
[1] “Model-based Fault Injection for Large-Scale Failure Effect Analysis with 600-Node Cloud Computers”, Y. Nakata, Y. Ito, Y. Takeuchi, Y. Sugure, S. Oho, H. Kawaguchi and M. Yoshimoto.
[2] www.hitachi.co.jp/New/cnews/month/2011/12/1202.html