ESC 2008 Preview: Is it time to use some “tough love” on your embedded code?



If T. Adrian Hill is right, the best way to find as many bugs as possible in your embedded system code may be to use what psychologists and therapists call “tough love.”

Rather than probe and question the patient and gently dig out the underlying problems, tough love involves creating a situation in which the subject is pushed nearly to the breaking point, where the underlying problems can be quickly observed and resolved.

At the upcoming Embedded Systems Conference, Silicon Valley, Hill will be reprising his popular class, “Stress Testing Embedded Software Applications” (ESC-420), where he will discuss various tough-love techniques for finding what is wrong with software by stressing it to the breaking point.

A member of the Principal Professional Staff at the Johns Hopkins Applied Physics Laboratory, he has led the embedded software development on numerous NASA missions, including the Hubble Space Telescope.

Just as “tough love” is considered beyond the accepted norms of psychological counseling and treatment, the software stress-testing techniques developed at the Applied Physics Lab by Hill and his coworkers are in many ways the antithesis of traditional software acceptance testing and debugging.

In his class he will go into detail on the methodologies developed at the Lab for stress testing the software flown on three recently launched NASA missions: the MESSENGER mission to Mercury, launched in August 2004; the New Horizons mission to the Pluto-Kuiper Belt, launched in January 2006; and the STEREO mission to study the Sun close up, launched in October 2006.

“Testing is a standard phase in nearly every software development methodology,” said Hill. “Test engineers develop and execute tests that are defined to validate software requirements. The tests tend to be rigid, with specific initial conditions and well-defined expected results.

“The tests typically execute software within the limits prescribed by the software design. While these tests are often complemented with System Tests or Use Case Tests, these higher-level scenarios still conform to the design bounds of the software.”
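As a concrete illustration of that traditional style, here is a minimal sketch in C. The fixed-size message queue (`queue_t` and its functions) is invented for illustration, not taken from any mission code: the test sets a specific initial condition, stays inside the queue's design limits, and checks one well-defined expected result.

```c
/* Sketch of a traditional requirements-based test: fixed initial
 * condition, in-bounds load, exact expected result. The queue API
 * here is hypothetical and exists only for this example. */
#include <assert.h>
#include <stddef.h>

#define QUEUE_CAPACITY 8

typedef struct {
    int items[QUEUE_CAPACITY];
    size_t head, tail, count;
} queue_t;

static void queue_init(queue_t *q) { q->head = q->tail = q->count = 0; }

/* Returns 0 on success, -1 if the queue is full. */
static int queue_put(queue_t *q, int value) {
    if (q->count == QUEUE_CAPACITY) return -1;
    q->items[q->tail] = value;
    q->tail = (q->tail + 1) % QUEUE_CAPACITY;
    q->count++;
    return 0;
}

/* Returns 0 on success, -1 if the queue is empty. */
static int queue_get(queue_t *q, int *out) {
    if (q->count == 0) return -1;
    *out = q->items[q->head];
    q->head = (q->head + 1) % QUEUE_CAPACITY;
    q->count--;
    return 0;
}

/* Traditional test: well inside design limits, one expected answer. */
static int test_queue_fifo_order(void) {
    queue_t q;
    int out = 0;
    queue_init(&q);
    queue_put(&q, 1);
    queue_put(&q, 2);
    queue_get(&q, &out);
    return out == 1;   /* requirement: FIFO ordering */
}
```

Note how the test never approaches `QUEUE_CAPACITY`; it validates the requirement but says nothing about behavior at or beyond the design limits.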

Software stress testing, however, runs counter to these traditional approaches, said Hill. “Stress testing involves intentionally subjecting software to unrealistic loads while denying it critical system resources,” he said. “The software is intentionally exercised ‘outside the box,’ and known weaknesses and vulnerabilities in the software design may be specifically exploited.”
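A hedged sketch of the resource-denial side of this idea, in C: the allocator shim (`shim_malloc` and the `g_deny_alloc` switch) is invented for illustration. The harness forces every allocation to fail and then verifies that the code under test degrades gracefully instead of crashing.

```c
/* Sketch of resource-denial stress testing via fault injection.
 * The allocator shim is hypothetical, for illustration only. */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

static int g_deny_alloc = 0;   /* fault-injection switch */

static void *shim_malloc(size_t n) {
    if (g_deny_alloc) return NULL;   /* simulate memory exhaustion */
    return malloc(n);
}

/* Code under test: duplicate a string, or return NULL on failure. */
static char *dup_message(const char *msg) {
    char *copy = shim_malloc(strlen(msg) + 1);
    if (copy == NULL) return NULL;   /* graceful-degradation path */
    strcpy(copy, msg);
    return copy;
}

/* Stress test: deny the allocator and confirm a clean failure,
 * not a crash or a corrupted result. */
static int test_dup_message_under_memory_pressure(void) {
    char *copy;
    g_deny_alloc = 1;
    copy = dup_message("telemetry packet");
    g_deny_alloc = 0;
    return copy == NULL;
}
```

The same shim pattern extends to other critical resources — file handles, queue slots, timer ticks — wherever the design assumes the resource is always available.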

However, there are some caveats to keep in mind when conducting such tests, according to Hill. “Degraded performance of a system under stress may be deemed perfectly acceptable; thus, the interpretation of test results and the definition of pass/fail criteria are more subjective,” he said.

“Furthermore, a test that stresses one aspect of the software may lead to undesirable side effects in another area of the software, so the behavior of the entire system as a whole must be evaluated to properly analyze the results.”

Even so, the results of such testing on the NASA missions, he said, prove the usefulness of this approach. There, the stress testing uncovered a total of 32 problems across the three software efforts, which Hill details and analyzes in his class.

Based on his experience, Hill believes software stress testing should be an essential component of any critical embedded software development program.

“While such testing is typically not written as a formal requirement, users have an expectation that the software demonstrates the characteristics of robustness and elasticity in response to any user actions,” he said. “Furthermore, stress testing can expose design flaws and software bugs that are not easily uncovered using traditional testing methods.

“The problems uncovered in stress testing often involve the complex interactions between tasks, such as missed real-time deadlines, deadlocks, race conditions, and reentrancy issues. A formal and rigorous approach to Software Stress Testing can uncover serious problems before the software is released into the user community.”
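The reentrancy issues Hill mentions can be illustrated with a small, hypothetical C example (the `format_id_*` helpers are invented for this sketch): a function that hands out a pointer to a single static buffer silently corrupts an earlier caller's result when it is called again — exactly the kind of interleaving, from an interrupt or a higher-priority task, that stress testing tends to provoke.

```c
/* Sketch of a reentrancy bug of the kind stress testing exposes.
 * Both helpers are hypothetical, for illustration only. */
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Non-reentrant: every caller shares one static buffer. */
static const char *format_id_bad(int id) {
    static char buf[32];
    snprintf(buf, sizeof buf, "ID-%d", id);
    return buf;
}

/* Reentrant fix: the caller supplies the storage. */
static void format_id_good(int id, char *out, size_t out_len) {
    snprintf(out, out_len, "ID-%d", id);
}

static int demo(void) {
    char abuf[32], bbuf[32];
    int corrupted, ok;

    const char *a = format_id_bad(1);
    const char *b = format_id_bad(2);   /* overwrites the buffer behind 'a' */
    (void)b;
    corrupted = (strcmp(a, "ID-1") != 0);   /* 'a' now reads "ID-2" */

    format_id_good(1, abuf, sizeof abuf);
    format_id_good(2, bbuf, sizeof bbuf);   /* 'abuf' is untouched */
    ok = (strcmp(abuf, "ID-1") == 0) && (strcmp(bbuf, "ID-2") == 0);

    return corrupted && ok;
}
```

In sequential unit tests the bad version often passes, because each result is consumed before the next call; only an overlapped, stress-style calling pattern makes the shared buffer visible as a defect.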

Other software development, debug, and testing classes worth checking out at the conference include:

1) “Peer Code Review doesn't have to suck” (ESC-300), presented by Jason Cohen.

2) “System visibility via call graphs” (ESC-340), a class taught by George Mock.

3) “Fault detection in Embedded Systems” (ESC-360), presented by Lorenzo Lupini and Massimo Qauagliani.

4) “Static code analysis for embedded software” (ESC-430), taught by David Kalinsky.

5) “Guide to adopting static source analysis” (ESC-528), presented by Nikola Velerjev.

6) “Debugging the toughest software bugs using a static analyzer” (ESC-550), a class taught by David Stewart.

Click here to register for the Embedded Systems Conference, Silicon Valley.
