
Layering it on–a new approach to automating system tests

When building software, you always need to know your software's quality. Testing is the obvious vital task in assessing the quality of a product, but it's also a major contributor to product development time. Automating tests can shorten the time you spend on executing test sessions, while improving resource use and extending the number of functions you can validate.

This article presents the pitfalls and challenges posed by creating an automated test strategy for an embedded system, namely a Voice over IP (VoIP) media gateway. The layered approach we devised doesn't necessarily reduce testing effort but instead converts tasks, such as execution, validation, monitoring, and reporting, into software routines.

A layered approach with pluggable components assures scalability and portability, enabling you to test a class of products individualized by their capabilities and the hardware platforms they run on. The diversity of embedded systems is so great that no single generalized testing method applies across all engineering areas, or even to all products within a specific area; still, we believe our basic concept can be successfully adapted to other classes of embedded systems.

Our system under test (SUT)–the VoIP media gateway–consists of a complex integration of software modules and control running on a specialized processor optimized for both signal and packet processing. A common requirement of embedded systems is the ability to process data in real time. A VoIP media gateway processes voice signals within a specified time period, which must be as short as possible to reduce the overall communication delay introduced by packet networks and processing nodes. The real-time environment makes the SUT's behavior more difficult to predict and analyze. Defect occurrence in real-time systems is sometimes nondeterministic, as it becomes visible only when special conditions are met.

We validated the media gateway from a system perspective–we considered all the components to have been unit-tested prior to integration. The output of testing this SUT consisted of media and control packets, digital signals, and debugging/logging information. We based our test approach on standards, reference or relative data, and interoperability with existing equipment.

Why automate testing?
The numerous stories of failed attempts at test automation all begin with an ill-conceived objective for the project. While figuring out how much time and money you will save, note that for the most part, test automation doesn't leave testers with more free time on their hands. On the contrary, automating a testing project is a full-time effort, not a sideline task.

The central question is whether your objective is to save testers' time and your company's money during testing phases or rather to end up with a higher quality product that has fewer chances to fail during its service life.

Starting an automated testing project is actually a lot more work than a purely manual effort. Test automation doesn't replace testers; rather, it changes their focus. People are needed to program the scripts, set up the test runs, interpret the results, discuss fixes, and so forth. All of this makes test execution a small part of the whole testing effort. Testers typically don't end up with less work to do.

So, why then would you want to automate your testing? See the sidebar below for the advantages and disadvantages of test automation.

Advantages and disadvantages of test automation
Expectations are always high when the opportunity for test automation comes into view. Most stakeholders believe automation is a panacea for test-related issues. Experience, however, shows otherwise. Plenty of literature is available about lessons learned from test-automation projects. Strong reasons exist for engaging in such a project, but they must be weighed against the serious problems that can arise.

Execution speed is advertised as a major advantage of test automation. Although your tests will probably execute faster, the execution phase is just one piece of the whole story. First you have to develop a testing framework. Once the framework is in place (there should be no need for further work on it other than the occasional enhancement or bug fix), you have to shift to developing test cases and scripts. Test-script development is a continuous task, guaranteed to last for as long as the software under test has yet to enter service, and sometimes even after that. Finally, when planning an automation project, don't forget tasks such as interpreting test results and investigating bugs. The test cases might be automated, but the final interpretation and verdict are purely subjective.

Functional coverage is another decisive factor in an automation approach. The problem is, the more functionality covered with automation, the more complicated the test programming will be. It eventually boils down to a compromise between functional coverage and test case depth. Using automation, tests can become more thorough by implementing all the actions that a tester might find boring or impossible to perform, for example simulating hundreds of users.

Regression testing theoretically becomes a trivial task when an automation framework is in place. Testing sessions can be run at the “push of a button,” and existing test scripts instantly become regression tests for future software builds. If this sounds too good to be true, it is because an inherent problem with test automation has to be solved for this scenario to work: the SUT's interface must be isolated from the test scripts.

Pesticide paradox is an issue anyone considering test automation should be aware of. This concept is captured in the following quote from James Bach: “Highly repeatable testing can actually minimize the chance of discovering all the important problems, for the same reason that stepping in someone else's footprints minimizes the chance of being blown up by a land mine.”4 Simply put, any non-evolving battery of tests will find fewer and fewer bugs over time, reaching a point where it can't find bugs anymore. At that point, the tests become pure regression tests, serving only to assure that further modifications to the SUT don't introduce new bugs in previously cleaned areas.

Focus on developing new test cases is another advantage of test automation. Eliminating the need to execute manual tests leaves more time for designing and implementing new ones. Still, when outlining the automation project, one must keep in mind that existing test scripts will almost certainly need maintenance, including their design as well as their actual code. Existing test cases that run without finding any bugs can instill a false sense of security, even though bugs may lurk in the test designs or test scripts themselves.

Reusability is an advantage of an automated test framework only if it was included in the design requirements from the start. Spending time and effort developing a framework that only tests one product is grossly inefficient, especially when there are other projects that are being developed or will be developed in the same organization. Finally, increased reuse of test cases leads to better test design and better documentation.

An automated framework
Designing a test-automation strategy for a project depends on many factors. However, some basic requirements are universally applicable to any well-designed automated framework:

Test design and automated test framework independence –The test design should follow a previously established naming convention and should not in any way depend on the test framework. The design simply details how a particular feature is to be tested, the steps to be taken, the input data used, and the expected output. These details all depend on the system being tested and assume no knowledge of the automated framework. Furthermore, designing test cases and developing an automated framework require totally different skills on the part of the tester.

Framework portability –A key advantage for an automated test framework is reusability. A framework should be instantly ready to test different versions of the same product, individualized by the contained features. Further still, a framework should be able to test a different product of the same class with only minor modifications. A related requirement is the separation layer between the framework and the SUT; this separation layer can be modified or replaced if the framework is used to test another product.

Maintainability and scalability –This is by no means a requirement specific to test frameworks. Any other software project will probably have such a goal.

The middle abstraction layer between the framework and the SUT is again responsible for ensuring that changes in the SUT interface don't affect all the test cases developed to that point. This happens more often than anyone would like, and the middle layer saves countless hours of adapting the test scripts to the new interface.

Parallel/distributed testing –The automated test framework should be able to perform parallel or distributed testing, if this need arises. The two types of testing differ substantially, which will be detailed later on. For now, remember that parallel testing means running independent test scripts on multiple instances of the SUT and gathering the results. This greatly reduces test session times but requires a whole different level of resource management from the automated framework. Hardware setups need to be managed, and the framework has to be able to instantiate the SUT on these setups. Test cases need to be deployed concurrently and results will have to be gathered at the end of the test run.

Results reporting/data storage –An automated test framework should be able to uniformly read and store test results, provided the test cases can generate them in a known format. Test-session results are an important part of a product's development course and may need to be stored for a long time after the software release. These results are best kept in a structured database, which can also provide remote accessibility. A great deal of collateral data is also generated by the test cases and may need to be stored.

Choosing automation
The first step in choosing an automated test strategy is evaluating the current test case set. Are all the test cases necessary? Do they give adequate payback for the time needed to automate and run them? Are there any test cases that don't lend themselves to automation, either because they're too simple, too particular, or can't be automated in the first place? Which tests require very little human interaction as part of the actual run?

A lingering prejudice states that it's cheaper to develop the software inhouse than to buy it off the market. This is not always the case, since the development cost of the commercial tool is spread over a large number of users, whereas the cost of inhouse development sits solely on your shoulders. That being said, other factors can help you decide which approach is best for your organization.

A commercial tool has the advantage of being available right away, possibly circumventing some time constraints. However, you probably won't escape tailoring the product for your own needs. You'll still have to face such tasks as integrating the tool with your SUT and developing other support modules around them.
An inhouse tool takes some time to develop and could turn out to be more costly. On the other hand, it will exactly fit your needs and will be easy to adapt, enhance, and evolve. Support for it is right in your backyard, as long as it adheres to the maintainability and scalability requirements.

In addition, you might not find any software tool that matches your criteria or your technical field. Our embedded software system (the VoIP media gateway) had specific hardware needs that we tested using other third-party hardware tools.

Architecture example
Our automated testing strategy is logically split into three levels, as seen in Figure 1.


Figure 1

The outer level is an automated testing framework, dealing only with resource management, logging, test execution, and reporting. This layer is completely independent of the tested product. To fulfill its purpose, we chose an open-source project and tailored it to fit our needs.

Automated framework components (a minimal sketch of how these components cooperate follows the list):
Test reporting services –used to gather and store test-session results and data.
Test execution manager –runs the actual test-case scripts.
Test resource manager –organizes the setups used to run the tests and distributes the test cases among them.
Monitoring –manages the test-session run.
Logging –generates high-level information about the current test-session run.
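
To make the division of labor concrete, here is a minimal sketch, in Python, of how such components might cooperate in a session loop. All class and method names below (TestSession, acquire, execute, store) are our own illustration, not the API of the open-source project we used.

```python
# Minimal sketch of a framework-level session loop (all names hypothetical).
import logging

class TestSession:
    def __init__(self, resource_mgr, exec_mgr, reporter):
        self.resource_mgr = resource_mgr   # organizes and hands out test setups
        self.exec_mgr = exec_mgr           # runs the actual test-case scripts
        self.reporter = reporter           # gathers and stores results
        self.log = logging.getLogger("session")  # high-level session logging

    def run(self, test_cases):
        for case in test_cases:
            setup = self.resource_mgr.acquire(case.required_platform)
            try:
                self.log.info("running %s on %s", case.name, setup.name)
                result = self.exec_mgr.execute(case, setup)
                self.reporter.store(case.name, result)
            finally:
                self.resource_mgr.release(setup)  # always free the setup
```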

The middle layer is the testing harness and is dependent on the type of software product tested. We developed this layer inhouse to match our needs. We also had to automate third-party tools used for embedded systems testing by developing wrapper modules used in our automation scripts.

Test harness components:
Test application development kit –a proxy layer between test case and SUT that includes tools and application programming interfaces (APIs) used to control the SUT.
Test instruments –includes software and hardware tools and testing equipment. It may also include special layers developed to provide a general interface for all tools or to control the hardware systems and testing equipment.

The third layer contains the actual test-case scripts and the software libraries they use.

Test-case components:
Test libraries –includes libraries specific to the test cases. It may include configuration file parser, formatters, and converters, among others.
Test cases –the actual test scripts.

The end result is that our automated strategy can test multiple versions of the VoIP media gateway (containing different combinations of features) using the same designs and scripts. Given the modular approach adopted in its development, our test strategy can also serve other projects that resemble an embedded VoIP media gateway system, requiring only some tailoring for the particular project. Evidently, any module dependent on the SUT has to be rewritten; an example is the middle API layer used for controlling the SUT.

Making it portable
To fulfill the goal of testing a class of products using the same automated test strategy, we designed several abstraction layers (see Figure 2 ) for ensuring portability and scalability.


Figure 2

SUT independence: A generic interface between the SUT and the test case increases the test case's reusability. One of the purposes of this interface is to hide the actual particularities of the SUT as much as possible, thus offering a higher level of generalization.

As an example, consider the following situation: Product A and Product B are both controlled by commands in the form of User Datagram Protocol (UDP) packets with a special format. Product B uses a slightly different packet format, but otherwise the content of the packets is the same. The abstraction layer performs the translation between a friendlier format and the binary representation of the commands; therefore, the same test case may be used to test both products.
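
A minimal sketch of such a translation layer in Python follows; the command names, header bytes, and field layouts are invented purely for illustration (the article doesn't disclose the real packet formats). The test case calls send_command() with a symbolic name and never sees the product-specific framing.

```python
# Hypothetical command-translation layer: same test-case call,
# different on-the-wire packet format per product.
import socket
import struct

COMMAND_IDS = {"START_CALL": 0x01, "STOP_CALL": 0x02}  # illustrative only

class CommandChannel:
    def __init__(self, product, host, port):
        self.product = product          # "A" or "B"
        self.addr = (host, port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def _encode(self, command, payload=b""):
        cmd_id = COMMAND_IDS[command]
        if self.product == "A":
            # Product A: 1-byte command id, 2-byte big-endian length, payload
            return struct.pack("!BH", cmd_id, len(payload)) + payload
        # Product B: same content, different framing
        # (2-byte magic word, 1-byte command id, 2-byte length)
        return struct.pack("!HBH", 0xCAFE, cmd_id, len(payload)) + payload

    def send_command(self, command, payload=b""):
        self.sock.sendto(self._encode(command, payload), self.addr)

# The same test script can drive either product:
# chan = CommandChannel("A", "192.168.0.10", 5000)
# chan.send_command("START_CALL")
```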

Furthermore, an abstraction layer may be used to run the SUT remotely, making it possible, for example, to run the test cases at one site and the SUT at another (for instance, the product may be developed on a simulator/FPGA and run at a different site on the actual hardware).

All in all, this abstraction layer may fulfill several goals: hiding or generalizing SUT particularities, offering a friendlier interface to the test case, running the SUT remotely, and isolating the SUT from the hardware.

The solution we devised runs the SUT remotely by using a remote procedure call (RPC) layer, the RPC client being situated on the automated test solution side, as shown in Figure 3 . This way, if the interface on the SUT changes, the only modifications that need to be done are in the middle API layer (the client interface), while the test scripts remain the same.
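
The sketch below illustrates the idea of such a client-side middle API layer. Python's standard xmlrpc module and the method names (load_build, start_channel, get_statistics) are assumptions made for the example, not the actual RPC mechanism or control API of our gateway.

```python
# Sketch of a middle API layer that hides the remote SUT behind an RPC client.
import xmlrpc.client

class GatewayControl:
    """Client-side API used by test scripts; only this class changes
    if the SUT's control interface changes."""

    def __init__(self, url="http://sut-host:8000/"):       # hypothetical endpoint
        self.proxy = xmlrpc.client.ServerProxy(url)

    def load_build(self, image_path):
        return self.proxy.load_build(image_path)            # assumed server method

    def start_channel(self, channel_id, codec):
        return self.proxy.start_channel(channel_id, codec)  # assumed server method

    def get_statistics(self, channel_id):
        return self.proxy.get_statistics(channel_id)         # assumed server method
```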


Figure 3

Test independence: To have a strong decoupling between the test cases and the automated test framework, the interface between them should be simple and implementation independent. Such an interface may contain items like startup (for example, powering up equipment and deploying the SUT), run, cleanup (for example, deleting temporary files and resetting equipment), and results gathering.

This interface may be implemented by fully automated test cases or by semi-automated test cases. The semi-automated test cases may use the automated test strategy only for execution or just for recording the results in the database.
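
As an illustration, such a framework-facing interface could be expressed as an abstract base class like the one below (a sketch with names of our own choosing); a fully automated test case implements all four methods, while a semi-automated one might implement only run() or only get_results().

```python
# Hypothetical framework-facing test-case interface.
from abc import ABC, abstractmethod

class TestCase(ABC):
    @abstractmethod
    def startup(self):
        """Power up equipment, deploy the SUT, prepare input data."""

    @abstractmethod
    def run(self):
        """Execute the test steps against the SUT."""

    @abstractmethod
    def cleanup(self):
        """Delete temporary files, reset equipment, release resources."""

    @abstractmethod
    def get_results(self):
        """Return a verdict and collateral data in the agreed format."""
```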

Another important factor is test-case extensibility. For this purpose, some particulars are exported to configuration files (such as .ini files and Microsoft Excel files). With this approach, the features to be tested can be selected simply by modifying the configuration files, without any changes to the code.
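
A minimal sketch of configuration-driven selection follows, assuming a hypothetical .ini file named gateway_test.ini with a [features] section; changing the file changes what the test exercises without touching the script.

```python
# Sketch of feature selection driven by an .ini file (section and key
# names are invented for illustration).
import configparser

config = configparser.ConfigParser()
config.read("gateway_test.ini")          # hypothetical configuration file

codecs = config.get("features", "codecs", fallback="g711").split(",")
channels = config.getint("features", "channels", fallback=1)

for codec in codecs:
    for ch in range(channels):
        # Placeholder for the actual test step against the SUT.
        print(f"would test channel {ch} with codec {codec.strip()}")
```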

Hardware independence: Because the SUT can run on different hardware platforms, in the context of an automated testing strategy the hardware must also be handled in an automated way. Controlling the hardware automatically or remotely (or both) may include powering up, resetting, and shutting down the hardware, deploying the SUT onto the hardware, and so forth.

The platform on which a given test case runs must be selected at the highest level (preferably from the automated test strategy's user interface). Because a test case's implementation must not be tied to specific hardware, we designed an additional layer for abstracting the hardware. This layer exports a general interface to be used by the test cases and hides the particularities of the hardware.

As presented in Figure 4 , the test cases communicate with the hardware abstraction layer indicating the platform on which the test case will run. The hardware abstraction layer selects the appropriate hardware access layer that is used to communicate with the target platform.
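
A minimal sketch of this selection mechanism is shown below; the platform names and access-layer classes are hypothetical stand-ins for whatever targets a real project supports.

```python
# Sketch of a hardware abstraction layer: the test case only names the
# target platform, and the layer picks the matching access class.
class SimulatorAccess:
    def power_up(self): print("starting simulator instance")
    def deploy(self, image): print(f"loading {image} into simulator")

class EvalBoardAccess:
    def power_up(self): print("toggling remote power switch")
    def deploy(self, image): print(f"flashing {image} over the debug link")

_ACCESS_LAYERS = {"simulator": SimulatorAccess, "eval_board": EvalBoardAccess}

class HardwareAbstraction:
    def __init__(self, platform):
        self.target = _ACCESS_LAYERS[platform]()   # select the access layer

    def prepare(self, image):
        self.target.power_up()
        self.target.deploy(image)

# Example: HardwareAbstraction("eval_board").prepare("gateway.bin")
```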


Figure 4

Restrictions and limitations: Unfortunately, a general solution has its own drawbacks, and this approach can't be used in every situation:

• The testing equipment doesn't export an automation interface or the existing support is insufficient.
• The low-level control of the SUT may be limited by the abstraction layers and performing certain tests such as negative testing may not be very easy. Because these layers may perform their own validations of the input parameters or because they may expose a more general interface, it may be impossible for the higher levels to generate certain situations. For such situations, bypassing the abstraction layer is a better choice.
• Some abstraction layers may introduce additional overhead and influence a test case's results (such as test cases that measure the product's response time to various stimuli).
• The hardware used for running the SUT can't be controlled (or easily controlled) in an automated way.

Creating distributed and parallel tests
In distributed testing, a test case has components running on different entities (such as machines, hardware systems, and testing equipment); in parallel testing, independent tests run on different setups simultaneously.

The SUT's complexity may impose having distributed test cases with components running on different computers, hardware platforms, and testing equipment. In this situation, the test case's main script has to communicate with remote computers/equipment, which requires developing drivers, communication servers, or other hooks (not all equipment offers a software interface for resetting; a possible solution is to cut the power supply in an automated way). Another aspect of heterogeneous setups is failure recovery, which can be more difficult: resetting remote machines, retrieving results from nonresponding machines/equipment, and stopping/killing hung scripts on remote machines.

The support for parallel testing is offered by the automated test framework. A setup may contain several computers, hardware platforms the SUT runs on, testing equipment, and network equipment. However, the automated test framework must not be tied to a specific setup; hence it identifies the entire setup as a single entity and leaves the responsibility of handling the components within a setup to the test case.

Although parallel testing significantly reduces the execution time for a test session, reliable cleanup and failure recovery are still necessary; otherwise, a test setup may become unusable. Therefore, the testing environment has to be brought back to its initial state when a test case finishes its execution. Also, a mechanism for detecting blocked test cases has to be in place to forcefully stop the test case and perform the cleanup.
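
One way to sketch blocked-test detection and cleanup is shown below, using a worker process with a time budget. The multiprocessing module is our assumption for obtaining a forcible stop; the article doesn't prescribe a particular mechanism.

```python
# Sketch: run the test case in a worker process, kill it if it exceeds its
# time budget, and always run cleanup so the setup stays usable.
import multiprocessing

def run_with_timeout(test_case, timeout_s):
    worker = multiprocessing.Process(target=test_case.run)
    worker.start()
    worker.join(timeout_s)
    try:
        if worker.is_alive():           # the test case is blocked
            worker.terminate()          # forcefully stop it
            worker.join()
            return "BLOCKED"
        return "FINISHED"
    finally:
        test_case.cleanup()             # restore the setup's initial state
```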

For debugging purposes, a mechanism for preserving the state of the system at the moment of failure might be necessary. In this situation, no cleanup actions are performed and no other test cases are run on that setup.

Unifying the testing tools
Usually, the test system is more complex than the SUT itself. This is true even if we consider only the test execution utilities (such as generators and analyzers) and exclude planning and control tools (such as test planners and bug trackers).

An automated test system needs to be able to incorporate all kinds of test instruments: brand-name and inhouse-built ones. Off-the-shelf tools offer the advantage of licensed and proprietary algorithms, together with special hardware that can support heavy testing loads. They also provide an industry qualification for the SUT. On the other hand, custom-built tools may offer better detail when analyzing test data and, sometimes, easier integration and portability.

When it comes to including any tool in an automated test system, one of the first problems encountered is the variety of interfaces these tools may have. This is why a "standardized" interface should be added on top of them. This layer offers several benefits:

• The user is able to apply the same set of functions to any test tool.
• A tool can be rapidly swapped for a new one.

Here are examples of the main types of functions needed to control tools:

• Install
• Run (with mostly generic parameters)
• Wait (in the form of a callback)
• GetData
• StoreData
• End (forced or not)
• Uninstall

Note that the existence of these setup commands (Install, Uninstall) is very important in order to leave a clean test bench after any successful or unsuccessful test.
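
As a sketch, the standardized interface could be captured in an abstract base class whose methods mirror the list above; the Python names below are ours, and each concrete tool wrapper would fill them in for its particular instrument.

```python
# Hypothetical uniform tool-control interface matching the function list above.
from abc import ABC, abstractmethod

class TestTool(ABC):
    @abstractmethod
    def install(self): ...

    @abstractmethod
    def run(self, **params): ...          # mostly generic parameters

    @abstractmethod
    def wait(self, callback): ...         # callback invoked when the tool finishes

    @abstractmethod
    def get_data(self): ...

    @abstractmethod
    def store_data(self, destination): ...

    @abstractmethod
    def end(self, forced=False): ...      # graceful or forced stop

    @abstractmethod
    def uninstall(self): ...
```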

Automated reporting
As stated before, interpreting test results and investigating bugs form a major part of a tester's activities in the context of automated testing. It's therefore important that the automated test strategy collect and manage all the information generated during a testing session, as it might be relevant to the activities in question.

A well-designed automated test strategy can provide a centralized and uniform method of reporting test-case results. This means that the test script can select key points in its execution where such results may be reported by calling a dedicated function. We implemented three types of report entries in our strategy (a minimal sketch of the reporting call follows the list):

• INFO–logs the steps in test cases and may contain detailed information about the current context.
• ERROR–logs the points where the test case has detected a nonconformance to the test specification.
• WARNING–indicates the points where the results are inconclusive and need special attention on the part of the test investigator.
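
Below is a minimal sketch of how a test script might emit these three levels and derive a verdict; the Reporter class and its API are hypothetical and stand in for our strategy's dedicated reporting function.

```python
# Sketch of the three reporting levels from a test script's point of view.
from enum import Enum

class Level(Enum):
    INFO = "INFO"
    WARNING = "WARNING"
    ERROR = "ERROR"

class Reporter:
    def __init__(self, test_name):
        self.test_name = test_name
        self.entries = []

    def report(self, level, message):
        self.entries.append((level, message))
        print(f"[{self.test_name}] {level.value}: {message}")

    def verdict(self):
        # Any ERROR fails the test; WARNINGs leave it inconclusive.
        levels = {lvl for lvl, _ in self.entries}
        if Level.ERROR in levels:
            return "FAIL"
        return "INCONCLUSIVE" if Level.WARNING in levels else "PASS"
```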

As a requirement related to automated reporting, storing the test session's results in a database can prove helpful, not only for long-term access to the results but also for the convenience of extracting relational information about test cases and test suites. A well-managed database offers many ways to derive metrics and track the overall progress of software quality.
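
For illustration, the sketch below stores per-test verdicts with sqlite3 from Python's standard library and pulls a simple pass-rate metric back out; the schema and values are invented, and a real deployment would use a remotely accessible database server, as suggested above.

```python
# Sketch of storing session results and deriving a metric from them.
import sqlite3

conn = sqlite3.connect("test_results.db")
conn.execute("""CREATE TABLE IF NOT EXISTS results (
                    session TEXT, test_case TEXT,
                    verdict TEXT, run_time_s REAL)""")
conn.execute("INSERT INTO results VALUES (?, ?, ?, ?)",
             ("session-001", "codec_g711_basic", "PASS", 42.5))
conn.commit()

# Metrics come out of the same store, e.g. pass rate per session:
for row in conn.execute("""SELECT session,
                                  AVG(verdict = 'PASS') AS pass_rate
                           FROM results GROUP BY session"""):
    print(row)
conn.close()
```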

Last but not least, test scripts may generate large amounts of data used to compute the test results. This data is used by test validation scripts to check against test specifications and could include network packet captures, text files, images, binary data structures, and so on. This information provides the entire context for performing results interpretation and bug investigation and should be stored for future reference and investigations.

Satisfaction of a job well done
Depending on the life span of the product development, automation of testing brings more or less value. A product with a long maintenance period needs repeatability of tests with minimal time consumed. A layered approach to automation covers an important requirement–portability of a test system.

Parts of the presented automated testing strategy can be reused generally for any piece of software (reporting, execution control, resource management), while others offer the ability to test another product configuration, version, or hardware platform with little effort (such as translating the API).

For easy expansion, a certain level of uniformity needs to be imposed (through intermediary layers) on the test-case actions (startup, run, report, and cleanup) and also on the many interfaces of the test tools, as these rarely show API resemblance.

This article detailed our recipe for automating tests on embedded systems; we were able to demonstrate its benefits on Freescale's Media Gateway solutions. Our test execution time decreased significantly, allowing overnight regression after each product build and full sessions running in background, with little need for supervision.

The automated test strategy was easily adapted for two variants of our media gateways that have different media interfaces. It brought a change in testers' attitudes, too. Old execution tasks were replaced by new ones: test-code development, smart validation solutions, results investigation, and experience-based manual testing. The test engineer wasn't made obsolete by automated testing but instead gained new capabilities and activities.

Adrian Răileanu is a DSP software engineer in the Packet Telephony Systems and Applications team in Freescale Semiconductor, developing and integrating media processing components. He received an MS degree in electronic and telecom engineering from Politehnica University of Bucharest.

Bogdan Ioniţă is a DSP software engineer in the Packet Telephony Systems and Applications team in Freescale Semiconductor, developing system testing automation. He received a BS degree in electronic and telecom engineering from Politehnica University of Bucharest and an MS degree in computer science from the Bucharest Academy of Economic Studies.

Diana Crăciun is a DSP software engineer in the Packet Telephony Department in Freescale Semiconductor, developing and integrating media processing components. She received an MS degree in computer science from Politehnica University of Bucharest.

References:
1. Nagle, Carl. “Test Automation Frameworks,” SourceForge, http://safsdev.sourceforge.net/FRAMESDataDrivenTestAutomationFrameworks.htm

2. Kandler, Jim. “Automated Testing of Embedded Software–Lessons Learned from a Successful Implementation,” www.stickyminds.com/sitewide.asp?Function=edetail&ObjectType=ART&ObjectId=2049 (This paper was originally presented at a STAR conference.)

3. Mar, Wilson. Automated Testing page on WilsonMar.com, www.wilsonmar.com/1autotst.htm

4. Bach, James. “Test Automation Snake Oil,” 1999, www.satisfice.com/articles/test_automation_snake_oil.pdf
