Model-based testing of a state-machine-based PLC design

The goal of software testing is to detect failures, i.e. differences between the specification and the actual implementation of a module, subsystem, or system under test.

In [1], Utting and Legeard define model-based testing as “the automation of the design of black-box tests” and add that “…in addition white-box coverage metrics can be used to check which parts of the implementation have been tested so far and if more tests are required”.

MBT is a relatively new topic, and different definitions of it exist. In general, though, the main aspect of model-based testing is automating the generation of test cases from explicit behavior models such as state machines. The focus of this article is therefore on model-based testing of state-machine-based software.

For the rest of this article, a PLC-based sump controller is used as an example; it was inspired by the book Real-Time Systems and Programming Languages by Burns and Wellings [2]. The system is shown in Figure 1 below.


Figure 1: Sump controller with a level and methane sensor.

As shown, a controller monitors the methane and water levels in a sump. Whenever the water level is above its limit and the methane level is below the critical level, the pump starts. If the water level is low again or the methane level rises above the critical level, the pump is stopped. A basic state diagram that implements the required behavior is shown in Figure 2. It is a relatively simple machine without hierarchy or other advanced features of UML state machines.



Figure 2: Simplified state diagram of the sump controller.
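
To make the required behavior concrete, the following is a minimal C sketch of the control rule just described. The state and function names are illustrative only and are not taken from the actual diagram:

```c
#include <stdbool.h>

/* Illustrative states; the real diagram in Figure 2 may differ. */
typedef enum { PUMP_OFF, PUMP_ON, METHANE_ALARM } State;

/* One decision step: compute the next state from the current state
   and the current sensor readings. */
static State next_state(State s, bool water_high, bool methane_high)
{
    if (methane_high)
        return METHANE_ALARM;   /* never pump while methane is critical */

    switch (s) {
    case PUMP_OFF:
    case PUMP_ON:
        return water_high ? PUMP_ON : PUMP_OFF;
    case METHANE_ALARM:
        /* methane is back below the limit: resume normal operation */
        return water_high ? PUMP_ON : PUMP_OFF;
    }
    return s; /* defensive default */
}
```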

When testing state-based software it is important to understand how a state machine can fail. In his book [3], Binder lists the following main problems:

1. Missing transition (the machine does not change state in response to a valid event)

2. Incorrect transitions (the machine ends in the wrong state)

3. Hidden transitions not shown in the state machine model. (i.e. the implementation does not reflect the state machine model.)

4. Missing or incorrect events or conditions triggering a transition

5. Missing or incorrect actions in a transition or when entering or leaving a state

6. An extra state or a missing state. (i.e. the implementation does not reflect the state machine model.)

7. Weak implementation of the machine. (E.g. it can't handle illegal events.)

Some of these problems can be avoided by using checklists or by generating the state machine code directly from the model. But manual checks are time-consuming, and their quality depends heavily on the reviewer. In practice a tool is needed to ensure that the checks are really performed.
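To illustrate problem 7, the following C sketch shows a "strong" event dispatcher that explicitly rejects unmodeled events instead of silently ignoring or mishandling them; all state and event names are illustrative:

```c
#include <stdio.h>

/* Illustrative event and state names (not from the original diagram). */
typedef enum { EV_WATER_HIGH, EV_WATER_LOW,
               EV_METHANE_HIGH, EV_METHANE_LOW } Event;
typedef enum { ST_PUMP_OFF, ST_PUMP_ON, ST_ALARM } State;

static State dispatch(State s, Event e)
{
    switch (s) {
    case ST_PUMP_OFF:
        if (e == EV_WATER_HIGH)   return ST_PUMP_ON;
        if (e == EV_METHANE_HIGH) return ST_ALARM;
        break;
    case ST_PUMP_ON:
        if (e == EV_WATER_LOW)    return ST_PUMP_OFF;
        if (e == EV_METHANE_HIGH) return ST_ALARM;
        break;
    case ST_ALARM:
        if (e == EV_METHANE_LOW)  return ST_PUMP_OFF;
        break;
    }
    /* Everything not modeled is rejected explicitly; this makes
       "weak implementation" failures (problem 7) visible in a test. */
    fprintf(stderr, "event %d ignored in state %d\n", (int)e, (int)s);
    return s;
}
```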

The classic testing process
Before looking at how a model-based testing process can look, let's first take a look at the classic testing process. It consists of the following main steps:

Step #1. Develop a state machine model that is precise enough, i.e. one that covers the relevant aspects that should be tested. If the state machine model is also used to generate code, you can assume that it is precise enough; in general, however, you can't. We come back to this topic later on.

Step #2. Design the test cases: define the test input data and expected test results based on the specification and the test objectives. This is usually manual work and can take quite some time even for a mid-sized state machine. A commonly used approach is to go through every state transition with a highlighter in hand, crossing off the arrows on the state transition diagram to indicate which transitions are already covered by a test case.

This approach was described in an earlier article on www.embedded.com [7]. Figure 3 below shows the sump controller's state diagram with a first test route highlighted in yellow.

More test routes must be found until a defined coverage criterion is met (e.g. all transitions and states must be visited once). The results of this task are abstract test cases that cannot be executed yet.



Figure 3: The state diagram of the sump controller with a test route marked in yellow.
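
Such abstract test cases are essentially structured data. The following is a minimal sketch in C, assuming simple step records; the state, trigger, and output names are made up for illustration:

```c
/* One step of an abstract test case: expected source state, trigger to
   apply, expected target state, and expected output. Names made up. */
typedef struct {
    const char *source_state;
    const char *trigger;
    const char *target_state;
    const char *expected_output;
} TestStep;

/* A route like the one highlighted in Figure 3 could then be written as: */
static const TestStep route1[] = {
    { "PumpOff", "water_high",   "PumpOn",  "pump running"  },
    { "PumpOn",  "methane_high", "Alarm",   "pump stopped"  },
    { "Alarm",   "methane_low",  "PumpOff", "alarm cleared" },
};
```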

Step #3. Implement the tests: the abstract test cases defined in step two must be transformed into executable test cases. This step depends a lot on the concrete system under test. Before the implementation can start, some decisions must be made, e.g. how the machine should be stimulated, how reactions can be observed, and how the generated output can be traced.

Step #4. Execute the tests and compare actual with expected outputs in a test harness. The tester has to run the test cases step by step and write down the test results. These results are the basis for deciding whether to terminate testing and release the software, to revise the model (i.e. fix bugs), or to generate more tests.

Steps 3 and/or 4 might be automated to make testing faster and to reduce the effort needed for regression testing. For that purpose a test execution environment and a test adapter are needed (i.e. it is no longer necessary to watch whether the pump is running and to make a tick on the test list).
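As a rough illustration, the following C sketch shows what such an automated test driver could look like. The sut_send_event() and sut_current_state() hooks are hypothetical placeholders for a concrete test adapter:

```c
#include <stdio.h>
#include <string.h>

/* Abstract test step, reduced here to trigger and expected state. */
typedef struct {
    const char *trigger;
    const char *expected_state;
} TestStep;

/* Hypothetical hooks into the system under test; a real test adapter
   would stimulate the controller and observe it behind these calls. */
extern void        sut_send_event(const char *trigger);
extern const char *sut_current_state(void);

/* Run one route; report and stop at the first mismatch. */
static int run_route(const TestStep *steps, int n)
{
    for (int i = 0; i < n; i++) {
        sut_send_event(steps[i].trigger);
        if (strcmp(sut_current_state(), steps[i].expected_state) != 0) {
            printf("step %d FAILED: expected state %s, got %s\n",
                   i, steps[i].expected_state, sut_current_state());
            return 1;
        }
    }
    printf("route passed (%d steps)\n", n);
    return 0;
}
```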

The model-based testing process
As already mentioned, there is no single definition of model-based testing. But at a minimum the generation of test cases is automated; the more of the other steps are automated, the better. Figure 4 below shows a possible model-based test process.



Figure 4: Activities of a model based testing process.

At first view the activities do not look very different from those of the manual process just described. The differences lie in three areas: 1) model checks and test case generation; 2) the step from abstract test cases to concrete test cases; and 3) test script execution.

1) Model checks and test case generation. A state machine model is used to generate test cases. Dedicated graph search algorithms are used to find test routes. For each step on a route, the trigger, the expected source and target states, and the expected output are collected.

An algorithm can generate tests that traverse all transitions (100% transition coverage) or all states (100% state coverage), to name two commonly used options. Most state machine models are incomplete because typically only a few of all possible trigger/state pairs are of interest and therefore modeled. For state diagrams developed for code generation this is not a problem.

But for state diagrams modeled for test case generation it is highly recommended to also model implicit state transitions. This makes it possible to test whether the machine reacts correctly to implicitly rejected events.

See Figure 5a below for a state transition diagram with some implicit transitions added compared to Figure 2. The test route algorithm generates 14 test routes from the enhanced machine instead of 7 for the diagram in Figure 2.
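To give an idea of how route generation can work, here is a greedy C sketch that achieves 100% transition coverage on a tiny, made-up transition table. Real generators use more refined graph search algorithms; the sketch also assumes that every transition is reachable from the initial state:

```c
#include <stdbool.h>
#include <stdio.h>

/* Made-up transition table; states are just indices here. */
typedef struct { int from, to; const char *trigger; } Transition;

static const Transition tr[] = {
    { 0, 1, "water_high"   }, { 1, 0, "water_low"   },
    { 1, 2, "methane_high" }, { 2, 0, "methane_low" },
};
enum { N = sizeof tr / sizeof tr[0] };

/* Greedy walk for 100% transition coverage: from the initial state,
   prefer an uncovered outgoing transition, otherwise take any one. */
static void generate_routes(int initial)
{
    bool covered[N] = { false };
    int left = N;
    while (left > 0) {
        int s = initial, depth = 0;
        printf("route: %d", s);
        while (depth++ < 20 && left > 0) {
            int pick = -1;
            for (int i = 0; i < N; i++)
                if (tr[i].from == s && (pick < 0 || !covered[i])) {
                    pick = i;
                    if (!covered[i]) break;
                }
            if (pick < 0) break;            /* dead end: end this route */
            if (!covered[pick]) { covered[pick] = true; left--; }
            printf(" -%s-> %d", tr[pick].trigger, tr[pick].to);
            s = tr[pick].to;
        }
        printf("\n");
    }
}
```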

On the test routes the expected output must be defined per test step. The expected output is the basis for determining success or failure later on. According to [5], “a fundamental assumption of this testing is that there is some mechanism, a test ‘oracle’, that will determine whether or not the results of a test execution are correct – something that defines/identifies the expected outputs.”

Because the real output is probably not explicitly visible in every state, it is difficult to automatically generate the expected output data from a standard UML state diagram.

A simple method to overcome this problem is to annotate the state model with comments specifying the expected output data, e.g. “Lamp is On” or “Lamp is Off”; a more formal notation can also be used for that purpose.
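Such annotations can be turned into a simple lookup table that serves as the test oracle. A minimal sketch, with made-up state names and output strings:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical oracle table derived from the model annotations:
   for every state, the output the tester expects to observe there. */
typedef struct {
    const char *state;
    const char *expected_output;  /* e.g. taken from a model comment */
} Oracle;

static const Oracle oracle[] = {
    { "PumpOff", "Pump is Off, Lamp is Off" },
    { "PumpOn",  "Pump is On, Lamp is On"   },
    { "Alarm",   "Pump is Off, Alarm is On" },
};

static const char *expected_output(const char *state)
{
    for (size_t i = 0; i < sizeof oracle / sizeof oracle[0]; i++)
        if (strcmp(oracle[i].state, state) == 0)
            return oracle[i].expected_output;
    return NULL;  /* state without an annotation: nothing to check */
}
```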

Figure 5a below shows the annotations in yellow, and Figure 5b shows the generated Excel sheet containing the test routes. A route is a list of test steps, each showing the present state, the trigger with its guard condition, and the next state. The last rows show the expected output in the present and the next state.



Figure 5 a. Enhanced state diagram containing also implicit transitions that are not needed for code generation but are important for test-case generation.

An aspect of state-machine-based testing that is often neglected is the need for automatic quality assurance of the state machine model itself, which is the basis for the test case generation.



Figure 5b. Test routes for the sump controller generated with the Sinelabore [6] tool.

For UML state diagrams the OMG has specified a set of well-formedness rules within the UML specification. A model checker should automatically check these rules as well as a number of additional ones. The following overview lists some state-, transition- and choice-related rules that can be checked automatically:

State related rules:
* State names must be unique
* States must be connected by a sequence of transitions outgoing from an initial state (connectivity)
* Normal states should not have only incoming transitions, or no transitions connected at all (isolated states).
* Composite states should have more than one child state. If only one child state is defined the composition is superfluous and just creates unnecessary complexity.
* Initial states must be defined at every level of the state hierarchy and must have exactly one outgoing transition.
* Final states must only have incoming transitions.

Transition related rules:
* A transition must start and end in a state, and a trigger must be present (with some well-defined exceptions)
* Transitions leaving the same state must not have the same trigger if no guard is defined

Choice related rules:
* A choice must have only one incoming transition
* A choice should have at least two outgoing transitions; otherwise it is useless and should be replaced with a normal transition.
* Every outgoing transition from a choice must have a guard defined
* One default transition must be specified for a choice state, i.e. a transition whose guard is defined as 'else' (default from choice).

Some of the rules just described seem trivial but are nevertheless very useful to check. Other rules would be very difficult (if possible at all) to check at source code level but can easily be checked at model level.
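To give an impression of how simple such checks can be at model level, here is a C sketch of two of the rules above, run on a deliberately minimal model representation. The types are illustrative, not taken from any real tool:

```c
#include <stdio.h>
#include <string.h>

/* Minimal illustrative model representation. */
typedef struct { const char *name; } MState;
typedef struct { int from, to; } MTrans;

/* Rule: state names must be unique. Returns the number of violations. */
static int check_unique_names(const MState *st, int n)
{
    int errors = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (strcmp(st[i].name, st[j].name) == 0) {
                printf("duplicate state name: %s\n", st[i].name);
                errors++;
            }
    return errors;
}

/* Simplified isolation rule: every state must appear as the source
   or target of at least one transition. */
static int check_isolated(const MState *st, int n,
                          const MTrans *tr, int m)
{
    int errors = 0;
    for (int i = 0; i < n; i++) {
        int connected = 0;
        for (int j = 0; j < m; j++)
            if (tr[j].from == i || tr[j].to == i) { connected = 1; break; }
        if (!connected) {
            printf("isolated state: %s\n", st[i].name);
            errors++;
        }
    }
    return errors;
}
```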

General rule: a state machine model used for test case generation must pass the model check without errors! This is of course also true for models used for code generation.

2) From abstract test cases to concrete test cases
From the step just described, we know the tests to be performed. The next step is to (automatically) transform these abstract tests into concrete ones that can really be executed, e.g. with the help of a test script. In the embedded domain, software is usually hardware dependent and expects a specific environment (connected sensors, actuators, …) to run.

In the case of the sump controller example, the filling level of the tank, the motor status, the methane level, etc. would be required for a real test. Therefore it is necessary to decide in which environment your tests should run. The two possible extremes are listed below:

* Abstracting from the hardware: This requires breaking the hardware dependencies of the software with the help of test drivers etc. (see the sketch after this list). This approach is discussed in great detail in the book Test-Driven Development for Embedded C [4].

* Running the tests on the real hardware in the real environment: This requires a test script to stimulate the sump controller with real sensor signals, provide real actuator feedback, and capture the real outputs. This is not always possible for cost reasons or because the hardware is not yet available. But it usually gives the best results because the system is tested in the context of its later usage.
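
The following C sketch illustrates the first option: a hypothetical hardware interface for the sump controller through which the control logic accesses sensors and the pump, so that tests can substitute fakes for the real drivers:

```c
#include <stdbool.h>

/* Hypothetical hardware interface: the control logic only talks to
   these function pointers, so tests can plug in fakes instead of the
   real sensor and actuator drivers. */
typedef struct {
    bool (*water_high)(void);
    bool (*methane_high)(void);
    void (*set_pump)(bool on);
} SumpIo;

/* Fakes used by the unit tests; the tests set fake_water and
   fake_methane and inspect fake_pump afterwards. */
static bool fake_water, fake_methane, fake_pump;
static bool fake_water_high(void)   { return fake_water; }
static bool fake_methane_high(void) { return fake_methane; }
static void fake_set_pump(bool on)  { fake_pump = on; }

static const SumpIo test_io = {
    fake_water_high, fake_methane_high, fake_set_pump
};
/* In production the struct would point at the real driver functions. */
```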

In both cases test scripts are required: some form of executable code that executes the test case and collects the output for the test report. In the first case the test scripts might be written in C and executed with the help of a testing framework (e.g. Unity or CppUTest). In the second case the test script might be written as a PLC (programmable logic controller) program that controls the environment of the sump controller.
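For the first case, a test implementing one generated route could look roughly like the following Unity-based sketch. The controller_* functions are an assumed API of the system under test, not the actual interface:

```c
#include <stdbool.h>
#include "unity.h"

/* Assumed API of the controller under test (illustrative names). */
extern void controller_init(void);
extern void controller_on_water_high(void);
extern void controller_on_methane_high(void);
extern bool controller_pump_running(void);

void setUp(void)    { controller_init(); }  /* fresh machine per test */
void tearDown(void) { }

/* One generated route: rising water starts the pump; rising methane
   must stop it again. */
void test_methane_stops_pump(void)
{
    controller_on_water_high();
    TEST_ASSERT_TRUE(controller_pump_running());
    controller_on_methane_high();
    TEST_ASSERT_FALSE(controller_pump_running());
}

int main(void)
{
    UNITY_BEGIN();
    RUN_TEST(test_methane_stops_pump);
    return UNITY_END();
}
```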

From this discussion it becomes clear that it is hard to provide a generic solution for generating concrete test cases from abstract ones. But for large projects, or for teams that work for a long time on the same system (or new generations of it), the effort of developing such a generator can be worthwhile.

Figure 6 below shows how a PLC-based adapter might look for the sump controller. The PLC controls the different sensors and actuators and can also introduce faults (e.g. missing sensor signals). In addition, it has to control a motorized valve to increase the water level in the sump. At the same time the PLC can capture the outputs of the sump controller for later reference and test result evaluation.



Figure 6: PLC (programmable logic controller) acting as test adapter for the system.

3) Test script execution. The outcome of the previous step might often be a manual implementation of the test cases. But automatic execution of test cases usually pays off very soon despite the initial effort.

Automatic execution of tests allows developers to quickly test changes, supports a more incremental development process, and makes it possible to start testing early.

If tests fail because functions are not yet implemented, they can easily be retested as soon as the functions become available. But the biggest benefit of automatic execution is that regression tests become easy.

Conclusion
As you have seen, there are good reasons to use state machine models as the basis for testing work. Having the full test process automated allows running numerous tests in a short time without manual coding work.

But especially the step from abstract test cases to executable test cases is not easy to realize in a generic way. Most probably this step will always need at least some adaptation of a testing toolkit to the concrete system under test.

Model-based testing requires some initial investment in new technologies, tools and processes. But it offers the chance to cut test effort and to improve quality thanks to the systematic test approach.

Peter Mueller has been an embedded systems developer for nearly 15 years, involved in projects in the area of public transportation, instrumentation and process automation. During this time Peter was involved in several initiatives to improve the embedded software quality. The test cases presented in the article were generated with the code generator from www.sinelabore.com. Peter can be reached at pmueller@sinelabore.com.

Other Embedded.com articles by Peter Mueller include:

1. Tracing of the event flow in state-based designs
2. State charts can provide you with software quality assurance
3. Generate efficient state charts in C using UML tools

References
[1] Practical Model-Based Testing, Mark Utting and Bruno Legeard
[2] Real-Time Systems and Programming Languages, Burns and Wellings
[3] Testing Object-Oriented Systems, Robert V. Binder
[4] Test-Driven Development for Embedded C, James W. Grenning
[5] Gold practices on model-based testing (provided by the DoD)
[6] Sinelabore state machine code and test case generator
[7] Hardware in the loop simulation, Martin Gomez
