Transitioning from code-based to model driven software testing: Part 3

A Test case can be executed on the development host or, usually at a later stage, on the target platform. In cases where the Test case fails, users can go into the code and debug it at the code level.

But just as MDD offers the ability to debug the design at the model level, Model Driven Testing offers the ability to debug the Test case at the model level. One way to debug the Test case is to generate an animated sequence diagram and compare it to the Test case, but this can become very difficult.

Because the animated sequence diagram that reflects the actual execution of the Test case may contain hundreds of messages, it is simply unrealistic to search it for the behavioral pattern, that is, the required sequence of events defined in the Test case; finding that sequence among all the messages generated during execution is nearly impossible.

Another way of doing it is to let your Model Driven Testing solution do the comparison in real time, as the Test case executes. Figure 14 below shows a case where the expected frequency that the radio was supposed to be tuned to is not the same as the actual frequency.

In fact, the Model Driven Testing solution can show both the expected and the actual behavior (in this case, the radio should have tuned to 88MHz but instead was tuned to 87.5MHz); the color-coded sequence diagram shows the difference very clearly, making it easy to debug the design and/or the Test case.

Figure 14: Understanding a failing Test case

Test Case Behaviors as Code
As powerful as sequence diagrams are for describing Test scenarios (i.e. behaviors of Test cases), it is also useful to be able to capture a Test scenario simply as code. This code will describe the “pure” behavior of the Test case, and should be linked automatically with the code that reflects the whole Test Architecture.

Let's look at an example. One of the requirements is that as the frequency is moved up, it should wrap around when it gets to the highest frequency in a given waveband. Let's assume this is the short wave band. A natural Test case here is one where the radio is tuned to the lowest frequency of the short wave band, which is 5950KHz, and then pushed up while checking that each up() operation indeed sets the radio to the right frequency; when the frequency reaches the highest one, which is 15600KHz, the next up() operation should set the frequency back to the lowest one, 5950KHz.

Figure 15: Test case behavior captured as code

This type of Test case can easily be expressed in code, as shown in Figure 15 above. As you can see, the code includes an assertion (as part of the Model Driven Testing environment) that can either pass or fail, depending on a check condition.

This mechanism allows the user to define many points where a verdict can be determined. The smaller window in the top right corner shows the execution results from this Test case – all the assertions that are in line 219 passed; these are the assertions that check that the required frequency is indeed the same as the actual one.

The last assertion, in line 225, is the final one: the check that the frequency wraps around, back to 5950KHz, and this one passed too. If one of the assertions fails, it will be reported in this window as well, and it can be used to navigate all the way back to the location in the model where the test assertion has been defined.
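
Since the actual code of Figure 15 is not reproduced here, the following self-contained C++ sketch illustrates the idea. It is only an approximation of what such a Test case body might look like: the Radio stand-in, the 50KHz tuning step, and the TC_ASSERT macro are all assumptions, not the tool's actual generated API.

    #include <cstdio>

    // Minimal stand-in for the radio under test; in a real project this
    // code is generated from the model. Names and the step are assumed.
    class Radio {
        long freq;  // current frequency, in KHz
    public:
        explicit Radio(long f) : freq(f) {}
        void up() { freq = (freq >= 15600) ? 5950 : freq + 50; }  // wrap at band top
        long getFrequency() const { return freq; }
    };

    // Stand-in for the assertion mechanism a Model Driven Testing
    // environment would supply to record pass/fail verdicts.
    #define TC_ASSERT(label, cond) \
        std::printf("%s: %s\n", (cond) ? "PASS" : "FAIL", label)

    int main() {
        const long LOW_SW = 5950, HIGH_SW = 15600, STEP = 50;  // KHz
        Radio radio(LOW_SW);               // tune to the bottom of the short wave band
        for (long f = LOW_SW + STEP; f <= HIGH_SW; f += STEP) {
            radio.up();                    // step up through the band
            TC_ASSERT("frequency follows up()", radio.getFrequency() == f);
        }
        radio.up();                        // one more step past the top...
        TC_ASSERT("wrap around to 5950KHz", radio.getFrequency() == LOW_SW);
        return 0;
    }

As in Figure 15, the loop asserts after every up() call and a final assertion checks the wrap-around, so a failure can be localized to the exact step where it occurred.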

Test Case Behaviors as Flow Charts
The example above was simple enough to be captured as code, but if the logic becomes more complex, it would be more natural to capture the behavior of the Test case as a flow chart and let the MDD environment convert it to code.

Let's look at the following Test case. The Test Objective (a term from the UML Testing Profile) is the requirement shown in Figure 16 below.

Figure 16: Test Objective, linking a test back to a Requirement

The Test case behavior that addresses the test objective in Figure 16 is given in Figure 17 below, described as a Flow Chart. The logic is fairly simple: save to all existing presets and then restore and check.

Figure 17: Top level flow chart describing a Test case

The two bottom action blocks are further decomposed into sub-flowcharts. “Save to all presets” is decomposed as shown on the left part of Figure 18 below, and “restore all presets” is decomposed as shown on the right part of Figure 18. The logic for each of them is simply to go through all the wavebands.

Figure 18: Sub-flowcharts with the logic of storing and restoring all the presets

As can be seen from Figure 18 above, the save and restore for each waveband is further decomposed into a separate sub-flowchart. In Figure 19 below, one can see the logic for saving presets (as defined in the first action in the flowchart in Figure 17). It is a simple iteration through the five presets.

Figure 19: Logic of storing presets described as a flow chart

In Figure 20 below, one can see the logic for restoring the presets. You can also see the assertion that is used to check that the actual preset value is indeed the one that was saved.

Figure 20: Restoring and CHECKING the preset values
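
Expressed as code, the flow chart hierarchy of Figures 17 through 20 boils down to two nested loops: one over the wavebands and one over the five presets. The sketch below is illustrative only; the waveband list and operation names such as setPreset() and getPreset() are assumptions, not the code the tool would actually generate from the flow charts.

    #include <cstdio>
    #include <map>
    #include <utility>

    enum Waveband { FM, AM, SW, NUM_BANDS };  // assumed set of wavebands
    const int NUM_PRESETS = 5;

    class Radio {  // trivial stand-in for the SUT
        std::map<std::pair<int, int>, long> presets;
    public:
        void setPreset(int band, int i, long f) { presets[{band, i}] = f; }
        long getPreset(int band, int i) { return presets[{band, i}]; }
    };

    #define TC_ASSERT(label, cond) \
        std::printf("%s: %s\n", (cond) ? "PASS" : "FAIL", label)

    long testFreq(int band, int i) { return 10000 + band * 1000 + i; }  // arbitrary test values

    int main() {
        Radio radio;
        // "Save to all presets": iterate over every waveband (Figure 18,
        // left) and, per band, over the five presets (Figure 19).
        for (int band = 0; band < NUM_BANDS; ++band)
            for (int i = 0; i < NUM_PRESETS; ++i)
                radio.setPreset(band, i, testFreq(band, i));
        // "Restore all presets" and check each value against what was
        // saved (Figures 18, right, and 20).
        for (int band = 0; band < NUM_BANDS; ++band)
            for (int i = 0; i < NUM_PRESETS; ++i)
                TC_ASSERT("preset restored", radio.getPreset(band, i) == testFreq(band, i));
        return 0;
    }

Keeping each loop in its own sub-flowchart, as the figures do, means every level of the hierarchy stays simple enough to review at a glance; the nesting only becomes visible once the flow charts are converted to code.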

Converting Simulation Scripts into Model Driven Test Cases
A Model Driven Testing solution always offers the ability to execute several Test cases in batch mode, either interactively or using a script. However, it assumes that Test cases already exist.

In this section, we will discuss a common real-life scenario where simulation scripts are stretched to perform Testing, and how to convert them into “real” model-driven Test cases.

Figure 21: Textual script to drive testing

MDD solutions often offer scripting capabilities that allow for execution of designs in some sort of batch mode, and many use this capability to perform unit and regression Testing.

An example of such a script is given in Figure 21 above. As one can see, this script generates an evOnOff() to turn the radio on, tunes the station down to test the wrap-around behavior of the tuner, then turns it off using evOnOff(), and finally turns it on again using evOnOff(); the radio should then be tuned to the same station as the first time it was turned on.
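
The script in Figure 21 is not reproduced here, but its sequence of steps can be sketched as a small driver. The following is a hedged approximation in C++: the Radio stand-in, the FM band limits, and the power-up behavior are all assumptions chosen so the sketch matches the expectation described above.

    #include <cstdio>

    class Radio {  // stand-in for the generated design under simulation
        bool on = false;
        long freq = 0;
        static constexpr long BOTTOM = 87500, TOP = 108000, START = 87500;  // KHz, assumed
    public:
        void evOnOff() { on = !on; freq = on ? START : 0; }  // power-up tunes the start station
        void down() { freq = (freq <= BOTTOM) ? TOP : freq - 500; }  // wrap at band bottom
        long getFrequency() const { return freq; }
    };

    int main() {
        Radio radio;
        radio.evOnOff();                                       // turn the radio on
        std::printf("on:   %ld KHz\n", radio.getFrequency());
        radio.down();                                          // tune below the band bottom...
        std::printf("down: %ld KHz\n", radio.getFrequency());  // ...expect a wrap to the top
        radio.evOnOff();                                       // turn it off
        radio.evOnOff();                                       // and on again
        std::printf("on:   %ld KHz\n", radio.getFrequency());  // expect the original station
        // Note: no verdict is rendered here; someone has to read the
        // output and judge whether the behavior is correct.
        return 0;
    }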

Executing such a script often requires a person to physically observe the design execution and, based on this “visual inspection”, determine if the behavior of the design is right or wrong. In effect, it requires a human “inspector” to determine pass or fail, potentially allowing errors to go unnoticed until very late in the process.

As MDD offers the ability to generate an animated Sequence Diagram, one can execute the script while at the same time generating such a sequence diagram. In this case, executing the script in Figure 21 will automatically generate the animated sequence diagram depicted in Figure 22 below.

Figure 22: Automatically generated Animated Sequence Diagram to be converted into a Test case

At this point, the natural next step is to combine the script with the output that is shown in Figure 22, effectively creating a full Test case that includes the inputs (from the script) and the required outputs (from the animated sequence diagram).

In fact, this will also allow users to fine-tune the Test case by removing some messages that are not critical to the effectiveness of the Test case. For instance, in this case, the timeout messages (tm(20)) can be removed, and so can the down() messages.

Figure 23: Test case combining a script with its Animated Sequence Diagram

The new Test case that integrates the script and the animated sequence diagram is shown in Figure 23 above. This Test case drives the design with inputs like those in the script, but it also checks for all the other messages, each of them acting as a verdict criterion.

There is no need for visual inspection, and such a test can easily be included in a battery of tests that are executed periodically (e.g. overnight), as part of regular regression tests.

Figure 24: Interaction between legacy test scripts and SUT

Re-using Pre-Existing, Code-Based Test Cases
In the previous section we discussed the use of scripts that are part of the MDD environment, but there are many cases where legacy test scripts cannot be thrown away and need to be integrated into the Model Driven Testing environment. Here we will present an overall scheme where these test scripts can be included and executed as part of a Model Driven Testing environment. Let's look at Figure 24, above.

In this case we assume an existing test suite based on Python scripts, but the scheme can easily be applied to any other scripting language. The left lifeline represents the Python test script.

It specifies the inputs that are going to be sent to the SUT over time, and it specifies the checks that will be applied to verify the outputs of the SUT. In this scheme, the Python script calls operations on the Test Architecture (second lifeline), which contains the Test Components (third lifeline) and the SUT (right lifeline).

The Test Architecture forwards the inputs to the Test Components, and from there to the SUT; conversely, the Test Components pass the SUT outputs to the Test Architecture, and then to the Python script.

The actual test behavior is played out between the one (or more) Test Components and the SUT. The code for the Test Components is fairly straightforward, and its generation can therefore be automated.

In this case, the input operation f0() is called all the way from the Python script to the SUT, and the SUT calls output operation f1() on the Test Component, which notifies the Test Architecture, which notifies the Python script. As you can see, the operations can carry parameters in order to direct the calls to different port instances of the SUT.

In such a scheme, the Test Architecture serves as a proxy between the Python scripting universe and the model-level test environment (composed of Test Components and SUT). In effect, Python scripts co-exist and interact with Model Driven Testing, allowing one to re-use the tests that are captured in these legacy scripts.
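
A minimal sketch of this proxy scheme, on the model (C++) side, is given below. The class names, the f0()/f1() operations, and the callback standing in for the notification back to the Python script all come from the description above or are invented for illustration; they are not any tool's actual API.

    #include <cstdio>
    #include <functional>
    #include <utility>

    class SUT {
        std::function<void(int)> out;  // wired to the observing Test Component
    public:
        void connect(std::function<void(int)> cb) { out = std::move(cb); }
        void f0(int port) { f1(port); }            // stimulus in, response out
    private:
        void f1(int port) { if (out) out(port); }  // the SUT's output operation
    };

    class TestComponent {  // drives and observes the SUT
        SUT& sut;
        std::function<void(int)> notify;  // back towards the Test Architecture
    public:
        TestComponent(SUT& s, std::function<void(int)> n)
            : sut(s), notify(std::move(n)) {
            sut.connect([this](int p) { notify(p); });  // forward SUT outputs upwards
        }
        void f0(int port) { sut.f0(port); }  // forward inputs down to the SUT
    };

    class TestArchitecture {  // the proxy the Python script would call
        SUT sut;
        TestComponent tc;
    public:
        TestArchitecture()
            : tc(sut, [](int p) {  // notification that would reach the script
                  std::printf("script notified: f1() on port %d\n", p);
              }) {}
        void f0(int port) { tc.f0(port); }  // entry point exposed to the script
    };

    int main() {
        TestArchitecture arch;  // in reality, arch.f0() would be bound to Python
        arch.f0(1);             // the parameter directs the call to a port instance
        arch.f0(2);
        return 0;
    }

The important property is that the Python script only ever sees the TestArchitecture entry points; everything below it, including the Test Components and the SUT, remains inside the model-level test environment.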

Why move to Model Driven Testing?
Now that we have reviewed the Model Driven Testing process, let's summarize the key capabilities required to support this process.

1) Ability to trace and easily navigate between requirements, design artifacts, Test Architectures, Test cases and test execution reports, all from within a single browser.

2) Automatic generation of a graphical model of a Test Architecture.

3) Ability to automatically generate executable code from the Test Architecture.

4) Ability to capture the behavior of a Test case as plain code, hierarchical flow charts or sequence diagrams.

5) Ability to link Test cases, regardless of how they are captured, to their Test Architecture, and execute them.

6) Ability to re-use Sequence Diagrams that capture requirements as Test cases.

7) Ability to parameterize sequence diagrams for Testing purposes.

8) Ability to capture behaviors for stubs as part of a Test case, and automatically generate intelligent code that reflects the behavior that is required from the stubs.

9) Ability to use Test assertions to check return values, attributes, etc., as part of the verdict.

10) Easy navigation from the graphical representation of Test cases to their code representation.

11) Ability to execute Test cases, graphically monitor their progress and graphically identify causes of failures.

12) Easy way to capture Test cases by running simulations and converting the resulting animated sequence diagrams into Test cases.

13) Easy way to convert simulation scripts into Test cases.

14) Ability to re-use and integrate legacy Test scripts into a Model Driven Testing environment.

15) Ability to execute the same Test cases on the host development platform and on the target without modifying them.

The list is long, but the truth of the matter is that most of the capabilities listed above have an equivalent offering that is part of any Model Driven Development environment.

And due to the benefits gained from all these features, developers who have migrated from a code-centric approach to an MDD process have managed to significantly increase their productivity. It's time for testing people to increase their productivity by leveraging these benefits, too!

It should be no surprise that closing the productivity gap between our ability to produce large, complex designs and our ability (or rather lack thereof) to test them requires a transition from code-based testing to a Model Driven Testing process.

Fortunately, one can do this in steps; this is not an “all or nothing” kind of transition. However, be warned that not transitioning to a Model Driven Testing process is simply not an option.

To read Part 1, go to “The basics of the UML 2.0 software testing profile.”
To read Part 2, go to “Putting the UML Test Profile to Work.”

Moshe S. Cohen is Senior Director, Telelogic. He holds an EE and a Master's in Mathematics and Computer Sciences from Beer-Sheva University, Israel. He has over 20 years of experience in hardware, software and system design. From the beginning of his career, Mr. Cohen has been applying formal methods and modeling solutions to design and testing to develop robust systems and software. In his current position, Mr. Cohen is actively involved in defining Telelogic's testing strategy and solutions.
