
Transitioning from code-based to model-driven software testing: Part 1

With the emergence of the Unified Modeling Language (UML) standard, software designers now have the means to significantly improve their design productivity by transitioning from a code-based development process to a Model Driven Development (MDD) process. Engineers and developers can now deliver more complex, more intelligent designs in much less time than a code-based development process allows.

But software testing is another case altogether. The testing process is still largely code-based and never transitioned to a model-driven process. This has created a serious productivity gap between design and Testing, where complex and intelligent designs can be produced faster than our ability to test them. The only way to close this gap is by applying a Model Driven Testing process.

An MDD environment handles and manages many design artifacts such as Object Model Diagrams, Statecharts, Operations and the relationships between them. In comparison, a Testing environment handles and manages Testing artifacts, such as Test cases, TestBenches, and test results.

These two environments are too often separate, as the former is model centric while the latter is code centric. For example, in an MDD process designers can navigate from a use case in the design to the code that implements it, while testers can navigate from this code to its Test cases. But no one can navigate from the Test cases all the way back to the use cases, to check coverage for all the use cases.

Figure 1: Design productivity is dramatically higher than Testing productivity

While most of us describe Test cases by writing scripts (e.g. Python is a common scripting language for Testing), or simply in plain code, one needs to realize that developing Test cases is equivalent to developing code. But today's “do more with less” reality, which rarely provides the resources for developing, debugging and testing Test cases, requires that Test case development productivity be significantly improved.

Although many factors play into improving the Testing process, this article concentrates on the single most significant contribution to quality improvement: extending traditional code-centric Test case development into modeling Test Architectures and Test case behaviors, and how this helps you deliver high-quality Test cases with fewer resources.

In effect, this is similar to the evolution that occurred in the development process: transitioning from code-based to model-based development. But now we will apply this to the Testing process. This will be done in the context of the UML 2.0 Testing Profile, which seamlessly integrates into UML, enhancing it with concepts such as Test Architectures and Test Behaviors.

Test Architectures extend the existing UML 2.0 structural concepts to include the structural aspects of a test context, covering test components, the system under test and the relationships between them. Similarly, Test Behavior extends the existing UML 2.0 behavioral concepts to include the behavioral aspects of Testing, covering Test cases, test objectives and test verdicts.
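
To make this taxonomy concrete, here is a minimal Python sketch of how these profile concepts might map onto code. The class layout mirrors the profile's vocabulary, but the API itself is an illustrative assumption, not the profile's normative mapping:

from enum import Enum

class Verdict(Enum):
    """Test verdicts, as named by the UML 2.0 Testing Profile."""
    PASS = "pass"
    FAIL = "fail"
    INCONCLUSIVE = "inconclusive"

class TestContext:
    """Structural side: the test components, the SUT and their wiring."""
    def __init__(self, sut, components):
        self.sut = sut                 # the System Under Test
        self.components = components   # test components surrounding the SUT

class TestCase:
    """Behavioral side: a test objective plus the behavior that checks it."""
    def __init__(self, objective, behavior):
        self.objective = objective     # what this Test case is meant to verify
        self.behavior = behavior       # callable(TestContext) -> Verdict

    def run(self, context):
        return self.behavior(context)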

In effect, the UML 2.0 Testing profile not only offers a taxonomy for Testing artifacts, but it also does so in a way that extends and integrates well with the UML, offering a single environment where both design and Testing can co-exist.

Selecting the Right Test Case for the Job
A Test case applies stimuli to the System Under Test (SUT), and then assesses the SUT's responses to the stimuli to derive a verdict (such as Fail, Pass or Inconclusive).
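
In code terms, a Test case boils down to a stimulus/response check that ends in a verdict. A minimal Python sketch; the SUT's login interface here is purely hypothetical:

def check_login_rejection(sut):
    """Apply a stimulus to the SUT and derive a verdict from its response."""
    try:
        response = sut.login(user="alice", password="wrong")   # stimulus
    except ConnectionError:
        return "Inconclusive"   # SUT unreachable: no basis for a judgment
    if response.status == "rejected":                          # assess response
        return "Pass"
    return "Fail"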

But a Test case cannot run on its own; it has to execute in a certain context, which is what we often refer to as a TestBench, or a TestHarness. However, if a design is given as a model in an MDD environment, once the user specifies the scope of the SUT, the MDD environment can analyze the design and automatically produce a graphical model of the TestBench, known as a Test Architecture.
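
To give a feel for what such a generated context amounts to, here is a hypothetical Python sketch of a TestBench for an assumed design with a Controller SUT and a Sensor it talks to; in a real MDD tool this scaffolding would be generated, not hand-written:

class Controller:
    """Stands in for a design class; in a real flow this comes from the model."""
    def __init__(self, sensor):
        self.sensor = sensor
    def poll(self):
        self.sensor.emit("poll")

class SensorStub:
    """Test component: replaces the real Sensor the Controller talks to."""
    def __init__(self):
        self.received = []
    def emit(self, value):
        self.received.append(value)

class TestBench:
    """The generated context: instantiates the SUT and its test components."""
    def __init__(self):
        self.sensor = SensorStub()
        self.sut = Controller(sensor=self.sensor)   # SUT wired to its stub

    def run(self, test_case):
        """Execute one Test case inside this context; return its verdict."""
        return test_case(self.sut, self.sensor)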

This Test Architecture can then be used as the input for a code generation process, where the TestBench code is automatically generated from the graphical Test Architecture. Furthermore, one can compile the Test Architecture (or more specifically, the code that reflects the Test Architecture) and run it, but it will not execute any test; in order for it to perform any test, Test cases need to be defined.

Using Traditional Code/Flow Charts to Capture Behavior
As shown in Figure 2 below, the Test case behavior can be described using code, but also as a FlowChart or as a Sequence Diagram, all providing higher productivity than traditional coding.

Using code to describe a Test case is essentially the same as the current way of describing Test cases (coding), but with one difference: as shown in Figure 2 below, the Test case needs to focus only on the pure behavior of the Test case, just the stimuli and the expected results; the context in which the Test case executes has already been generated automatically.

For example, there is no need to instantiate the SUT classes or to create the Test components (the other classes that interact with the SUT and inherit from their respective design classes). This has already been done for us by automatically generating a graphical model of the Test Architecture and generating code for it.
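
A hedged sketch of such a “pure behavior” Test case, assuming the generated TestBench hands it a ready-made sut and a probe for observing outputs (both names are hypothetical):

def overload_raises_alarm(sut, probe):
    """Pure behavior: stimuli in, expected results out, nothing else.

    No SUT instantiation, no component wiring; the generated
    Test Architecture has already built all of that.
    """
    sut.apply_load(150)                       # stimulus: push past the limit
    if probe.last_event() != "ALARM":         # expected result
        return "Fail"
    sut.apply_load(50)                        # stimulus: return to normal
    if probe.last_event() != "ALARM_CLEARED":
        return "Fail"
    return "Pass"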

Figure 2: Using code to describe the “pure” behavior of a Test case

Capturing Test case behaviors in code and having them execute in an automatically generated Test Architecture is the most immediate way to leverage Model Driven Testing, with minimal risk and practically no learning curve. Another advantage of this approach is that it allows for easy re-use of existing code-based Test cases.

But as the logic of the Test case behavior is often non-trivial, we tend to sketch Test cases on the “back of a napkin” or other throwaway media as an informal flow chart.

However, since mapping a flow chart to code is relatively straightforward, Model Driven Testing environments do allow you to capture a Test case behavior as a flow chart, generate Test code from this flow chart, link it to the Test Architecture and run the Test.
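
The mapping is direct because each flow-chart element has an obvious code counterpart: an action node becomes a statement, a decision node becomes an if. A hypothetical example of the code a small Test flow chart might generate:

def generated_from_flowchart(sut):
    # Action node: apply the first stimulus
    reply = sut.send("CONNECT")
    # Decision node: branch on the SUT's response
    if reply != "ACK":
        return "Fail"          # the "no" edge leads straight to a Fail verdict
    # Action node on the "yes" branch: second stimulus
    reply = sut.send("DATA:42")
    # Decision node: final check before the verdict
    return "Pass" if reply == "OK" else "Fail"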

Describing Test cases as flow charts, as shown in Figure 3 below, has the same expressive power as coding, yet it is much easier to capture and to communicate to all of the project's stakeholders, from the Testing team to the developers in charge of developing the SUT, especially if it failed a Test.

Figure 3: Test Case Behavior described as Flow Chart / Activity Diagrams

Describing Test Case Behavior with Sequence Diagrams
Sequence diagrams offer a unique view of the design, one that is rarely used within the context of code-based Testing.

These diagrams can be used to describe operational scenarios between the overall system and the “actors” that interact with it, often referred to as a Black-Box sequence diagram.

In other cases, they may include details about the sequencing and exchange of messages between internal design components, in which case we will refer to them as White-Box sequence diagrams (some refer to them as Gray-Box, which is a subject for another article altogether).

During System level analysis, designers will identify many high-level requirements, and most of the behavioral ones will be described as Sequence Diagrams.

This forms the basis for a process where the System Analysts (preferably) will create many variants of the basic requirements, such as changing the order of some inputs, as well as “rainy day” permutations of the basic requirements.

Otherwise, the Testing people, who are (hopefully) domain experts, will have to do it. In effect, this process converts high-level requirements that are captured as Sequence Diagrams into concrete Test cases whose behaviors are captured as Sequence Diagrams as well.
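
Part of this variant creation can even be mechanized: reorderings of a basic input sequence are just permutations, and simple “rainy day” variants can be produced by dropping one message at a time. A minimal sketch of the idea (the message names are invented):

from itertools import permutations

basic_sequence = ["INSERT_CARD", "ENTER_PIN", "REQUEST_CASH"]

# "Sunny day" variants: every reordering of the basic inputs.
reorderings = [list(p) for p in permutations(basic_sequence)]

# Simple "rainy day" variants: the basic sequence with one message dropped.
rainy_day = [basic_sequence[:i] + basic_sequence[i + 1:]
             for i in range(len(basic_sequence))]

for variant in reorderings + rainy_day:
    print(variant)   # each variant is a candidate Test case sequence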

In theory, one could “look” at a sequence diagram that describes a Requirement and apply it interactively as a Test case. This requires injecting “inputs” into the SUT and checking the “outputs” to see if they match those defined in the sequence diagram.
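
Mechanically, that amounts to walking the diagram's messages in order: inject each message aimed at the SUT, and check each message expected back. A sketch, with the diagram flattened into (direction, message) pairs and a hypothetical SUT interface:

# A Requirement sequence diagram flattened into (direction, message) pairs.
diagram = [
    ("to_sut", "START"),
    ("from_sut", "READY"),
    ("to_sut", "STOP"),
    ("from_sut", "STOPPED"),
]

def apply_as_test_case(sut, diagram):
    for direction, message in diagram:
        if direction == "to_sut":
            sut.receive(message)               # inject the "input"
        elif sut.next_output() != message:     # observe and check the "output"
            return "Fail"
    return "Pass"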

As the Test case executes, one can generate an animated sequence diagram showing the interaction between the System Under Test (SUT) and other actors. One might expect the animated sequence diagrams to be “the same” as the Requirements sequence diagrams, but in reality, animated sequence diagrams are much more detailed.

In effect, the Requirements Sequence Diagrams should be a subset of the animated ones. Therefore, a simple comparison of Requirements sequence diagrams to the animated ones will not help in Testing whether the SUT meets the Requirements, as expressed in the Requirements Sequence Diagram.

For example, a system level sequence of inputs and expected outputs that can be described using 10 or so messages may turn into a detailed animated sequence diagram with hundreds of messages; trying to compare the two manually to check whether the requirements have been met is completely unrealistic.

Figure 4: Real Time comparison of Animated Sequence Diagram against Requirements Sequence Diagram

One solution often offered by MDD environments is to automate Test case execution. As shown in Figure 4 above, SUT inputs are generated by the MDD environment to drive the SUT, while its operation is observed and compared, in real time, to the interaction defined in the Requirements.

This entails many things, such as the ordering of messages, inputs and outputs, timing, and expected versus actual values for parameters of operation calls; all are compared in real time during the execution of the Test case to determine the verdict of the Test case execution.
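
A simplified sketch of such a comparator: it consumes trace events as they occur and checks that the Requirement's messages appear, in order and with the expected parameter values, somewhere within the much more detailed trace (timing checks are omitted here):

def compare_to_requirement(trace_events, required):
    """Pass if `required` occurs, in order, within the live trace.

    trace_events: iterable of (message, params) observed during execution.
    required: ordered (message, params) pairs from the Requirements diagram.
    A real comparator would also flag wrong values and timing violations.
    """
    expected = iter(required)
    current = next(expected, None)
    for event in trace_events:               # consumed as the Test executes
        if current is not None and event == current:
            current = next(expected, None)   # matched: advance to next message
    return "Pass" if current is None else "Fail"

print(compare_to_requirement(
    [("init", {}), ("START", {"mode": 1}), ("log", {}), ("READY", {})],
    [("START", {"mode": 1}), ("READY", {})]))   # -> Pass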

Another important source of Test cases whose behaviors are described as sequence diagrams is the actual execution of the design. As a design iteration is completed, and the development team is ready to start a new iteration addressing another set of use cases (i.e. adding new functionality), it is good practice to execute the design in its current state.

This allows Testers and developers to capture the actual interaction between design components as a sequence diagram. This is a very useful source of Test cases, often used for Regression Testing purposes. These Tests are easily generated, effectively turning the captured interactions into Requirements for regression Testing, making sure that new design iterations do not break prior functionality.
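
A hedged sketch of this record-and-replay idea: capture the observed interaction once as a baseline, then compare each later iteration's trace against it (the trace encoding is an assumption):

import json

def record_baseline(trace, path="baseline_trace.json"):
    """Capture the observed interaction as the regression baseline."""
    with open(path, "w") as f:
        json.dump(trace, f)

def regression_verdict(trace, path="baseline_trace.json"):
    """Pass if the new iteration reproduces the recorded interaction."""
    with open(path) as f:
        baseline = json.load(f)
    return "Pass" if trace == baseline else "Fail"

# First iteration: record.  Later iterations: compare.
record_baseline([["START", "Controller"], ["READY", "Sensor"]])
print(regression_verdict([["START", "Controller"], ["READY", "Sensor"]]))  # Pass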

Next in Part 2: Putting the UML Test Profile to Work

Moshe S. Cohen is Senior Director, Telelogic. He holds an EE and a Master's in Mathematics and Computer Sciences from Beer-Sheva University, Israel. He has over 20 years of experience in hardware, software and system design. From the beginning of his career, Mr. Cohen has been applying formal methods and modeling solutions to design and testing to develop robust systems and software. In his current position, Mr. Cohen is actively involved in defining Telelogic's testing strategy and solutions.
