
Using model-driven development for agile embedded apps: Part 3 – Action models & model translation

As used in model-driven design, actions are primitive statements such as augmenting a variable, creating an instance, or sending a message. Actions show up in three primary places in a UML model: as actions on a state machine, as actions in an activity diagram, or as statements in a method body.

The OMG specifies action semantics (that is, the kinds of things that an action language must be able to state) but does not specify an action language. An action language meets the required action semantics but also provides a syntax.

What most people mean when they use the term action language is an abstract action language: a language that is not meant to be directly compiled into executable object code but rather is intended to be translated into a concrete action language for compilation.

As it happens, any third-generation language (3GL) such as C, C++, Java, or Ada can be used to express the action semantics for a UML model. The OMG does not currently define an action language; there is some work under way in this regard, but nothing is expected to emerge until late 2009 or possibly later.
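For example, using C++ as the concrete action language, the primitive actions mentioned earlier (augmenting a variable, creating an instance, sending a message) simply appear in a transition's action body as ordinary C++ statements. The sketch below is purely illustrative; the SensorClient, Sample, and Server names are hypothetical and are not part of the Waveform example that appears later in this article.

#include <memory>

// Hypothetical PIM-level classes; all names here are illustrative only.
struct Sample {
    explicit Sample(int v) : value(v) {}
    int value;
};

struct Server {
    void process(const Sample& s) { (void)s; /* application-specific behavior */ }
};

class SensorClient {
public:
    explicit SensorClient(Server* server) : itsServer(server), msgCount(0) {}

    // The body of a state-machine transition action, written directly in C++
    // (the concrete action language):
    void onDataReady(int rawValue) {
        ++msgCount;                                   // augment a variable
        auto s = std::make_unique<Sample>(rawValue);  // create an instance
        itsServer->process(*s);                       // send a message (here, a synchronous call)
    }

private:
    Server* itsServer;
    int     msgCount;
};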

The vast majority of modelers use, and prefer to use, the same concrete language in their models as they plan to use for the implementation. However, there are some advantages to using an abstract action language. If you plan to implement the model in multiple concrete languages, or need to allow for the possibility of reimplementing the model in a different concrete language, then an abstract action language provides some real benefit.

Abstract action languages are not without their drawbacks. First, for right now, they are vendor-specific and will be until and unless the OMG releases a standard. That ties your models to a specific tool, even though UML models may be exchanged if the tools adhere to the XMI standard (discussed in the upcoming “XMI” section). Second, many developers don’t want to learn another language at the same level of abstraction as the implementation language in which they are already experts.

An even more serious issue is the difficulty of debugging. If you discover a problem in the PSI, you can’t change the PSI and import those changes into the model, because the actions are specified in a different language. You need to fix the abstract action language and then forward-engineer to test the code. There may not be an obvious mapping between the abstract action language and the implementation language, so you will have to think about how to cast the problem in the action language to produce the desired result in the implementation language.

Since the two languages are at the same level of abstraction, this may be more trouble than it’s worth. Last, the most serious complaint is that unless the action language includes a decompiler (a translator from the implementation language back to the action language), you can never touch the PSI without breaking the connection between the model and the code. While forward engineering (generation of code from models) is the primary workflow that must be supported, it seems draconian, as well as unnecessary, to completely disallow the reverse workflow.

As I have mentioned, most developers using MDA today use the target implementation language as a concrete action language in the model. This does mean that if the model must be retargeted toward a different implementation language there is some work to do, but using a concrete action language eases debugging while simultaneously enabling reverse engineering when necessary.

Model Transformation

With all these models around, how does one ensure consistency? MDA approaches this problem through model transformation. Models may be transformed manually, to be sure, but emphasis is placed on automating these transformations as much as possible. Primarily, models are forward-transformed (CIM to PIM to PSM to PSI), but sometimes backward transformations are performed as well.

There are many ways these model transformations can be done. The most useful are by metamodel mapping, by marking and transforming the model, and by elaborating the model with design patterns.

Metamodel Transformations

Metamodel transformations are done by creating PIM- and PSM-specific metamodels and transforming a metaclass from one into the metaclass for the other. The basic idea is shown in Figure 2.8. The PIM is captured in a domain-specific modeling language (most likely a UML profile) that contains domain concepts for the platform-independent application semantics. The PSM is defined using a different metamodel (also likely a UML profile). The mapping rules identify which metamodel elements in the PSM are created from which metamodel elements in the PIM.

Figure 2.8 Metamodel transformations

The disadvantage of this approach is the work involved in creating two different metamodels and the mapping rules that define the transformation between them, as well as constructing the translator. The advantage is that once this work is done, many models can be transformed easily. This approach is not used as often as the other approaches discussed here.

Another approach is to mark one model with “design hints” and use a translator that uses those design hints to create the PSM by applying transformational rules to the marked elements (see Figure 2.9). This is very similar to the metamodel transformational approach just discussed but is less work to implement. In this approach, marks are added to the source model (usually the PIM).

These marks are almost always either stereotypes (such as the «trace» and «active» stereotypes mentioned previously), tags (user-defined name-value pairs), or constraints (user-defined “well-formedness” rules). These three elements form the lightweight extension mechanisms in the UML used to define profiles, a topic that will be discussed soon.

For example, some classes in the PIM might be marked «distributed», and the translator might generate CORBA or DDS (Data Distribution Service) interface description language (IDL) for the marked elements. Or an association on a class might be marked «IPC» to indicate interprocess communication.

Figure 2.9 MDA model transformations

The third, and most common, approach for generating the PSM is through the manual or semiautomated application of design patterns (see Figure 2.10). A design pattern is a generalized solution to a commonly recurring problem; that is, to be a design pattern, it must be generalizable and must still make sense when the specifics of its application are removed. It must also address a concern that reappears in a variety of contexts; that is, it must be reusable.


Figure 2.10 Design pattern model transformations

Another useful definition for a design pattern is that it is a parameterized collaboration. It is a set of collaborating object roles, some of which are the formal parameters of the pattern. These parameters are object roles typed by classes that will be replaced by classes from the PIM. The process of substituting the actual parameters (classes from the PIM) for the formal parameters (classes in the pattern) is called pattern instantiation.
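One rough way to picture a parameterized collaboration in C++ terms is as a class template: the template parameter plays the role of a formal pattern parameter, and binding it to a class from the PIM corresponds to pattern instantiation. This is only an analogy (real pattern instantiation binds object roles, not just types), and the Container and DataElement names below are illustrative.

#include <cstddef>
#include <vector>

// The pattern as a parameterized collaboration, pictured as a C++ class
// template: Element is a formal parameter, a role to be filled by a PIM class.
template <typename Element>
class Container {
public:
    void add(Element* e)              { items.push_back(e); }
    std::size_t size() const          { return items.size(); }
    Element* at(std::size_t i) const  { return items[i]; }
private:
    std::vector<Element*> items;
};

// A (hypothetical) class from the PIM.
class DataElement { /* application attributes and operations */ };

// "Pattern instantiation": the actual parameter DataElement is substituted
// for the formal parameter Element.
Container<DataElement> elements;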

A design pattern has a number of important aspects. The problem is a statement of the goals of the design pattern, specifically: What design problem does the pattern address? The applicability defines the environmental or operational conditions under which the pattern applies, because ultimately these decide whether the pattern is an appropriate choice.

The consequences are a set of benefits and costs of using the pattern. Because design (and design patterns) is all about optimality, whenever you optimize one aspect, you de-optimize some other. The consequences enable the developer to make good pattern selections.

The application of design patterns can be automated by tools. For example, the Rhapsody tool (from IBM Rational) applies a number of design patterns automatically (although this can be configured). Some of the design patterns Rhapsody automatically implements include the following:

  • Container-Iterator Pattern
  • Event Queue Pattern
  • Guarded Call Pattern
  • State Pattern
  • CORBA Broker Pattern (supports CORBA, COM [Component Object Model], and DCOM [Distributed Component Object Model])
  • Data Bus Pattern (supports DDS)

For example, for the classes in Figure 2.11 below, code can be generated that uses built-in or Standard Template Library (STL) containers and iterators to manage the collection. This is done automatically by Rhapsody and serves as a useful, but simple, example of design pattern automation.


Figure 2.11 Container-Iterator Pattern example

The header file for the Waveform class is shown in Listing 2.1 below, and the implementation file is shown in Listing 2.2. You can see that OMCollection and OMIterator are added to manage the collection of DataElements.


Listing 2.1: Waveform.h

#ifndef Waveform_H
#define Waveform_H

//## auto_generated
#include <oxf/oxf.h>      // framework header (include target assumed; stripped in the original listing)
//## auto_generated
#include "pattern.h"      // package header (include target assumed; stripped in the original listing)
//## link itsDataElement
class DataElement;

//## package pattern

//## class Waveform
class Waveform {
    //// Constructors and destructors ////

public :

    //## auto_generated
    Waveform();

    //## auto_generated
    ~Waveform();

    //// Additional operations ////

    //## auto_generated
    OMIterator<DataElement*> getItsDataElement() const;

    //## auto_generated
    void addItsDataElement(DataElement* p_DataElement);

    //## auto_generated
    void removeItsDataElement(DataElement* p_DataElement);

    //## auto_generated
    void clearItsDataElement();

protected :

    //## auto_generated
    void cleanUpRelations();

    //// Relations and components ////

    OMCollection<DataElement*> itsDataElement;    //## link itsDataElement

    //// Framework operations ////

public :

    //## auto_generated
    void _addItsDataElement(DataElement* p_DataElement);

    //## auto_generated
    void _removeItsDataElement(DataElement* p_DataElement);

    //## auto_generated
    void _clearItsDataElement();

};

#endif


Listing 2.2: Waveform.cpp

//## auto_generated
#include "Waveform.h"
//## link itsDataElement
#include "DataElement.h"
//## package pattern

//## class Waveform
Waveform::Waveform() {
}

Waveform::~Waveform() {
    cleanUpRelations();
}

OMIterator<DataElement*> Waveform::getItsDataElement() const {
    OMIterator<DataElement*> iter(itsDataElement);
    return iter;
}

void Waveform::addItsDataElement(DataElement* p_DataElement) {
    if (p_DataElement != NULL) {
        p_DataElement->_setItsWaveform(this);
    }
    _addItsDataElement(p_DataElement);
}

void Waveform::removeItsDataElement(DataElement* p_DataElement) {
    if (p_DataElement != NULL) {
        p_DataElement->__setItsWaveform(NULL);
    }
    _removeItsDataElement(p_DataElement);
}

void Waveform::clearItsDataElement() {
    OMIterator<DataElement*> iter(itsDataElement);
    while (*iter) {
        (*iter)->_clearItsWaveform();
        iter++;
    }
    _clearItsDataElement();
}

void Waveform::cleanUpRelations() {
    {
        OMIterator<DataElement*> iter(itsDataElement);
        while (*iter) {
            Waveform* p_Waveform = (*iter)->getItsWaveform();
            if (p_Waveform != NULL) {
                (*iter)->__setItsWaveform(NULL);
            }
            iter++;
        }
        itsDataElement.removeAll();
    }
}

void Waveform::_addItsDataElement(DataElement* p_DataElement) {
    itsDataElement.add(p_DataElement);
}

void Waveform::_removeItsDataElement(DataElement* p_DataElement) {
    itsDataElement.remove(p_DataElement);
}

void Waveform::_clearItsDataElement() {
    itsDataElement.removeAll();
}
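As a rough sketch of how this generated pattern code might be exercised, the fragment below creates a Waveform, links a couple of DataElements, and walks the collection through the generated iterator. It assumes a DataElement class providing the reciprocal _setItsWaveform(), __setItsWaveform(), getItsWaveform(), and _clearItsWaveform() operations implied by Listing 2.2; that class is not shown here, and the exerciseWaveform() function is hypothetical.

#include "Waveform.h"
#include "DataElement.h"

void exerciseWaveform() {
    Waveform w;
    DataElement* e1 = new DataElement;
    DataElement* e2 = new DataElement;

    w.addItsDataElement(e1);    // sets e1's back-pointer, then stores it
    w.addItsDataElement(e2);

    // Walk the collection through the generated Container-Iterator interface.
    OMIterator<DataElement*> it = w.getItsDataElement();
    while (*it) {
        // ... use (*it) ...
        it++;
    }

    w.removeItsDataElement(e1); // clears e1's back-pointer, then removes it
    delete e1;

    w.clearItsDataElement();    // unlinks any remaining elements
    delete e2;
}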


It should be noted that sometimes models may be only implicitly created. For example, Rhapsody can generate the PSI directly from a PIM through the application of design patterns. During this transformation (called “code generation”), Rhapsody internally generates the PSM (which it calls the “simplified model”) and then generates code from that PSM. Normally, this PSM is not exposed to the modeler, but it can be, if the developer wants to see, store, or manipulate it. For example, in the previous code samples, the PSM is not explicitly exposed.

It is also common to manually elaborate the PIM into the PSM by adding design patterns by hand; this is often necessary because some patterns are difficult to automate. In general, I recommend a combination of automated and manual transformations to create the PSM.
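As a simple illustration of hand-applied elaboration, the sketch below wraps a PIM-level class with a mutex in the spirit of the Guarded Call pattern listed earlier. The SensorData and GuardedSensorData names and the use of std::mutex are assumptions made for the sketch; they are not the output of any particular tool.

#include <mutex>

// PIM-level class: no concurrency protection is specified in the analysis model.
class SensorData {
public:
    void setValue(int v)  { value = v; }
    int  getValue() const { return value; }
private:
    int value = 0;
};

// PSM-level elaboration: a Guarded Call wrapper added by hand, serializing
// access from multiple concurrency units.
class GuardedSensorData {
public:
    void setValue(int v) {
        std::lock_guard<std::mutex> lock(guard);
        data.setValue(v);
    }
    int getValue() const {
        std::lock_guard<std::mutex> lock(guard);
        return data.getValue();
    }
private:
    SensorData data;
    mutable std::mutex guard;
};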

Common Model Transformations

A number of specific model transformations are commonly done as development work progresses. The MDA specification and usage focus on the PIM-to-PSM translation, but there are several others as well.

CIM to PIM

This transformation is so common that it is a basic part of the development process. The CIM captures and organizes the requirements into use cases and related forms (e.g., sequence diagrams, state machines, activity diagrams, and constraints). The analysis model, or PIM, identifies the essential structural elements and relations necessary to perform the semantic computations of the application. This is commonly called “realizing the use case.”

This transformation is difficult to automate, so virtually everyone does it through an object discovery procedure. Development processes differ in how this procedure is performed. In the Harmony/ESW process, a technique called object identification strategies is used to ferret out the essential elements of the PIM. Continual execution is also used to ensure that we’ve done a good job.

PIM to PIM

PIM-to-PIM mappings are usually ones of refinement in which more details are added to a more general PIM. This most commonly occurs when it is desirable to create a family of PSMs that have common design properties; in that case, the common design properties can be put in a refined PIM even though they properly belong in a PSM. For example, a PIM might include a common subsystem or concurrency architecture structure, even though the details of how the subsystems are deployed and how the concurrency units are implemented may be omitted from the refined PIM.

PIM to PSM

This is probably the most common focus in MDA. It is traditionally the mapping from the essential analysis model to the platform-specific design model. In the Harmony/ESW process, this is done at the three levels of design, but MDA focuses mostly on the architectural views. To this end, Harmony/ESW defines five key views of architecture that organize and orchestrate the elements of the PIM within the PSM.

PSM to PIM

This “backward” mapping is largely a matter of mining an existing design for essential elements or stripping out platform, technology, and design patterns to uncover the essential model hiding within the PSM. This transformation is useful when moving from an existing system design to a family of products. It is often used in conjunction with the PSI-to-PSM (reverse-engineering) transformation to identify the essential elements hidden within an existing code base. This can be an essential part of refactoring a model when the PIM doesn’t exist.

PSM to PSI

Other names for this transformation are “code generation” and “model compiling.” A number of modeling tools can generate code from the model. Tools can be classified into three primary types in this regard. Nongenerative tools don’t generate any code from the model; such tools are commonly known as “drawing tools.” The second category of tools generates code frames; these tools generate classes and class features (e.g., attributes and empty operations for the developer to elaborate) but don’t deal with state machines or activity diagrams. The final category contains so-called behavioral tools, which generate code from the behavioral specifications (i.e., state machines and activity diagrams) as well as from structural elements. This last category of tools tends to be the most capable but also the most expensive.
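To make the “code frames” category concrete, a frame-generating tool working from a class with two attributes and one operation might emit a skeleton along the lines of the sketch below, leaving the operation body for the developer to fill in. The Thermostat class and its members are hypothetical.

// Hypothetical frame that a "code frames" tool might generate from a class
// named Thermostat with attributes setPoint and currentTemp and an
// operation regulate().
class Thermostat {
public:
    double getSetPoint() const   { return setPoint; }
    void   setSetPoint(double v) { setPoint = v; }

    // Empty operation: the frame generator emits the signature only; the
    // developer elaborates the body (a behavioral tool would also generate
    // any state machine behavior).
    void regulate() {
        // TODO: developer-supplied behavior
    }

private:
    double setPoint    = 0.0;
    double currentTemp = 0.0;
};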

Within the generative tools, a number of different techniques are employed for generating the code. Two primary kinds are rule-based tools and property-based tools. Rule-based tools use a set of rules, captured in some human-readable (and proprietary) form, to model the code generation. A translator (compiler) applies these rules to construct the output code.

For example, one rule might be that when an association end is encountered in a class, it is implemented as a pointer whose name is the role end label. Property-based tools are model compilers that provide a set of user-defined properties to tune the code generation. These tools are usually somewhat less flexible than rule-based tools but are usually far easier to use.
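The association-end rule mentioned above might play out roughly as in the sketch below: a to-one association end labeled itsSensor becomes a pointer with that name, and a to-many end becomes a collection. The Controller, Sensor, and Alarm classes are hypothetical, and std::vector stands in for whatever container the tool’s rules actually select.

#include <vector>

class Sensor;   // hypothetical PIM classes
class Alarm;

class Controller {
private:
    // Rule applied to a to-one association end: implement it as a pointer
    // whose name is the role end label.
    Sensor* itsSensor = nullptr;

    // Rule applied to a to-many ("*") association end: implement it as a
    // collection of pointers, again named after the role end label.
    std::vector<Alarm*> itsAlarm;
};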

PIM to PSI

It is possible for the model translator to skip the step of producing the PSM and directly output the PSI. It is common for this translation to take place internally in multiple phases, one of which constructs an interim PSM, but that intermediate PSM is thrown away.

This is a viable strategy for production of the final design, but it is also extremely useful for testing and debugging the PIM before design is even begun. This is a common way to work with Rhapsody, for example. The code generators in Rhapsody have built-in design rules out of the box, so partially complete PIMs may be executed by generating the code, compiling it, and running the generated executable.

In this case, the default design decisions are not really exposed to the user, and they are, in some sense, the simplest design choices. These decisions work fine for the purpose of testing and debugging the PIM and are usually changed and elaborated during the design phase.

PSI to PSM

The PSI-to-PSM model transformation can be done in two primary ways: reverse engineering and round-trip engineering. Reverse engineering occurs when you construct a PSM from a code base for which no model exists. Round-trip engineering occurs when minor modifications are made to the code generated from a model. Reverse engineering is normally done incrementally, a piece at a time, but still only once for a given code base. Round-trip engineering is done more frequently, as developers modify the generated code.

Reverse engineering is extraordinarily helpful, but it is not without problems. The first problem is that many times the developers are not satisfied with the generated model because, for perhaps the very first time, they can directly see the design.

Often, that design isn’t very good; hence the consternation. In addition, some aspects of the model may not be easy to generate. For example, while it is rather easy to generate the structural model from the code, few reverse-engineering tools identify and construct state machines from the underlying code. It is a difficult problem, in general, because there are so many different ways that state machines may be implemented.
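To see why, consider that even a trivial two-state machine can be coded in quite different styles, for example as a switch over an enumerated state variable or as a transition table; a reverse-engineering tool would have to recognize both (and many other idioms) as the same state machine. Both fragments below are illustrative sketches.

// Style 1: switch over an enumerated state variable.
enum State { OFF, ON };

struct SwitchStyle {
    State state = OFF;
    void onToggle() {
        switch (state) {
            case OFF: state = ON;  break;
            case ON:  state = OFF; break;
        }
    }
};

// Style 2: the same machine encoded as a transition table.
struct TableStyle {
    State state = OFF;
    void onToggle() {
        static const State next[] = { ON, OFF };  // indexed by the current state
        state = next[state];
    }
};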

Another problem can arise if the code generated forward from the constructed model doesn’t match the original code because of differences in the translation rules. To be correct, the original and subsequently generated code must be functionally equivalent, but there is often a desire to have them look the same as well. Some reverse-engineering tools fare better than others on this score.

In general, I recommend an incremental approach to reverse-engineering a code base:

1. First, in code, identify and separate the large-scale architectural pieces of the system.
2. Reverse-engineer one architectural element:
    a. Integrate that element back with the remaining code, including the forward-engineered code from the reverse-engineered model.
    b. Validate the resulting code.
3. Reverse-engineer the next architectural element.

The basic practice of agile methods is to develop in small incremental steps and validate that the resulting code works before going on. This practice is very successful with reverse-engineering code bases.

Round-trip engineering is a much simpler problem because the model already exists. Most code generators mark the places where developers are allowed or not allowed to make changes in the code, inserting markers to facilitate round-trip engineering.
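Such markers typically look something like the sketch below: annotated regions that the developer may edit and that survive regeneration, while everything outside them is owned by the generator. The exact marker syntax varies from tool to tool; the //#[ ... //#] form here is only indicative of the style, and the Motor class is hypothetical.

// Sketch of a generated operation with round-trip markers (marker syntax
// varies by tool; //#[ ... //#] is only indicative of the style).
class Motor {                  // hypothetical generated class
public:
    void start();
};

void Motor::start() {
    //#[ operation start()     <- begin developer-editable region
    // Hand-written code placed here is preserved across regeneration and
    // can be harvested back into the model by round-trip engineering.
    //#]                       <- end developer-editable region
}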

The advantage of this approach is that the developer can work in the code when that is appropriate without breaking the connection between the model and the code. Some tools allow only forward code generation and do not support round-trip engineering.

Although I agree that code should primarily be forward-generated, I believe that completely disallowing modifications to the source code is a draconian measure best avoided.

Next in Part 4: How MDA enhances agile design.
To read Part 1, go to “Why model and why MDA?”
To read Part 2, go to “Key concepts of MDA.”

Used with the permission of the publisher, Addison-Wesley, an imprint of Pearson Higher Education, this series of articles is based on material from “Real-Time Agility” by Bruce Powel Douglass.

Bruce Powel Douglass has worked as a software developer in real-time systems for over 25 years and is a well-known speaker, author, and consultant in the area of real-time embedded systems. He is on the Advisory Board of the Embedded Systems Conference, where he has taught courses in software estimation and scheduling, project management, object-oriented analysis and design, communications protocols, finite state machines, design patterns, and safety-critical systems design. He develops and teaches courses and consults in real-time object-oriented analysis and design and project management, and has done so for many years. He has authored articles for many journals and periodicals, especially in the real-time domain.

He is the Chief Evangelist for Rational/IBM, a leading producer of tools for software and systems development. Bruce worked with various UML partners on the specification of the UML, both versions 1 and 2. He is a former co-chair of the Object Management Group’s Real-Time Analysis and Design Working Group. He is the author of several other books on software, including Doing Hard Time: Developing Real-Time Systems with UML, Objects, Frameworks and Patterns (Addison-Wesley, 1999), Real-Time Design Patterns: Robust Scalable Architecture for Real-Time Systems (Addison-Wesley, 2002), Real-Time UML, 3rd Edition: Advances in the UML for Real-Time Systems (Addison-Wesley, 2004), Real-Time UML Workshop for Embedded Systems (Elsevier Press, 2006), and several others, including a short textbook on table tennis.
