
DSM in home automation network design: Part 1 – Building a model-based language

Editor’s Note: Excerpted from their book Domain-Specific Modeling: Enabling Full Code Generation, the authors use a fictitious machine-to-machine-based home automation model to demonstrate the advantages and pitfalls of domain-specific modeling (DSM). Part 1 focuses on the initial steps in creating an application-specific home automation networking language.

In this series of articles, we will look at an example of a Domain-Specific Modeling (DSM) solution for an embedded system that works closely with low-level hardware concepts. It uses the experiences of a company which we will call Domatic. While the DSM solution itself succeeded in its aims, this case encountered a number of difficulties. Rather than only providing examples of the triumphal march of DSM, we hope that looking honestly at these problems will prove useful in helping you to avoid them. Names and minor details have been changed to protect the innocent.

Domatic worked as a co-manufacturer and solution provider, producing a variety of hardware and software products. The focus was on M2M (Machine-to-Machine) communication, applied to domains including energy, home automation, telecommunications, and transport.

Domatic wanted to investigate DSM, to see if and where it could be applied in their work. They were looking for higher levels of productivity through automating parts of the production of software and configuration information. As Domatic had no experience of DSM, and indeed little of any kind of modeling, they engaged a consultant from a company experienced in DSM to help them perform a proof of concept. As an example domain they chose an existing home automation system. Although there were no plans to build a large range of new variant systems in that domain, a few variants already existed, so it seemed a good candidate domain for DSM.

Target environment and platform
The home automation system chosen for the proof of concept offered control of a range of devices including heating, air conditioning, lights, and security. The focus for the proof of concept was on a telecom module, which allowed remote control of the system over a phone line. In addition to remote control, the telecom module also allowed the remote update of its software, and commands from the main home automation system to dial out to a remote number to report alarms or log other data. The module had already been designed and built, and a few variants of it had been made as part of products for different clients.

The telecom module was operated remotely by a normal call from a phone. The module used voice menus to provide information and offer the user choices, which he could activate by pressing buttons on the phone keypad. The module used a standard telecom chipset to recognize the frequencies of the DTMF tones and translate them back into the simpler form of which button had been pressed.

The voice menus used real speech, sampled and stored in the module. As this was an embedded device, the speech was broken down into reusable sampled units of words or phrases to save memory. An actual sentence was played back as a sequence of these samples.

Clients supplied a sketch of the desired voice menu, for example in simple flow charts. These were fleshed out by Domatic into a spreadsheet format which added the technical details. For instance, sentences were broken down into sample units, and the choices were implemented as jumps to another row in the spreadsheet. Each row of the spreadsheet represented a certain memory address containing one primitive command: play a certain voice sample, jump to a certain memory address, assign a value to a register, and so on. Listing 1 shows the spreadsheet for a loop that reads out all five modes in the system, and tells the user which button to press for each.

Listing 1: Spreadsheet to read out the list of modes

Address    Command        Argument
00A1       Load A         00
00A3       Add A          01
00A5       Say            'For'
00AE       SayMode A
00AF       Say            'press'
00B8       SayNumber A
00B9       Test A <       05
00BB       IfNot
00BC       Jump           00A3

As the listing shows, the spreadsheet forms an assembly language program. An in-house assembler processed the spreadsheet into a binary file that implemented the program, running on an 8-bit microprocessor. As opposed to third-generation programming languages such as C or Java, assembly languages are specific to a given microprocessor, and sometimes also to a lesser extent to a given domain of use. This in-house assembly language included a variety of “Say” commands, which would play a sample. Most samples were specified simply by memory address index and length: the actual samples were burned to an EEPROM. For some frequently used samples, a specific shorter command could be used, for example, “SayNumber B” to play the sample corresponding to the value of register B: “one” for 1 and so on.
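To make the assembler's job concrete, here is a minimal sketch of the spreadsheet-to-binary step in Python. The opcode values and row format are assumptions for illustration only; the real in-house encoding was proprietary and is not described in the text.

```python
# Minimal sketch of an assembler pass over spreadsheet rows.
# Opcode values are hypothetical; the real encoding was proprietary.
OPCODES = {"Load": 0x01, "Add": 0x02, "Say": 0x10,
           "SayMode": 0x11, "SayNumber": 0x12,
           "Test": 0x20, "IfNot": 0x21, "Jump": 0x30}

def assemble(rows):
    """Each row is (address, command, argument); emit (address, opcode, argument)."""
    binary = []
    for address, command, arg in rows:
        mnemonic = command.split()[0]      # e.g. "Load A" -> "Load"
        opcode = OPCODES[mnemonic]
        binary.append((int(address, 16), opcode, arg))
    return binary

# A few rows from Listing 1:
program = [("00A1", "Load A", "00"),
           ("00A3", "Add A", "01"),
           ("00BC", "Jump", "00A3")]
print(assemble(program))
```

The sketch keeps the spreadsheet's one-row-per-command structure: each row becomes one address/opcode/argument triple, which is why the spreadsheet could double as the program listing.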

DSM Objectives. Unlike other cases, there were no clear objectives for a DSM solution in this domain. The main goal was to use this example to examine the applicability of DSM in low-level embedded software development in general. As Domatic produced solutions based on other companies’ requests, the actual domains varied with each new customer. An important goal was therefore the ability to quickly create a new DSM solution, including the modeling language, generator, and tool support.

Domatic used no specific method for software development. Their developers would sometimes draw simple flow charts or state diagrams, either before or after they wrote the code. Reuse of code from older projects followed the “industry standard” practice of simply copying whole code files and changing parts. Recognizing the problems inherent in this approach, Domatic hoped that DSM solutions would increase the consistency of their software development and the reusability of designs and code.

As their current development relied largely on ad hoc or post hoc documentation and testing, Domatic were also interested in the fact that DSM models were at a high enough level of abstraction to serve as a communication medium with clients. The models could serve as the formal requirements specification, and at the same time as internal design documentation. Through code generation, the models could also be immediately tested.

Starting the development process
The DSM solution was developed in MetaEdit+ 3.0 by a consultant from MetaCase and an expert developer from Domatic. The consultant supplied the DSM know-how and actually built the metamodel and generators. Domatic supplied the understanding of the domain and the required code, and also made an extension to their spreadsheet assembler. The development of the DSM solution set out to follow the process as a proof of concept. As we shall see, however, not all went according to plan.

Before the workshop and Day 1. Domatic had supplied the consultant with material about their domain and language three weeks before the workshop. The material covered the whole home automation domain, focusing on the telecom module. A week before the workshop they emphasized a particular description of the whole home automation system and how it interacted with its sensors, actuators, keypad, screen, data modem, and DTMF voice control.

The first day of the workshop was spent building up a shared picture of this wider domain, resulting in a modeling language containing concepts like sensors and actuators. By the end of the first day, it was apparent that this language was too generic. Just knowing that there is a sensor called “smoke detector” connected to the system, and an actuator called “fire alarm”, is not enough to generate meaningful code. The language would thus be useful for describing whole systems, and possibly for configuration, but not for demonstrating DSM with full code generation.

Day 2: If at first you don’t succeed… The second day of the workshop had the hard deadline of a meeting at 1 p.m. to present the results to management. The first part of the morning was spent establishing the area of the domain to be covered. Rather than focusing on the boundary, which is hard to lay down precisely at an early stage, the consultant and Domatic experts identified the central concepts of the domain. The modeling language had to be able to specify a section of voice output built up from text fragments, and a choice based on DTMF input. A small modeling language for this was built in MetaEdit+ in 25 minutes, 10:40–11:05.

Using this VoiceMenu modeling language, Domatic built a small example model and sketched the corresponding code. As both the modeling language and the assembly language were specific to the same narrow domain of Domatic’s telecom module, there was a good correspondence between model elements and lines or blocks of code. The consultant could thus build a basic code generator for the skeleton modeling language in 10–15 minutes.
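The good correspondence between model elements and blocks of code is what made a generator feasible in minutes. A minimal sketch of that mapping in Python, assuming a simple dictionary representation of model elements and hypothetical output commands (the real generator was built inside MetaEdit+, and the real element types differ):

```python
# Sketch of a model-to-assembler generator: walk the model from the
# first element and emit one block of in-house assembler per element.
# Element structure and the "WaitDTMF" command are illustrative assumptions.
def generate(element, lines=None):
    if lines is None:
        lines = []
    kind = element["kind"]
    if kind == "VoiceOutput":
        for fragment in element["data"]:   # one Say per sampled fragment
            lines.append(f"Say '{fragment}'")
    elif kind == "DTMF_Input":
        lines.append("WaitDTMF")           # hypothetical wait-for-keypress command
    nxt = element.get("next")
    if nxt:
        generate(nxt, lines)
    return lines

menu = {"kind": "VoiceOutput", "data": ["For", "alarms", "press"],
        "next": {"kind": "DTMF_Input"}}
print(generate(menu))
```

Because each model element maps to a short, fixed block of assembler, the generator is little more than a traversal with one emit rule per concept, which is why 10 to 15 minutes sufficed for a basic version.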

In the remaining time up to 11:40, the modeling language was extended to handle more than the core cases: what to do when the user did not follow the voice instructions or when the voice output varied according to the state of the system. Concepts and control paths were added for timeouts and invalid input in DTMF, and for system calls to manipulate and test registers. Because of time constraints, the system calls were left free-form: the modeler had to know to use one of several possible assembler commands.

From 11:40 to around 12:00, the code generator was extended to handle the normal usage of these new concepts. The basic rules for how elements could be connected in the models were already specified along with the concepts, but there was no time for even the more obvious finer rules or checks. With an hour left until the meeting, there was time to finalize the example model, eat a hurried lunch, and prepare slides for the presentation to management.

Further development. After the meeting, there was a little time left to refactor the DSM solution. The modeling language was split into two diagram types, with the top level showing the voice menu and DTMF interactions. Each voice element there exploded to a lower-level diagram showing how it was built from static text fragments, varied by system calls. After the proof of concept workshop, the consultant finished this refactoring. He also added the missing parts of the code generator based on the sample code provided and sent the results to Domatic. These additions after the workshop took a total of two hours.

A modeling language for home automation
This particular DSM solution was never taken into wider use at Domatic, and thus has not experienced the normal evolution and rounding off of sharp corners. Some minor adjustments have been made to the version presented here to remove proprietary details. In this section, we will look at the modeling language and its metamodel.

Modeling concepts and rules. There were two modeling languages making up this DSM solution. The VoiceMenu language described the high-level interaction from the point of view of the caller. This language was thus useful not only for specifying the hierarchical structure of the voice menus, but also for discussing this structure with the client, or for providing documentation to the end users. Each part of the model that specified speech or system actions was further detailed in the lower-level VoiceOutput language. This language took the place of the earlier assembly language statements written in the spreadsheet.

Figure 1 shows the definition of the VoiceMenu modeling language. The main concepts are the VoiceOutput, where the telecom module says something to the caller, and the DTMF_Input, where the module waits for the caller to press touch tone buttons to make a choice. The normal flow is from a singleton Start object to a VoiceOutput, which gives instructions about possible choices, to a DTMF_Input that waits for input from the caller.

The type of input expected is specified in a property of the DTMF_Input object as either “character” or “string” (for simplicity, we will concentrate here on single character input). For each possible input there is a ConditionalFlow relationship to another VoiceOutput. Mostly, this will be a test for equality with a given character specified in the ConditionalFlow, but slightly more complex conditions like >= could also be specified.

Figure 1: Top-level metamodel, VoiceMenu

If an invalid key is pressed there is an InvalidInput flow, normally back to the previous VoiceOutput, that is, the instructions for this choice. For cases where no input is received there is a Timeout flow, which specifies how long to wait before it is followed, again normally back to the previous VoiceOutput. A VoiceOutput can also be directly followed by another VoiceOutput, allowing reuse at this level.

In addition to the rules specified here about how objects can be connected with various flows, there are also some more specific constraints. As usual, a Start object can be in just one From role, to prevent ambiguity in the initial control flow. In a similar way, a DTMF_Input object may only be in one InvalidInput and one Timeout relationship: there would be no way to choose between several. There will however normally be several ConditionalFlow relationships, as they each specify their own Conditions: the various keys that can be pressed.
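The logic of these constraints can be illustrated with a small validation sketch in Python over a flat list of relationships. MetaEdit+ enforces such rules in the metamodel itself, so this is only an illustration of the checks, with hypothetical object names:

```python
# Sketch of the VoiceMenu connection constraints as explicit checks.
# A model is represented here as (relationship_type, from_obj, to_obj) tuples.
def check_constraints(relationships):
    errors = []
    # A Start object may be in just one From role.
    from_starts = [r for r in relationships if r[1] == "Start"]
    if len(from_starts) > 1:
        errors.append("Start is in more than one From role")
    # Each object may take part in at most one InvalidInput and one Timeout.
    for obj in {r[1] for r in relationships}:
        for rel in ("InvalidInput", "Timeout"):
            count = sum(1 for t, f, _ in relationships if t == rel and f == obj)
            if count > 1:
                errors.append(f"{obj} has {count} {rel} relationships")
    return errors

model = [("Flow", "Start", "Welcome"),
         ("InvalidInput", "MainMenu", "Welcome"),
         ("InvalidInput", "MainMenu", "Help")]
print(check_constraints(model))   # ['MainMenu has 2 InvalidInput relationships']
```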

Each VoiceOutput object, InvalidInput relationship, and Timeout relationship specifies in a lower-level diagram the actual speech it produces: the text property in the elements themselves is a description for the convenience of the modeler. The structure of explosions from the top-level VoiceMenu to lower-level VoiceOutput diagrams is shown in Figure 2. Most often, the speech used for all InvalidInputs will be the same, that is, there will be one “Invalid input” subdiagram, and each InvalidInput relationship will explode to that same diagram. The same applies to Timeouts, but each VoiceOutput object will generally have its own VoiceOutput subdiagram. As there is a limit to the complexity of a usable voice menu, no need was envisaged at this stage for an element in a VoiceOutput subdiagram exploding again to its own VoiceOutput sub-subdiagram.

Figure 2: VoiceMenu elements with a VoiceOutput subdiagram

The concepts of the lower-level VoiceOutput modeling language are shown in Figure 3. The main elements of the language are the Text and SystemCall object types. A Text represents a sequence of TextFragments played one after the other, with no variation. A SystemCall represents a sequence of system commands: register assignments, special speech commands, and so on. The TextFragment and Command objects can only be used inside Text and SystemCall objects, not directly in the model itself.

In order to specify more complex flow control than simple sequential chains of speech and system commands, the language provides conditional jumps with If and GotoPoint objects. An If object is made up of a Test such as “A >=” and a Parameter containing the value to be compared with. The condition can be inverted through the Boolean property, Not.

Figure 3: Lower-level metamodel, VoiceOutput

The elements in the diagram are mostly connected into a sequential chain of Flow relationships. As the top row of Figure 3 shows, such a Flow can come From several different kinds of objects and go To a slightly different set. Start and Stop can only be in appropriate roles in such a flow, and If cannot be the source of a normal Flow relationship. Instead, If has two different relationships leaving it: True and False. If the result of the whole condition is true, control flow will jump to the GotoPoint at the other end of the True relationship. If the result is false, control flow will follow the False relationship to any of the normal target objects, just as in a normal Flow.

The If construct is thus not the full if..then..else familiar from third-generation languages, but a simpler conditional jump, as is common in assembly languages. GotoPoint has no behavior of its own: when included in a normal flow sequence, control simply passes on to the next element. Instead, it serves simply as a label, the target for an If jump.
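To make the correspondence concrete, here is a sketch of how an If and its GotoPoint target might be lowered to the Test/IfNot/Jump style of Listing 1. The object layout and label addressing are illustrative assumptions, not the actual generator:

```python
# Sketch: lower one If object (Test, Parameter, Not) to conditional-jump
# assembler, with the GotoPoint's address as the jump target.
def lower_if(if_obj, label_address):
    lines = [f"Test {if_obj['test']} {if_obj['parameter']}"]
    if if_obj["not"]:
        lines.append("IfNot")                 # invert the condition
    lines.append(f"Jump {label_address}")     # True branch: jump to the GotoPoint
    return lines                              # False branch: fall through

# The loop test from Listing 1: jump back to 00A3 while A < 05 is not true... inverted.
loop_test = {"test": "A <", "parameter": "05", "not": True}
print(lower_if(loop_test, "00A3"))
# ['Test A < 05', 'IfNot', 'Jump 00A3']
```

Note how the False branch needs no code of its own: as in the models, it simply falls through to the next element in the sequence.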

Once again, there are the normal rules for Start: one instance per graph, and only one From role per instance. This time, a similar constraint on From and To roles applies to most of the object types: only GotoPoint and Stop have no such restrictions. We can allow multiple Stops and multiple incoming To roles for each; the metamodel already prevents From roles leaving it. An If object should have only one True and one False relationship leaving it.

Possible improvements
As this modeling language was made in such a short time, and has not been developed further, it is worth looking at some areas in which it could be improved. Some of the names for concepts could be fine-tuned: for example, GotoPoint might be better as “Jump Target” or “Label,” and Text should perhaps be “Speech.” These are however minor points, and easy to address at any stage, although it is worth noting that with some tools, changing the names of concepts can have catastrophic consequences: the next time the model is loaded, all instances of those concepts may disappear.

Perhaps the clearest problem is the repetitiveness of the InvalidInput and Timeout relationships in the VoiceMenu models. If in most cases the same structure will be in a model, but there may be some variation, it is better to assume the default and only use the structure where the model should behave differently. The norm for InvalidInput and Timeout is to return to the previous VoiceOutput after saying a simple message, so that should be taken as the default if these relationships are not specified. This would make the models faster to build, smaller, and easier to read, at no expense in terms of expressive power.
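Such defaulting could be handled in the generator rather than the metamodel. A minimal sketch, assuming a dictionary representation of a DTMF_Input's outgoing flows and a hypothetical "previous" target meaning the preceding VoiceOutput:

```python
# Sketch: fill in unspecified InvalidInput/Timeout flows with the default
# behavior (return to the previous VoiceOutput). Names are illustrative.
DEFAULTS = {"InvalidInput": "previous", "Timeout": "previous"}

def resolve_flows(dtmf_input):
    flows = dict(dtmf_input.get("flows", {}))
    for rel, default_target in DEFAULTS.items():
        flows.setdefault(rel, default_target)   # keep explicit flows, add defaults
    return flows

# Only Timeout is modeled explicitly; InvalidInput falls back to the default.
print(resolve_flows({"flows": {"Timeout": "goodbye_message"}}))
```

Handling the default in the generator keeps the models small while still allowing a modeler to override the behavior by drawing the relationship explicitly.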

Another difficulty is specifying a watertight constraint for the False relationships leaving If objects. Most objects simply have one From role leaving them, but If can have two, one for True and one for False. Constraining to two From roles does not solve the problem: they could both be either True or False. We can constrain If to have only one True relationship, but the case of False is harder.

If we say an If can be in only one False relationship, we also exclude the possibility of an If followed by an If: the second If takes part in two False relationships, one incoming and one outgoing. The simplest solution would be to make the leaving role for the True case different by creating a new role type, for example, JumpFrom. This would allow us to specify that each If can be in at most one JumpFrom role (for True), and at most one From role (for False).

Looking at the False relationship, however, there may be more that we can do. The False relationship is actually no different from a normal Flow. It would probably be better to have just a normal Flow relationship for it, and add If to the set of source objects at the top of the figure.

The True case would be distinguished by a different role. Since GotoPoint can allow several incoming roles, we should probably distinguish those in the normal sequential flow (To) from those that are jumps to this label (JumpTo). That would allow us to constrain GotoPoint to a single incoming sequential role, corresponding to the single line of assembler that can precede the label, but many incoming JumpTo roles corresponding to conditional jumps to the same label from multiple places in the code.

This would make the rest of the constraints more similar and allow the use of inheritance. Rather than having to specify constraints for each object type separately, we could give Text, SystemCall, If, and GotoPoint a common Abstract supertype, and specify the supertype in the bindings and constraints. This way there would only be three constraints: any Object could be in at most one From role, Abstract could be in at most one To role, and If could be in at most one JumpFrom role. The result of these changes would look like Figure 4, but we will stay with the original metamodel for the purposes of this article.

Figure 4: Alternative metamodel for VoiceOutput

Modeling notation
As Domatic already used some simple flow chart symbols, the notation took these as its basis. Start and Stop were gray boxes containing their name, and conditional points such as DTMF_Input and If were represented as diamonds.

In the top-level VoiceMenu diagrams, the schematic symbol for a loudspeaker was used to represent all items containing speech: VoiceOutput, InvalidInput, and Timeout. This was partly a concession to time limitations: separating these symbols only by color might be confusing, particularly since one is an object and two are relationships.

In VoiceOutput diagrams, speech segments were represented by a cartoon speech bubble showing the sequence of words or phrases to be spoken. SystemCall sequences were shown with a traditional flowchart symbol: a cut-off rectangle. The lines from If were labeled as TRUE and FALSE, with TRUE leading to a circle representing the GotoPoint.

Part 2: Using a model-generated language


Juha-Pekka Tolvanen has been involved in domain-specific languages, code generators and related tools since 1991. He works for MetaCase and has acted as a consultant worldwide for modeling language and code generator development. Juha-Pekka holds a Ph.D. in computer science from the University of Jyväskylä, Finland.

Steven Kelly is chief technical officer of MetaCase and cofounder of the DSM Forum. He is architect and lead developer of MetaEdit+, MetaCase’s DSM tool.

Used with permission from Wiley-IEEE Computer Society Press, Copyright 2014, this article was excerpted from Domain-Specific Modeling: Enabling Full Code Generation, by Steven Kelly and Juha-Pekka Tolvanen.
