DSM in home automation network design: Part 2 – Using a model-generated language

Editor’s Note: In this excerpt from their book Domain Specific Modeling: Enabling Full Code Generation, the authors use a fictitious machine-to-machine home automation application to demonstrate the advantages and pitfalls of domain specific modeling. Part 2 focuses on using the model and the language to generate the code needed for applications in such an environment.

As detailed in Part 1 of this series, for each home automation system type you have created, there would normally be one set of diagrams specifying how the user could control it over the phone. At the top level would be a VoiceMenu diagram, and each VoiceOutput object in that could be exploded into its own VoiceOutput diagram.

An example model
In our example application in Figure 5, the telecom module responds to the call with the main menu: an initial welcome message and a list of the options. To keep things simple, there are only two options here: pressing 1 takes the user to the mode menu, and pressing 2 to the version info.

The version info is simple: it reads out the version information and waits for the user to press 1 to return to the main menu. The InvalidInput and Timeout relationships are even simpler: each simply says “Invalid input!” or “Timeout!” and returns to the previous menu as shown.

The mode menu is more complex: it reads out the current mode and a list of all modes, telling the user which key to press for each. The DTMF_Input for the mode menu allows the user to press a key corresponding to a mode. As there are five modes, the key must be from 1 to 5 (setting the mode to 0 does nothing). If legal input is received, the “Set mode” object’s subdiagram uses a SystemCall to change the system mode to match the key which was pressed.

Figure 5: Sample VoiceMenu model
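
To make the flow concrete, here is a minimal Python sketch of the menu structure just described. The menu names, prompt texts, and the handle_key function are purely illustrative; they are not part of Domatic’s system or its generated assembly.

# Illustrative sketch of the voice menu flow in Figure 5 (not Domatic's generated code).
# Menu names and prompt texts are placeholders.
MENUS = {
    "main": {
        "prompt": "Welcome. Press 1 for the mode menu, 2 for version info.",
        "keys": {"1": "mode", "2": "version"},
    },
    "version": {
        "prompt": "Version info read out here. Press 1 to return to the main menu.",
        "keys": {"1": "main"},
    },
    "mode": {
        "prompt": "Current mode and list of all modes read out here.",
        "keys": {str(k): "main" for k in range(1, 6)},  # keys 1-5 each select a mode
    },
}

def handle_key(menu: str, key: str) -> str:
    """Return the next menu; on invalid input, report it and stay in the current menu."""
    target = MENUS[menu]["keys"].get(key)
    if target is None:
        print("Invalid input!")              # the InvalidInput relationship
        return menu                          # return to the previous menu
    if menu == "mode":
        print(f"SystemCall: set mode {key}")  # the 'Set mode' subdiagram's SystemCall
    return target

A Timeout relationship would be handled analogously to invalid input, announcing “Timeout!” and returning to the same menu.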

The details of the mode menu are described in the VoiceOutput subdiagram in Figure 6. After initially stating the current mode and telling the user to select another mode, the application initializes a counter variable, register A, to zero. After the GotoPoint, A is incremented and we move to the bottom row of the diagram, heading left. The number of modes, five, and their names are built into the system, so the system can say the name of the first mode and “press 1.” In the If object we check that A has not yet reached the value of the last mode, five, and if so we jump back to the GotoPoint to repeat for the next mode. After the fifth mode has been read and the test in the If fails, we exit via Stop back to the VoiceMenu diagram, where we wait for the user to press a key corresponding to a mode.

Figure 6: Sample VoiceOutput model for mode menu
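
The counter logic in Figure 6 is essentially a bounded loop. A rough Python equivalent of the register-A sequence is sketched below; the mode names and the say callback are placeholders, since the actual names and speech output are built into Domatic’s platform.

# Rough Python equivalent of the register-A loop in Figure 6 (placeholder mode names).
MODE_NAMES = ["Mode 1", "Mode 2", "Mode 3", "Mode 4", "Mode 5"]

def read_out_modes(say):
    a = 0                                       # Load A 00: clear counter register A
    while True:
        a += 1                                  # Add A 01: increment A (the GotoPoint target)
        say(f"{MODE_NAMES[a - 1]}, press {a}")  # speak the mode name and its key
        if not a < 5:                           # If A < 05 fails after the fifth mode
            break                               # Stop: return to the VoiceMenu diagram

read_out_modes(print)                           # example: print instead of speech output

These three explicit steps (initialize, increment, compare) are exactly what a higher-level looping construct could hide, a point the next section returns to.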


Home automation language use scenarios

As we mentioned above, the two modeling languages aimed to provide a natural way to describe and specify the desired behavior of the voice menu, and of the voice elements and system calls that made up each segment of speech. The VoiceMenu language was specific to the domain of voice menus, which was a common basis shared by Domatic’s developers, clients, and clients’ end-users. It could thus easily be used as a medium of conversation between all these stakeholders, and allowed specification of systems at a high level of abstraction.

A likely use scenario would have been for a Domatic employee to work with a client to design the voice menu, directly using the VoiceMenu modeling language in the DSM tool. If example texts were specified in the top-level elements, a slight modification of the generator would allow working prototypes to be built and tested immediately.

The VoiceOutput language was based on the domain-specific side of the in-house assembly language, which was in turn based on the features offered by Domatic’s hardware platform.

This modeling language was designed for use by Domatic’s developers, although simpler cases could be handled easily by nontechnical personnel. In the language’s current state, the direct inclusion of assembly language commands would have made using the whole language on more complex cases too complicated for anyone unfamiliar with that assembly language.
Looking at more examples of the usage of that language would probably have allowed the use of higher-level constructs, for example, replacing the three steps in the above model (Load A 00, Add A 01, and A < 05) with a single, simpler “For A = 1 TO 5” construct. As things stood, the modeling languages would allow the creation of the complete range of applications that existed for that framework.

Speech elements could be reused across multiple models, keeping memory requirements down in the finished product. As part of this reuse, it would be useful to know the total set of speech fragments used in a given application. This could be produced by a generator, guaranteeing that the set of samples for a product included all of those that were needed, and only those.

Often with reusable components, it is also useful to create an explicit library of them. This helps prevent developers from inadvertently reinventing the wheel because they did not know a suitable component already existed. Such a library could be accomplished with a small modeling language containing a set of TextFragments. When developers wanted a text fragment, they would pick it up from the library, adding it there if nothing suitable existed. An example of such a library is shown in Table 1. Here we have included the start address, to fit with Domatic’s practice.

Table 1: Library of Text Fragments

In actual use, it would probably be better to omit the start address and instead have the code generator (Figure 7) automatically create sequential numbering separately for each product. A useful addition would be a property for each fragment that pointed to the actual sound file, so developers could listen to it directly from the model and even emulate whole sections of speech.
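
A fragment-listing generator along those lines might look like the sketch below. The model API (an iterable of models, each with a text_fragments() method) is an assumption made for illustration, as is the returned per-product numbering.

# Hypothetical sketch: collect every text fragment used in a product's models and
# assign sequential numbering per product (instead of storing start addresses in a library).
def collect_fragments(models):
    """models: iterable of model objects with a text_fragments() method (assumed API)."""
    seen = []
    for model in models:
        for text in model.text_fragments():
            if text not in seen:        # keep only the fragments actually needed
                seen.append(text)
    # Number the fragments sequentially for this product.
    return {text: number for number, text in enumerate(seen, start=1)}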

Building the code generator
The generator produced the necessary code for the whole application in the assembly language that Domatic used. As the generator was based on existing best-practice code, the output was virtually indistinguishable from handwritten applications.

One concession was made to the time constraints: it would have been hard to generate the correct absolute memory addresses for jumps, as this would have required calculating the byte length of each assembly instruction. Instead, labels were generated as part of the output, and jumps were directed to the labels. A quick change to the assembler made these jumps function properly.
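
As a rough illustration of that approach (a sketch, not Domatic’s actual generator or assembly syntax), the generator can emit a symbolic label for each jump target and a jump referring to it, leaving address resolution to the assembler.

# Sketch of label-based jump generation: emit symbolic labels rather than computing
# absolute byte addresses; the (slightly modified) assembler resolves them.
def emit_label(out, obj_id):
    out.append(f"L_{obj_id}:")            # label marking the start of the target's code

def emit_jump(out, obj_id):
    out.append(f"    JMP L_{obj_id}")     # 'JMP' stands in for the real jump instruction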

The generator was divided between the two modeling languages in the obvious way. Figure 7 shows the parts of the generator and the calls between them.

Figure 7: Home automation generator structure

At the top level, a VoiceMenu generator started off the generation for the top-level VoiceMenu diagram, iterating over each VoiceOutput object and each DTMF_Input object. The handling of DTMF input, invalid input, and timeouts was all generated at this level. For the sample VoiceMenu from Figure 5, the code output for the first VoiceOutput and DTMF_Input is shown in Listing 2.

Listing 2: Generator output for first VoiceOutput in sample VoiceMenu
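
Although the listing itself is not reproduced here, the traversal that produces such output can be sketched as follows. The model-navigation calls (voice_outputs(), dtmf_input()) and the helper generators are assumed names for illustration, not the actual generator code.

# Hypothetical sketch of the top-level VoiceMenu generator's traversal.
def generate_voice_output(output, out):
    out.append(f"; speech and system calls for {output.name}")   # stub for the VoiceOutput generator

def generate_dtmf_handling(dtmf, out):
    out.append(f"; key dispatch for {dtmf.name}, plus InvalidInput and Timeout handling")  # stub

def generate_voice_menu(menu_model, out):
    for output in menu_model.voice_outputs():        # each VoiceOutput object in the diagram
        out.append(f"; --- VoiceOutput: {output.name} ---")
        generate_voice_output(output, out)            # delegate to the VoiceOutput generator
        dtmf = output.dtmf_input()                    # the DTMF_Input following this output
        if dtmf is not None:
            generate_dtmf_handling(dtmf, out)         # generated at this level, as described above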
