Many embedded or distributed systems are composed of tasks that communicate with each other via messaging. The traditional approach to message handling, the switch/case statement we all know so well, is long overdue for a more modern replacement. While this old procedural approach is straightforward, with more than a handful of messages it produces code that's not modular and that's hard to read and test.
In this article I'll discuss how these problems can be corrected by using polymorphism. A polymorphic approach involves registering message handlers from derived classes in a base-class map. In essence, this approach removes the mapping burden, the switch and case statements, from the developer. What remains is code with fewer errors that's easier to read and modular enough to be tested properly.
Message passing between tasks is common enough. Whether built on Unix IPC or VxWorks message queues, such schemes are efficient, well tested, and modular. What remains underdeveloped is the implementation of the message handling itself. The most common approach is the simplest one: a switch statement based on the ID of the message, followed by case statements for every message a task might handle. While this works for a few messages, the approach breaks down in terms of code readability, testability, and extensibility when more than a handful of message types are in play. Fortunately, by drawing on the power of polymorphism, these problems can be corrected.
The procedural approach
Listing 1 shows an example of what most message-handling code might look like. Given an incoming message, the code has a case statement for every message ID the task can handle.
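Listing 1 isn't reproduced here, but a minimal sketch of the traditional scheme might look like this. The message IDs, the Msg fields, and the handler name are invented for illustration:

```cpp
#include <cassert>
#include <cstdio>

// Hypothetical message IDs and message struct, invented for illustration.
enum MsgId { MSG_START = 1, MSG_STOP = 2, MSG_STATUS = 3 };

struct Msg {
    int id;
    int data;
};

// The traditional procedural handler: one case per message ID.
// Returns 0 when the message was handled, -1 otherwise.
int HandleMessage(const Msg& msg) {
    switch (msg.id) {
    case MSG_START:
        std::printf("starting, data=%d\n", msg.data);
        return 0;
    case MSG_STOP:
        std::printf("stopping\n");
        return 0;
    case MSG_STATUS:
        std::printf("status requested\n");
        return 0;
    default:
        std::printf("unknown message id %d\n", msg.id);
        return -1;
    }
}
```

Every new message means another case in this one ever-growing function, which is exactly the scaling problem discussed below.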
This scheme has several problems. With a few messages it's still readable, but it lacks extensibility and modularity. By using a polymorphic approach to intertask message handling, code size can be reduced, code can be tested more easily, and modularity, readability, and extensibility all increase.
The polymorphic approach
Although the polymorphic approach is language agnostic, all of the following examples are in C++. An object-oriented language isn't strictly required, either: while the technique looks cleaner in a truly object-oriented language like C++ or Java, most of the implementation details are encapsulated in one base-class module, and I've implemented the same approach in C.
Polymorphic message handling requires a Task class that is the basis for all other tasks in the system. For the sake of this example, I've defined a base Msg struct that represents a message passed between tasks. The only public method for the Task base class is a Run method that acts as the main execution loop for the task, as shown in Listing 2.
The key to this message-handling approach is to create a mapping between the message ID to be handled and the handling method. For any derived task, define some typedef shortcuts, a Register method, and a DefaultHandler method, as shown in Listing 3.
The MsgHandler member function pointer will act as the prototype for any method that handles an incoming message. Any derived class writes its own message handlers to match the MsgHandler prototype. When a derived class wishes to handle a message, it simply calls Register, which maps the message ID to a handler method. The DefaultHandler method is the last resort for handling messages; it's used when no mapping from the incoming message ID to a member handler can be found. It's vital here to allow a subclass to override the default handling. Finally, the Task class will need some private typedefs, one method, and the message map itself, shown in Listing 4.
The message map is an STL map, relating a message ID to a message handler member function. The HandleMessage method is a two-line method that finds the appropriate handler for a message and calls it.
Base class implementation
The implementation of the base Task is very simple. Most methods involve only one or two lines. The Register method, shown in Listing 5, simply inserts a new mapping into the Task's internal message map.
HandleMessage is the simple but powerful crux of this approach. A look-up is done for the incoming message ID in the message map. If an appropriate handler is found, it's called with the message, as shown in Listing 6. Otherwise, a default handler is called.
What makes this so powerful is polymorphism. All classes derived from this Task class simply register their handlers with message IDs, while all of the dispatch logic is implemented in this one place. What's more remarkable is that this one method takes the place of multiple switch/case blocks in each task, eliminating unreadable, difficult-to-debug, and sometimes redundant code. For the purposes of this example the default handler is simplistic: it merely reports to the user that an incoming message could not be handled, as shown in Listing 7.
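Since the listings aren't reproduced here, the following self-contained sketch shows how the base class described in Listings 3 through 7 might fit together, with a tiny derived task to exercise it. Everything beyond Task, Msg, Register, HandleMessage, and DefaultHandler is an assumption:

```cpp
#include <cassert>
#include <cstdio>
#include <map>

struct Msg { int id; };  // hypothetical base message type

class Task {
public:
    virtual ~Task() {}
protected:
    // Prototype that every message-handling member function must match.
    typedef void (Task::*MsgHandler)(const Msg& msg);

    // Listing 5's role: insert a new ID-to-handler mapping into the map.
    void Register(int msgId, MsgHandler handler) {
        msgMap_[msgId] = handler;
    }

    // Listing 6's role: look up the handler for this message's ID and call
    // it; fall back to DefaultHandler when no mapping exists.
    void HandleMessage(const Msg& msg) {
        std::map<int, MsgHandler>::iterator it = msgMap_.find(msg.id);
        if (it != msgMap_.end())
            (this->*(it->second))(msg);
        else
            DefaultHandler(msg);
    }

    // Listing 7's role: last resort; virtual so a subclass can override it.
    virtual void DefaultHandler(const Msg& msg) {
        std::printf("no handler for message id %d\n", msg.id);
    }

private:
    std::map<int, MsgHandler> msgMap_;  // the message map itself
};

// A tiny derived task (not from the article) used to exercise the base class.
class EchoTask : public Task {
public:
    int startCount;
    int unknownCount;

    EchoTask() : startCount(0), unknownCount(0) {
        // A derived handler must be cast to the base member-pointer type.
        Register(1, static_cast<MsgHandler>(&EchoTask::HandleStart));
    }

    void Dispatch(const Msg& msg) { HandleMessage(msg); }  // test hook

private:
    void HandleStart(const Msg&) { ++startCount; }
    virtual void DefaultHandler(const Msg&) { ++unknownCount; }
};
```

Note the static_cast when registering: converting a derived-class member pointer to a base-class member pointer is legal in C++ but must be explicit.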
The Run method might look something like Listing 8's pseudo-code. This Run method follows a common approach to task execution. All that is necessary for a particular implementation is that a message is obtained from an input source (such as message queue) and that HandleMessage is called with that message.
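Listing 8's pseudo-code isn't shown; assuming messages arrive on some queue, the loop might be shaped roughly like this, with a std::queue standing in for a real IPC or VxWorks message queue (the Post hook and the counting handler are inventions for the demo):

```cpp
#include <cassert>
#include <queue>

struct Msg { int id; };  // hypothetical base message type

class Task {
public:
    Task() : handled(0) {}
    virtual ~Task() {}

    // Main execution loop: obtain a message from the input source and
    // pass it to HandleMessage. A real task would block on its queue
    // forever; this stand-in loop ends when the queue empties.
    void Run() {
        while (!queue_.empty()) {
            Msg msg = queue_.front();
            queue_.pop();
            HandleMessage(msg);
        }
    }

    void Post(const Msg& m) { queue_.push(m); }  // test hook, not in the article
    int handled;

protected:
    // In the article this dispatches through the message map; counting
    // messages suffices to show the shape of the loop.
    virtual void HandleMessage(const Msg&) { ++handled; }

private:
    std::queue<Msg> queue_;  // stand-in for a real message queue
};
```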
A derived task
To illustrate this approach to message handling, we'll need to create another task derived from the base Task class. For this example, shown in Listing 9, the only necessary methods are a constructor and some message handlers.
In the Constructor, the derived task will register its handlers for any messages it wants to handle, as Listing 10 demonstrates.
Listing 11 shows two example message handlers. They simply print out some of the example fields in the message.
Finally the default handler reports that the message ID was not recognized, as shown in Listing 12.
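Putting Listings 9 through 12 together, a derived task might look something like the sketch below. The SensorTask class, its message IDs, and its handler names are invented for illustration; the compressed base class follows the article's description:

```cpp
#include <cassert>
#include <cstdio>
#include <map>

struct Msg { int id; int value; };  // hypothetical message with one field

// Compressed form of the base class the article describes.
class Task {
public:
    virtual ~Task() {}
protected:
    typedef void (Task::*MsgHandler)(const Msg&);
    void Register(int id, MsgHandler h) { msgMap_[id] = h; }
    void HandleMessage(const Msg& m) {
        std::map<int, MsgHandler>::iterator it = msgMap_.find(m.id);
        if (it != msgMap_.end())
            (this->*(it->second))(m);
        else
            DefaultHandler(m);
    }
    virtual void DefaultHandler(const Msg& m) {
        std::printf("unhandled message id %d\n", m.id);
    }
private:
    std::map<int, MsgHandler> msgMap_;
};

// Listings 9 and 10's role: the derived task registers its handlers
// in its constructor.
class SensorTask : public Task {
public:
    enum { MSG_READING = 10, MSG_RESET = 11 };  // invented message IDs
    int lastReading;
    bool wasReset;

    SensorTask() : lastReading(0), wasReset(false) {
        Register(MSG_READING, static_cast<MsgHandler>(&SensorTask::HandleReading));
        Register(MSG_RESET,   static_cast<MsgHandler>(&SensorTask::HandleReset));
    }

    void Dispatch(const Msg& m) { HandleMessage(m); }  // stand-in for Run()

private:
    // Listing 11's role: example handlers that just report message fields.
    void HandleReading(const Msg& m) {
        lastReading = m.value;
        std::printf("reading: %d\n", m.value);
    }
    void HandleReset(const Msg&) {
        wasReset = true;
        std::printf("reset requested\n");
    }
    // Listing 12's role: overridden default handler.
    virtual void DefaultHandler(const Msg& m) {
        std::printf("SensorTask: unrecognized id %d\n", m.id);
    }
};
```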
This polymorphic approach to intertask message handling offers several advantages over the traditional approach. You can expect increased readability, extensibility, and performance.
More readable code
Using the polymorphic approach detailed here, code readability increases. The long list of case statements to search through is gone. With the old procedural method, you quickly become lost in your own code, or you can't identify where a message is handled without searching; the polymorphic approach lets you see it at a glance.
Extendable code
Much like the traditional switch/case approach, the polymorphic approach lets you handle additional messages simply by adding a new case statement, or, in the polymorphic case, a new Register() call. The difference with the polymorphic approach is two-fold. First, you could easily devise an Unregister() method to remove a mapping, something that's not possible with the traditional method. Second, Register()-ing and Unregister()-ing don't have to happen just once at startup: a task can register and unregister for messages dynamically throughout its execution cycle. This makes the polymorphic approach all the more powerful.
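The article doesn't list an Unregister() implementation, but with an STL map it could be a one-liner, sketched here. The msgMap_ member name and the placeholder handler are assumptions:

```cpp
#include <cassert>
#include <map>

struct Msg { int id; };

class Task {
public:
    typedef void (Task::*MsgHandler)(const Msg&);
    virtual ~Task() {}

    void Register(int id, MsgHandler h) { msgMap_[id] = h; }

    // Hypothetical Unregister(): erase the mapping for this message ID.
    // Returns true if a mapping existed; std::map::erase(key) reports
    // the number of elements removed.
    bool Unregister(int id) { return msgMap_.erase(id) != 0; }

    bool IsRegistered(int id) const { return msgMap_.count(id) != 0; }

    void ExampleHandler(const Msg&) {}  // placeholder handler for the demo

private:
    std::map<int, MsgHandler> msgMap_;
};
```

A task could call Unregister() mid-execution to stop listening for a message, which is exactly the dynamic behavior described above.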
Improved system performance
Not only is the polymorphic approach more readable and extensible, but the performance is better, too. The algorithmic complexity of the traditional approach is, on average, linear (O(n)) because the task must check each case until it finds the one matching the incoming message ID. This is especially bad when there is no case for the message ID at all; the task will always fall through the entire list of cases. With the polymorphic approach, the performance improves. Using the out-of-the-box C++ STL map, a look-up has logarithmic complexity (O(log N)).1 You could even use a hashing function to implement the mapping and approach constant algorithmic complexity (O(1))!2
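As a sketch of the hashing variant: with a modern compiler you could swap std::map for std::unordered_map, a hash table with average O(1) look-up, without touching the dispatch logic. This is not from the article, which predates std::unordered_map; the Dispatch and OnPing names are inventions:

```cpp
#include <cassert>
#include <unordered_map>

struct Msg { int id; };

class Task {
public:
    typedef void (Task::*MsgHandler)(const Msg&);

    Task() : pings(0) {}

    void Register(int id, MsgHandler h) { msgMap_[id] = h; }

    // Dispatch through a hash table instead of a balanced tree.
    // Returns false when no handler is registered, so the caller can
    // fall back to a default handler.
    bool Dispatch(const Msg& m) {
        std::unordered_map<int, MsgHandler>::iterator it = msgMap_.find(m.id);
        if (it == msgMap_.end())
            return false;
        (this->*(it->second))(m);
        return true;
    }

    void OnPing(const Msg&) { ++pings; }  // invented example handler
    int pings;

private:
    std::unordered_map<int, MsgHandler> msgMap_;  // average O(1) look-up
};
```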
When developers come across a new technique to replace a traditional approach, we usually discover some tradeoff in system performance, code readability, or system complexity. With this polymorphic approach to intertask message handling, you attain all of the aforementioned benefits without any of the usual drawbacks. Because messaging is quite common in distributed systems, and performance and code size are such important factors in embedded systems, the applicability of this approach to those environments is evident. Any system using intertask messaging, however, can improve its code readability, testability, extensibility, and performance with this approach.
The source code for the framework and examples used in this paper is available at www.embedded.com/code.
Kevin Duffy is an embedded software engineer with Northrop Grumman where he works on operating systems, middleware, and system infrastructure. He can be reached at .
2. Morris, John. "Data Structures and Algorithms: Hash Tables," University of Auckland Department of Computer Science, 1998. Retrieved November 2005: www.cs.auckland.ac.nz/software/AlgAnim/hash_tables.html
This is a very good approach. I can appreciate its uses because I have implemented a similar approach myself.
However, my reservation concerns the performance claim. I am still doubtful this method will give "improved" performance in the case where the messages are defined in a contiguous range of IDs.
The compiler just compiles such a switch into a lookup table, or even a jump table. So I guess performance more or less remains the same for these cases. Nevertheless, the above-mentioned approach is CLEAN AND COMPATIBLE with almost any implementation!
Thanks for bringing this up to the embedded community!
– Saravanan T S