A real-time operating system relieves the embedded systems developer of the need to implement solutions to such problems as interprocess communication and memory management. Nothing could be simpler–especially for those accustomed to building and debugging such services anew for each application–than making a few well-defined operating system calls.
While the operating system may provide the services a developer needs, it doesn't necessarily do so in a manner that optimizes the developer's time over the life of the project. By hiding the handling of its services, the operating system also hides details that can be important to the developer, especially (but not exclusively) during debugging. Furthermore, the operating system services are of necessity framed in fairly general ways and as a result don't closely match the needs of any particular application.
Does this mean that we would advise against the use of real-time operating systems? Far from it! We've found, however, that the value of the operating system can be significantly enhanced when the developer encloses it in a shell, or membrane, that tailors its general-purpose services to the needs of the application.
In this article we'll discuss a number of related strategies that have been developed at our company to more effectively use real-time operating systems in a variety of embedded applications. Our products are communication systems encompassing a full range of services, from user interfaces through network protocols down to the level of controlling specific transmission media, including wireline, HF radio, and satellite communications. Thus, the techniques we discuss have been proven in a number of different environments.
We'll focus on three main areas: interprocess communication through mailed data, memory management, and debugging. The techniques we present can simplify application development and debugging. They also tend to be robustly adaptable to the inevitable changes in requirements that occur during the life of the application. After discussing the techniques themselves, we'll discuss our experience with them. This experience includes porting application code between different processors and different operating systems and use of the same service membranes by programs written in different languages.
A tale of two operating systems
Our company has worked extensively with two operating systems for the 80x86 family of processors: Intel's iRMX and Ready Systems' VRTX. At the appropriate level of abstraction, these operating systems provide the same critical services. However, the methods of implementation are quite different.
For memory management, iRMX provides a single heap for all dynamically allocated memory segments. The application need only specify the size of the desired segment. VRTX, by contrast, provides pools of fixed-size segments, called partitions. The partition sizes, and the size of the segments that each contains, are set by the application. When a segment is requested, the application must specify the number of the partition to which it belongs.
For interprocess communication, both systems provide a number of services; we've standardized on the use of mail. In each system, a task may send mail (in the form of a segment) to a mailbox, which is a FIFO queue. (In VRTX, such a mailbox is called a queue; VRTX has a more restricted interprocess structure that it calls a mailbox.) A task may read from the mailbox or, if there's no mail to read, wait at the mailbox until something arrives. In VRTX, the sender provides the identity of the mailbox, a pointer to the segment being mailed, and the usual status return variable. iRMX has one additional parameter: the identity of the mailbox to which any response to the message should be sent.
We're glossing over a number of significant details, including those relating to how such objects as tasks, memory partitions, memory segments, and mailboxes are created, typed, and named; we have equivalent versions of our tools for those services as well. It's important to note that the tools we'll be describing weren't designed to provide a common interface to these two operating systems. In fact, we developed these tools solely to support our work with iRMX and PL/M at a time when we had no serious thought of using any other operating system or language in our work. That our approach served us well when the use of other operating systems and languages became mandatory was icing on the cake.
One other area in which iRMX and VRTX differ is in their provision of debugging services. The VRTX debugger, TRACER, is fairly typical, providing breakpointing at addresses and the ability to examine and modify memory; it does provide some reports on the status of tasks and other system resources. (Ready Systems' RTScope debugger has more features than TRACER but wasn't available when we began working with VRTX.)
By contrast, the iRMX Dynamic Debugger was specifically oriented toward system objects. For example, rather than use it to break at an address, the developer could cause a break on such system events as mail being sent to a mailbox. One of the great virtues of this debugger is that it knows which segments of dynamic memory are allocated and can list them upon demand. The iRMX system also lets the application assign character strings (which become known to the debugger) to identify such system objects as tasks and mailboxes. (We sadly note that this debugger, which we used extensively on our 8086 projects, has been eliminated from the 286 and 386 versions of iRMX.)
Whenever embedded systems are discussed, the hobgoblin of efficiency is raised. The systems we work with are indeed real-time systems. We've never discovered any negative impact on application efficiency resulting from the use of the tools presented here. We are, however, fully confident that they've trimmed weeks off the length of the projects on which we've used them. It's our philosophy to get an application working and then, using instrumentation tools, discover any efficiency bottlenecks and correct them. Of course, some applications can, through design and analysis, be proven to be so close to the wire that any additional overhead might compromise the success of the project. Such applications, we suspect, have no business using a real-time operating system in the first place.
Another area that must be considered in embedded systems work is that of the interfaces to other processors. The tools we discuss here were visualized in the context of processing on a single board, although we've since migrated them so that all the processors within an enclosure use them. Still, our processors talk with other devices ranging from PCs to special-purpose equipment provided by other vendors, and we've had no difficulties in making the necessary adaptations. To simplify the presentation, though, we'll ignore those special (and generally trivial) problems.
In the abstract, a mail service need only provide the ability to send a memory segment to a mailbox. Our mail-service tools provide a more flexible collection of capabilities based on (and certainly affecting) the common approach to application design.
In some systems, a task may exist only to receive and process a single type of data. We have such tasks, of course, but most of our tasks need to handle a variety of messages. In some cases, these contain data or control information provided to the task; in others, they're requests for information from the task.
Most of our tasks are implemented as large case statements, as shown in Listing 1. After the necessary initializations, each task enters a loop in which it waits for mail at a mailbox. When the mail is received, the task examines it to determine what kind it is and takes the necessary actions. Modifications to this approach, such as those where a task must monitor more than one mailbox, have been fitted easily into the basic scheme.
It's natural to include a code with each item of mail to define the nature of the mail. In cases where the mail contains control information, this code is the only data in the mail. To aid in creating distinct mail codes and in debugging, we assign mail codes in a way that makes it easy to identify both the sender and the recipient. For example, where the number of tasks permits, we use one hexadecimal digit to identify the sender, one to identify the recipient, and two to distinguish various mail codes between this particular pair of correspondents.
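The hexadecimal-digit convention described above can be captured in a few C macros. This is a sketch under our own assumptions about field widths (one digit each for sender and recipient, one byte for the message number); the article doesn't specify an implementation.

```c
#include <stdint.h>

/* Pack sender, recipient, and message number into one 16-bit mail code:
   high hex digit = sender task, next digit = recipient task, low byte
   distinguishes messages between that particular pair. */
#define MAKE_MAIL_CODE(from, to, msg) \
    ((uint16_t)((((from) & 0xFu) << 12) | (((to) & 0xFu) << 8) | ((msg) & 0xFFu)))

/* Recover the correspondents from a mail code, e.g. while debugging. */
#define MAIL_SENDER(code)    (((code) >> 12) & 0xFu)
#define MAIL_RECIPIENT(code) (((code) >> 8) & 0xFu)
```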
When sending mail, it's necessary to specify the mailbox to which the mail is being sent. As Intel observed in creating iRMX, it's often convenient to specify a return mailbox as well. In most systems–and certainly ours–this can be viewed as redundant information. Since we use different mail codes for different sender-recipient pairs, a task receiving mail knows from the nature of the mail whether it requires a response and, if so, to whom. Nevertheless, we've adopted the standard of providing a return mailbox to aid in readability and modifiability of the code.
To send mail from one task to another, we must create (request allocation of) a segment. From the point of view of the operating system, this is reasonable and necessary–how else is the information to be transmitted? From the point of view of the application, however, it's largely irrelevant. Suppose task A wishes to send a single item of control information to task B. Neither task cares how the information is transmitted, so long as transmission occurs. A fundamental principle of structured programming is that the structure of the code must match the structure of the problem; in this case, segment creation to transfer information isn't part of the problem and shouldn't clutter up the code.
Instead, we created an interface routine called Send_Mail to hide the mechanism of information transmission. This routine has a number of arguments, including the mail code, the mailbox to which the mail is being sent, the return mailbox (or a NULL if no return is required), and the address of the segment containing the actual data being transmitted, if any (other arguments will be introduced later). Send_Mail creates a short segment that contains its arguments, suitably packaged, and mails that segment to the designated mailbox. Figure 1 shows how this operates in two cases, one in which only a mail code is being sent and a second in which other data and a return address are being sent along with it.
Having established the principle of creating such an envelope when sending mail, it was natural to ask what else it should contain. We decided to add codes that identify the sending and receiving tasks. This turned out to be superfluous, since the mail code uniquely identifies both. Still, it has its advantages in documenting the code and, as we shall show, in debugging.
Note that with this scheme all mailed items have the same size and data type. Uniform typing simplifies the code the receiver needs to determine what's been received.
A task doesn't call directly on operating system services to receive mail; unpacking and deletion of the envelope aren't relevant to the application. Instead, the task calls Get_Mail. This function has two input (by value) arguments: the identifier for the mailbox and the time the task is willing to wait (usually forever). Get_Mail returns the mail code, data segment pointer, return mailbox, and task codes. It then deallocates (deletes) the segment containing the envelope.
An unexpected advantage of using an envelope to convey information surfaced during the debugging of Send_Mail and Get_Mail under iRMX. It turned out that some of the envelopes created by Send_Mail weren't being deleted. Examining the segments on iRMX's list of allocated segments, we immediately saw the task and mail codes in each, allowing us to quickly trace the source of the problem.
As a result, we decided to add task and mail codes to all segments created by the application. Every data type definition begins by reserving two words at the front of the data. These two words are filled in automatically by Send_Mail from the mail code and recipient task code if the segment happens to be mailed between tasks. Whenever there are undeleted segments, we can usually identify them quickly by examining their first two words. The only time this method fails is when a mailed segment contains a pointer to another segment; Send_Mail doesn't fill in the mail and task codes in the segment pointed to.
Listings 2 and 3 show data declarations in PL/M and C that explicate the use of standard headers. Note that although we never found it necessary to add the sender's task code to the standard header, it would have required just a few minutes' work.
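Listings 2 and 3 aren't reproduced here; the C declaration below is our own sketch of the standard-header idea. The two reserved words come first in every mailed data type, and a helper fills them in the way Send_Mail would; the payload fields are invented for illustration.

```c
#include <stdint.h>

/* The two words reserved at the front of every application data type. */
typedef struct {
    uint16_t mail_code;   /* filled in by Send_Mail when the segment is mailed */
    uint16_t task_code;   /* recipient task code */
} seg_header_t;

/* An illustrative application type: standard header first, payload after. */
typedef struct {
    seg_header_t hdr;
    uint32_t     freq_hz;
    uint8_t      power_level;
} radio_cmd_t;

/* What Send_Mail does to the segment before posting it. */
void tag_segment(void *seg, uint16_t mail_code, uint16_t task)
{
    seg_header_t *h = (seg_header_t *)seg;
    h->mail_code = mail_code;
    h->task_code = task;
}
```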
With a memory-management scheme such as the one provided by iRMX, where all segments are allocated from a single pool, there's no obvious reason to interpose a layer between the application and the operating system. We chose to do this with iRMX in part to adhere to a philosophical position and in part to provide a single place we could go for debugging in case of difficulties. This decision stood us in good stead when we decided to migrate to VRTX.
In VRTX, the application program defines a collection of pools of segments; in VRTX jargon, these pools are called partitions. All the segments allocated from a given partition are the same size. To request a segment from iRMX, the application must provide the size of the segment; to request a segment from VRTX, the application instead provides the identifier for the partition containing segments of the desired size.
The cleanest approach would be to have an interface routine, Create_Segment, whose single argument (other than the ubiquitous status return) is the size of the segment being requested. Create_Segment could, for a system like VRTX, map the size into the appropriate partition number and place the requisite operating system call. However, for one system we were developing, the language didn't provide a construct that returned the size of a data item, analogous to the sizeof operator in C. To accommodate this system, we were constrained to passing the partition number directly from the application program to Create_Segment.
For the system in question, the partition numbers had to be specified directly in the code, an example of the abominations that can be forced on us by the choice of improper tools. However, the approach we took turns out to have a positive aspect: it allows us to use special codes for special partitions. For example, if we wanted to do our own memory management out of dual-ported RAM, we could assign that partition an identifier not known to the operating system and let Create_Segment handle management for that partition. This capability requires that partition numbers be provided by the application.
For other languages, such as C and PL/M, we instead have a routine called Choose_Partition. It takes as its argument the size of a data type and returns the identifier for the partition whose blocks are just large enough to hold the data. Choose_Partition bases its choice on the same constants used by the application to define the partitions. The call to Choose_Partition is used directly as an argument to Create_Segment.
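A minimal Choose_Partition might look like the following. The partition sizes are made up for illustration; the point is that the routine and the partition-definition code share the same constants, so the mapping can't drift out of step.

```c
#include <stddef.h>

/* Block sizes of the partitions, shared with the code that creates them.
   These particular sizes are assumptions. */
static const size_t part_size[] = { 16, 64, 256, 1024 };
#define NUM_PARTITIONS ((int)(sizeof part_size / sizeof part_size[0]))

/* Return the smallest partition whose blocks can hold 'size' bytes,
   or -1 if no partition is large enough. */
int choose_partition(size_t size)
{
    for (int i = 0; i < NUM_PARTITIONS; i++)
        if (size <= part_size[i])
            return i;
    return -1;
}
```

A call such as `create_segment(choose_partition(sizeof(radio_cmd_t)))` then keeps partition numbers out of the application code entirely.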
When deleting a segment, the situation is similar: a single-pool system needs only the segment itself, while a multiple-fixed-size-partition approach may require that the partition identifier be provided as well. Here we wrote a routine, Get_Partition, that takes as its argument a segment pointer and returns the partition from which the segment was allocated. Get_Partition works by using the partition address boundaries known to the application (since it must define them) and determining which pair of boundaries the given segment falls between.
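The boundary search can be sketched as follows. The addresses are invented; the application supplies the real boundaries when it defines the partitions.

```c
#include <stdint.h>

/* Base address of each partition's memory area, in ascending order, plus
   the end of the last one. Illustrative values only. */
static const uintptr_t part_base[] = { 0x1000, 0x2000, 0x4000, 0x8000 };
static const uintptr_t part_end    = 0x10000;

/* Identify the partition a segment came from by finding the pair of
   boundaries its address falls between; -1 if it lies outside them all. */
int get_partition(const void *seg)
{
    uintptr_t a = (uintptr_t)seg;
    int n = (int)(sizeof part_base / sizeof part_base[0]);

    if (a < part_base[0] || a >= part_end)
        return -1;
    for (int i = n - 1; i >= 0; i--)
        if (a >= part_base[i])
            return i;
    return -1;
}
```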
Debugging services come in two flavors: low-level services such as register displays, examining and setting memory, single-stepping, and setting breakpoints, and system-oriented services such as observing the status of tasks, the contents of mailboxes, or the list of allocated segments. As we proceed from unit test to system integration, the services we need change from low-level to system-oriented.
We usually do some unit testing in a PC environment. For iRMX systems, the operating system can't be present in that environment, so we use our own shell to provide debug services. For example, we can use Get_Mail to provide any message that might be needed as input to a task. The task sees its actual interface, and only the shell has to be modified. (We believe it's fundamentally important that code, especially interfaces, not be changed to accommodate debugging.) When writing for VRTX, which can run on MS-DOS systems, we use the actual environment and have TRACER available.
To test single tasks or even small subsystems when we move to the target systems, we run under the target operating system and shell and use a special debug task. This task, Junk_Mail, provides us with a way of sending mail that in the complete system would originate from outside the unit or subsystem under test. It's an independent task, so it can be deleted from the final system build without affecting any operational code. The task itself contains a small loop that periodically tests a variable called Item. When Item becomes nonzero, a mail message is created and sent to some task in the system under test. Listing 4 shows a skeleton for a Junk_Mail task. Setting Item to a nonzero value can be done from an emulator or a debug terminal.
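Listing 4 isn't reproduced here, but the polling idea can be sketched in a few lines. The names and the send stub are ours, standing in for Send_Mail; in the real task this check runs once per polling period inside the task's loop.

```c
#include <stdint.h>

/* Flag set from an emulator or debug terminal; when nonzero, a test
   message is injected into the system under test. */
volatile uint16_t item = 0;
uint16_t last_sent = 0;

/* Stand-in for Send_Mail: just records what would have been mailed. */
static void send_mail_stub(uint16_t code) { last_sent = code; }

/* One polling cycle of a Junk_Mail-style task; returns 1 if a message
   was injected. Here the value of Item itself serves as the mail code. */
int poll_junk_mail(void)
{
    if (item == 0)
        return 0;
    send_mail_stub(item);
    item = 0;
    return 1;
}
```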
During debugging, the iRMX debugger provides information on memory management for the tasks under test. As discussed earlier, iRMX can list all segments in use and we can check that the creation and deletion of segments by tasks (primarily related to mailing information between tasks) are performed as intended. VRTX, however, provides information only on the number of free segments available in each partition.
Here's where Send_Mail and Delete_Segment can help us. Send_Mail identifies each mailed segment with a mail code and the intended recipient. Delete_Segment zeros out these items and writes the deleting task's identification into the recipient's space. A dump of the heap area tells us which segments are active and which have been deleted. As a bonus, if two tasks try to delete the same segment, the source of the problem is readily apparent.
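The deletion-time tagging can be shown concretely. This is a sketch under our own naming; the header layout matches the two-word standard header described earlier.

```c
#include <stdint.h>

/* The two-word header every mailed segment carries; names are ours. */
typedef struct {
    uint16_t mail_code;   /* set by Send_Mail, zeroed on deletion */
    uint16_t task_code;   /* recipient while live, deleter once freed */
} seg_header_t;

/* Before returning a segment to the operating system, overwrite its tags
   so a heap dump distinguishes live from freed segments. A double delete
   shows up as a second deleter's code overwriting the first. */
void mark_deleted(seg_header_t *h, uint16_t deleting_task)
{
    h->mail_code = 0;
    h->task_code = deleting_task;
}
```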
One of the first steps in system integration is verifying that all tasks and mailboxes have been created. iRMX provides a symbolic catalog of all objects known to it; symbols are assigned by the application when the object is created, and we need only examine the catalog for our verification. VRTX tells us only the identification numbers for active tasks and mailboxes. The actual names of objects can be found from these numbers, of course, but sometimes the translation is inconvenient.
Therefore, we've created a catalog that's entirely transparent to the application and hidden in the membrane services. (To implement the catalog efficiently, we used an array structure and were forced to constrain task and mailbox identifiers to integers in a predefined range; this is just about the only constraint placed on complete use of the operating system by our membranes.) The catalog allows us to define and examine symbolic names for objects the application creates. It can be disabled so that it doesn't take up any memory space when we move to an operational system. Again, this requires no changes to the operational code.
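A catalog along these lines might look as follows. The identifier range and name length are assumptions; the array structure is what forces the constraint on task and mailbox identifiers mentioned above.

```c
#include <string.h>

/* Object identifiers are constrained to a small integer range so the
   catalog can be a plain array; these bounds are illustrative. */
#define MAX_OBJECTS 64
#define NAME_LEN    12

static char names[MAX_OBJECTS][NAME_LEN];

/* Record a symbolic name when an object is created; 0 on success. */
int catalog_add(int id, const char *name)
{
    if (id < 0 || id >= MAX_OBJECTS)
        return -1;
    strncpy(names[id], name, NAME_LEN - 1);
    names[id][NAME_LEN - 1] = '\0';
    return 0;
}

/* Translate an identifier reported by the debugger back into a name. */
const char *catalog_lookup(int id)
{
    return (id >= 0 && id < MAX_OBJECTS) ? names[id] : "";
}
```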
Once all the objects have been created, our approach is to watch the activity at mailboxes to verify intertask communications. Most of the handling of the communications has been checked out in unit testing. iRMX allows us to break when mail is sent to or received from a mailbox; VRTX doesn't provide this service. Again, we used our membranes to augment VRTX debugging services.
We structure Get_Mail, through the use of a subroutine, to provide a distinct return statement for each mailbox. This subroutine is created automatically by a macro as part of the operating system configuration. We can then break on the appropriate return statements to emulate the iRMX facility. In some sense this is an improvement even on what iRMX provides, since we're able to specify a break for only a certain task, whereas the iRMX facility is controlled by specifying mailboxes. By setting a breakpoint that depends on a given task, we can determine immediately whether or not the right task got a message. For further debugging, the membranes routinely put the values of the mail code and sender into registers for easier access during a break.
We've created and used these and similar tools in developing a number of embedded systems ranging in size from 40,000 to 160,000 lines of code and comprising between five and 40 intercommunicating tasks. We used PL/M, C, Pascal, and assembler for the bulk of our implementations.
We must stress that our primary motivation in developing these tools was to hide nonessential details from the application, not to enhance transportability; of course, like maintainability, transport of code is always a background consideration. The value of hiding details, generalizing interfaces, and making code self-documenting is well proven and needs no defense from us; we can, however, discuss some additional advantages these tools have offered.
At one point, CNR was required to significantly enhance a fielded system of about 75,000 lines of PL/M code that ran on an 8086. The projected size of the application and other considerations led us to choose an 80286 processor and to use VRTX. The conversion from iRMX to VRTX was substantially complete and running within a month; very little was required beyond rewriting the membrane tasks such as Send_Mail and Get_Mail . In fact, much of that one-month effort was devoted to I/O and interrupt services that weren't handled through the operating system.
These tools would make it just as easy to move from VRTX to iRMX. In fact, the only place there might be a significant interface problem is in the segment-handling membranes, Create_Segment and Delete_Segment, since they currently require partition numbers. But the apparent difficulty disappears under closer scrutiny, and it's only necessary to change the Get_Partition and Choose_Partition routines to return the input size instead of a calculated partition number to satisfy iRMX.
We're confident that these membrane routines would allow us to move smoothly to any of the major real-time operating systems. We would just as surely use them to define the interface to our own operating system if the company is ever required to write one of its own again.
Porting between languages
Initially, the membrane tasks were written in PL/M since it has a number of constructs designed to take advantage of iRMX features. In fact, on one system developed in Pascal for use with iRMX, we used the same PL/M-based membrane tasks. With VRTX, we've used PL/M, Pascal, and C; not, we hasten to say, all on one project. Here, we chose to implement the membrane tasks in assembler.
As we took this support system from a PL/M-based project and made it available to one using C, the only substantive change we had to make was the order in which the assembler programs took parameters from the stack. Actually, when we moved to VRTX, we adopted a two-layer membrane. The outer (closer to the application) layer is in PL/M for PL/M and Pascal applications and in C for C applications; the inner layer is in assembler.
The tools described here were simple to build. They've proven to be effective, efficient aids to design, development, and debugging. Their design was influenced by our architectural approach and by the tools, languages, and operating systems available to us. The proven principles of our trade–structured programming, information hiding, and data abstraction–argue strongly that a similar approach to insulating the operating system from the application will pay off in almost any development organization.
When this article was written in 1989, Dennis P. Geller was director of software development at CNR Inc. He has cowritten several films and books on structured programming and computer information systems and holds a doctorate in computer and communication sciences. (He now works for Aptima.)
Anita Sanders was a senior project engineer at CNR and has a master's in electrical engineering.