
Embedded Systems Programming

Software Components for Real Time
Create your own framework for component-based real-time software without a huge cost, effort, or run-time overhead.
By David B. Stewart

Component-based software helps you get a system working quickly, keep costs down, and reuse the most robust software from prior applications. This article presents methods for creating your own framework for component-based real-time software without the huge cost, effort, or software overhead associated with using commercial tools that are dedicated to this task. Any C programming environment can be used to create components with minimal increase in CPU or memory usage. The discussion will focus on techniques for modular decomposition, detailed design, communication, synchronization, scheduling, I/O drivers, and real-time analysis. The solutions can be implemented as a layer above your favorite RTOS, or stand-alone for performance- and memory-constrained applications that do not use an RTOS. The techniques have been demonstrated on a variety of microcontrollers and general-purpose processors. They've been used in applications including robotics, locomotive control, amusement devices, consumer electronics, and satellite modems.

A component-based software paradigm can be used effectively in the design of embedded real-time systems to provide advantages such as software reuse, improved maintainability, reconfiguring software on the fly, and the ability to easily fine-tune a real-time application's timing properties. A more detailed discussion of the advantages of using component-based software is given in [15].

In this paper, we present techniques for developing the solid framework needed to support component-based software, using the port-based object (PBO) abstraction of a component. The techniques do not require any special commercial CASE (computer-aided software engineering) tools, and are compatible with most integrated development environments and RTOSes. For low-end processors without an RTOS, the methods can also be implemented using the dynamic scheduling real-time executive that is described in this article.

Some people believe that software reuse in embedded systems is near impossible; nothing could be further from the truth. It does not take tremendous experience or knowledge. Rather, the most important quality needed by the software designer and programmer to create reusable component-based software is discipline. This article provides the details on how to create such a software system; the discipline is required to follow some of the rules. The rules do not limit what can be done; they only limit how it is done, to ensure that the software can be reused. To enforce the discipline, formal design and code inspections should be performed at each step during the design and implementation phases.[1] Time spent on these reviews can easily save five to 10 times as much time debugging, both before and after deployment.


Modular vs. reconfigurable software
Modular software is characterized by many guidelines, which include a simple structure, data encapsulation, functional and informational cohesion, and separation of the interface specification from the internal behavior implementation.[10,11,14] The degree of modularity refers to a subjective measurement that describes the extent to which a software module follows these guidelines. For example, a system decomposed into modules may be classified as “somewhat modular” or “highly modular,” depending on a software engineer's assessment of how well the module meets the defined criteria.[3]

Reconfigurable components are modular components with the highest degree of modularity. Most important, they are modules designed to have replacement independence. In a modular system, there is often only one way to piece all the components together, because the interfaces of modules that need to be integrated are designed according to the other modules they interact with. For example, if a C or C++ module is written and a .h file of another module is #included, then the module becomes dependent on the interfaces of that other module. In contrast, interface specifications for reconfigurable components are designed according to a pre-defined standard, not according to the interfaces of the other modules with which they will be integrated. Interaction between components occurs through these standard interfaces only.

By the above definitions, software components designed according to the PBO model are reconfigurable, because they are modular and have replacement independence.

Generic vs. reconfigurable software
Reconfigurable software does not necessarily imply generic software, for which it is sometimes mistaken. It is possible to have both hardware- and application-dependent components that are not generic, but are reconfigurable. Classifications of reconfigurable software components are defined in this section.

A generic component is a module that is neither hardware dependent nor application dependent. The component can be configured for different types of hardware, and can be used in different applications.

Hardware-dependent (HD) components are software modules that can only be executed when specific hardware is part of the system. HD components can be of two types: interface components and computation components.

HD interface components are used to convert hardware-dependent signals into hardware-independent data, such that other components can interface with these modules. The HD interface components replace standard I/O device drivers, and provide an interface to application hardware such as robotic actuators, switches, sensors, and displays. They differ from RTOS I/O device drivers because, as processes with their own thread of control, they have the same standard interface as other software components, rather than being defined as system calls that are invoked through the operating system. The difference between our device driver model and the traditional model is illustrated in Figure 1. An extensive study of this driver model is given in an article by M. Moy and myself, as found in the Real-Time Symposium Proceedings.[9]

HD computation components provide similar functionality as generic components, but with better performance or added functionality, due to hardware-specific optimizations or modifications of the generic component. Unlike the interface components, they do not communicate directly to hardware; they are simply dependent on having specific hardware as part of the system. Rather, they interface to the HD interface components through their input and output ports.

Application-dependent components are modules used to implement the specific details of an application. As the name implies, these components are not reusable across different applications. Ideally, these components are eliminated, since they must be redeveloped for each new application. Modules initially defined as application-dependent components, however, can often be transformed into generic components if an algorithmic abstraction of the module's functionality is possible; this would result in hard-coded information being converted to variable data. The configuration data can then be obtained from the user through a tele-operating device or keyboard, from a previously stored configuration in non-volatile RAM or EPROM, from a file, or from an external subsystem, depending on the capabilities offered by the target hardware.

Port-based object model
The model of a software component described in this article is targeted specifically to embedded real-time control systems. However, the techniques to create a framework and reusable objects that plug into the framework can be applied to other applications as well.

The model is based on domain-specific elemental units to maximize usability, flexibility, and real-time predictability. A framework is designed that uses these elemental units as building blocks to incrementally create larger, more complex applications.

There are two distinct aspects to integrating components. One is to integrate the data paths from an architectural perspective, as described in this section. The other is to integrate the code through use of a framework process and objects that “plug in” to the framework.

The independent process is the elemental process model that underlies the software component. An independent process does not have to communicate or synchronize with any other component in the system, making integration simple. A system that is composed only of independent components, however, is very limiting, because no means exist to share data or resources. Nevertheless, this extreme emphasizes a desire to keep the pieces of the application as self-contained as possible, by minimizing the dependencies between components. The fewer dependencies a component has, the simpler it will be to integrate it into the system.

Steenstrup, Arbib, and Manes formalized the algebra of independent concurrent processes with their port-automaton theory.[13] They model a concurrent process as an independent automaton that operates on the state of the environment. The communication between objects is based on a structured blackboard design that operates as follows.

When a process needs information, it obtains the most recent data available from its input ports. This port can be viewed metaphorically as a window in your house; whatever you see out the window is what you get. There is no synchronization with other processes and there is no knowledge as to the origin of the information that is obtained from this port.

When a process generates new information that might be needed by other processes, it sends this information to its output ports. An output port is like a door in your home; you can open it, place items outside for others to see, then close it again. As with the input ports, there is no synchronization with other processes, nor do you know who might look at the information placed on the output ports.
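The window/door semantics above can be sketched in a few lines of C. This is a hypothetical single-threaded illustration of most-recent-data ports (the names port_slot_t, port_put, and port_get are ours, not the article's API); a real framework would add the synchronization-free locking discussed later.

```c
/* Hypothetical illustration of most-recent-data port semantics.
 * A port slot simply holds the latest value written; readers copy
 * whatever is there, with no blocking and no sender identity. */
typedef struct {
    double value;   /* latest datum placed on the port */
    int    valid;   /* nonzero once something has been written */
} port_slot_t;

/* Writer: overwrite the slot with new data (the "door"). */
static void port_put(port_slot_t *p, double v)
{
    p->value = v;
    p->valid = 1;
}

/* Reader: copy the most recent data, if any (the "window"). */
static int port_get(const port_slot_t *p, double *out)
{
    if (!p->valid)
        return 0;           /* nothing has been produced yet */
    *out = p->value;
    return 1;
}
```

Note that a newer value silently replaces an older one; a reader never sees a queue of past samples, only the latest.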

In addition to the independent process, the object is selected as an elemental software abstraction. As stated by Wegner, an object is the atomic unit of encapsulation, with operations that control access to the data.[21] The term object does not imply “object-oriented design,” which extends objects to include inheritance and polymorphism. The references to objects in this article are classified as object-based design, as defined by Wegner's distinction between that term and object-oriented design.[22] Note that objects without inheritance and polymorphism are in effect abstract data types (ADTs), and are easily implemented in C; C++ is not necessary.
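The object-based (as opposed to object-oriented) style maps directly onto an ADT in plain C: a struct holds the encapsulated data, and a set of functions controls all access to it. A minimal hypothetical sketch (the counter example and its names are ours):

```c
#include <stdlib.h>

/* Hypothetical object-based ADT in plain C: data is encapsulated
 * in a struct, and operations control all access to it.
 * No inheritance or polymorphism is involved. */
typedef struct {
    int count;
} counter_t;

counter_t *counter_create(int start)
{
    counter_t *c = malloc(sizeof *c);
    if (c) c->count = start;
    return c;
}

void counter_increment(counter_t *c) { c->count++; }
int  counter_value(const counter_t *c) { return c->count; }
void counter_destroy(counter_t *c) { free(c); }
```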

The algebraic model of a port automaton and the software abstraction of an object are combined to create the PBO model, as depicted in Figure 2. A PBO is drawn within a data-flow diagram as a round-corner rectangle, with input and output ports shown as arrows entering and leaving the side of the rectangle. Configuration constants are drawn as arrows entering/leaving the top of the rectangle. Resource ports are shown as arrows entering/leaving the PBO from the bottom.

A PBO executes as an independent concurrent process, whose functionality is defined by methods of a standardized object. In C, the objects are implemented as ADTs. Communication with other modules is restricted to its input ports and output ports, as described above. The configuration constants are used to reconfigure generic components for use with specific hardware or applications.

In addition to input and output ports, we also define resource ports, which are needed to create an environment for multi-sensor integration. The resource ports are for modeling only, to show the source or destination of data that is exchanged with I/O hardware. In practice, the resource ports are implemented in a hardware-dependent manner, as the reads and writes of the I/O hardware's registers. The resource ports connect to sensors and actuators, allowing the PBO model to be used to replace the more traditional POSIX style of device drivers. Details of accessing the sensor or actuator are encapsulated within the PBO, resulting in an HD interface component.
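A resource port, then, is just an encapsulated register access. The hypothetical sketch below shows an HD interface component's internals for an assumed 12-bit A/D converter; the register address is passed in as a pointer so the hardware-dependent part stays in one place (all names and the 3.3 V scale are illustrative assumptions):

```c
#include <stdint.h>

/* Hypothetical resource-port access for an assumed 12-bit ADC.
 * The PBO encapsulates the register read and exposes only
 * hardware-independent data (volts) on its output port. */
static double adc_to_volts(uint16_t raw)
{
    return (raw / 4095.0) * 3.3;   /* 12-bit full scale -> 3.3 V */
}

/* Resource-port read: data_reg would point at a memory-mapped
 * register, e.g. (volatile uint16_t *)0x40001000 on real hardware. */
static double adc_read_volts(volatile uint16_t *data_reg)
{
    return adc_to_volts(*data_reg);
}
```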

Modeling PBOs to have optional configuration constants and resource ports allows the use of the same PBO model for different types of components. A sample library of PBOs for robotic manipulators is shown in Table 1. The library represents a subset of PBOs that were created in a robotics laboratory at Carnegie Mellon University.[19]

An important note about the functional descriptions of the modules is that the framework is designed independent of the granularity of functionality in each PBO. The granularity is defined by the software architect who decomposes an application into modules; the framework then provides the mechanisms for quickly realizing each of these modules by using the PBO model to implement them as reconfigurable objects.

Similarly, the framework defines neither the type nor the semantics of the port variables. A variable type mechanism is used so that data transmitted over the ports can be of any type. For example, it can be raw data, such as input from an A/D converter; processed data, such as positions and velocities; or processed information, such as structures describing types and locations of objects in the environment. The names of the ports are configurable, and specified during the initialization of the system.
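One common way to realize such a variable type mechanism in C is a tagged union; the hypothetical sketch below covers the three kinds of port data just listed (the type names and helper constructors are ours):

```c
/* Hypothetical tagged-union "port variable", sketching how a port
 * can carry raw samples, processed values, or structured data. */
typedef enum { PV_RAW, PV_POSITION, PV_OBJECT_INFO } pv_type_t;

typedef struct {
    pv_type_t type;
    union {
        int    raw;                /* e.g. raw A/D reading */
        double position;           /* e.g. joint position in radians */
        struct { int id; double x, y; } object;  /* object in environment */
    } u;
} port_var_t;

/* Helper constructors keep the tag and payload consistent. */
static port_var_t pv_raw(int sample)
{
    port_var_t v; v.type = PV_RAW; v.u.raw = sample; return v;
}

static port_var_t pv_position(double pos)
{
    port_var_t v; v.type = PV_POSITION; v.u.position = pos; return v;
}
```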

As defined by Dorf, “a control system is an interconnection of components forming a system configuration which will provide a desired system response.” [4] Each component can be mathematically modeled using a transfer function to compute an output response for any given input response. The port-automaton theory provides an algebraic model for these types of control systems. By incorporating the model into the PBO, the PBOs provide a model suitable for control engineers. PBOs are configured to form a control system in the same way that a control engineer configures a system using transfer functions and block diagrams. This approach allows the framework to satisfy an important criterion: to make it easy to program for a target audience of control engineers who do not have extensive training in software engineering or real-time systems programming.

A configuration is a set of PBOs that are interconnected to provide the required open-loop or closed-loop system. A configuration is valid only if for every PBO selected, any data that it requires at its input ports is produced by one of the other PBOs as output. As per the port-automaton theory, the control engineer does not have to be concerned with how data gets from the output of one PBO to the input of another PBO. The communication is embedded in the framework, such that it is transparent to the control engineer. A configuration also cannot have two PBOs that produce the same output; otherwise, a conflict may arise as to which output should be used at a given time.

Port names are used to perform the bindings between input and output ports. Whenever two PBOs exist with matching input and output ports, the framework creates a communications link from the output to the input. If necessary, the output can be fanned into multiple inputs. Our framework uses an internal/external name separation for the ports, such that the name used to code the PBO can be independent of the name used for linking that object to other PBOs.
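The name-matching step can be sketched as a simple binding pass at initialization. This is a hypothetical illustration (the structures and bind_ports function are ours): each input port records the index of the output port whose external name matches, and one output may fan out to several inputs.

```c
#include <string.h>

/* Hypothetical name-based port binding.  Unbound inputs keep -1. */
typedef struct { const char *name; } out_port_t;
typedef struct { const char *name; int bound_to; } in_port_t;

/* Returns the number of inputs successfully bound to an output. */
static int bind_ports(in_port_t *in, int nin,
                      const out_port_t *out, int nout)
{
    int bound = 0;
    for (int i = 0; i < nin; i++) {
        in[i].bound_to = -1;
        for (int j = 0; j < nout; j++) {
            if (strcmp(in[i].name, out[j].name) == 0) {
                in[i].bound_to = j;   /* link input i to output j */
                bound++;
                break;
            }
        }
    }
    return bound;
}
```

An input left unbound would make the configuration invalid, per the validity rule above.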

Configuration examples
The flexibility of creating applications using software components is demonstrated in this section. Details of designing individual PBOs are given in a following section.

Cartesian control of the RMMS
Figure 3a shows a configuration, using modules from our sample library shown in Table 1, to perform teleoperated Cartesian control of a reconfigurable modular manipulator system (RMMS).[12] The configuration of this robot is not known beforehand. Rather, its configuration is read from EPROMs embedded in the robot during initialization. From that configuration, the rmms module outputs its hardware configuration via the NDOF (number of degrees of freedom) and DH (Denavit-Hartenberg parameters, a method of mathematically specifying the shape of a robot) configuration constants. The constants are used as input to the gfwdkin and ginvkin modules, which can be configured for any robot based on NDOF and DH.[6] A teleoperation interface is provided by the 6-DOF trackball, and the cinterp module is used to generate intermediate trajectory points for the robot, because the tball module typically executes at a much lower frequency than the other modules.

The software framework does not pose any constraints on the frequency of each PBO. Rather, as defined by the port-automaton theory, every PBO is an independent, concurrent process that can execute at any frequency. Whenever that process needs data from its input ports, it retrieves the most recent data available. When it completes its processing, it then places any new data onto its output ports.

A configuration can be executed in either a single- or multi-processor environment. In a multi-processor environment, the control engineer only needs to specify which processor to use for each PBO. The communication between PBOs and synchronization of their processes is otherwise identical, and fully transparent to the control system engineer.

Cartesian teleoperation of a Puma 560
Suppose that a Puma 560 robot is to be used instead of the RMMS. The rmms module can be replaced with the puma robot interface module, as shown in Figure 3b. Since the Puma is a fixed configuration robot, its NDOF and DH parameters are constant. Instead of reading these values from the robot, they can instead be hard-coded into the puma module, and output as configuration constants. There is no need to change any other module, since the gfwdkin and ginvkin modules will configure themselves during initialization for the Puma based on the new values of NDOF and DH.

Improving performance of a Puma 560
Generic components are useful for enabling rapid prototyping, but they may not always be computationally efficient. For example, the generalized computation of the forward kinematics (module gfwdkin) is based on the DH configuration constants and uses matrix operations. This will naturally be slower than performing similar computations for a specific robot, such as the Puma 560, where the DH parameters are constant. Unnecessary computations (such as multiplying by zero or one, or computing sin(π/2)) can be eliminated.

An HD computation component can be created to improve the performance of an application. The pfwdkin and pinvkin modules are examples of such components. They compute the forward and inverse kinematics specifically for a Puma 560, and they execute faster than their generic counterparts. It is then desirable to replace gfwdkin with pfwdkin, and ginvkin with pinvkin, as shown in Figure 3c, whenever the puma HD interface component is used.

In order for an HD computation component to replace a generic component, it must provide at least the same outputs and must not require any additional inputs as compared to the generic component. Even when an HD component is used, it does not eliminate the usefulness of the generic component. For example, in order to improve fault tolerance of an application, the generic component can still be used as a standby module, or as shown in Figure 3d, it can execute in parallel with the HD computation component, albeit at a lower frequency, in order to provide consistency checks.

Autonomous execution of a Puma 560
As an example of an application component, suppose that a custom autonomous trajectory module ctraj is created to replace the teleoperation module tball, as shown in Figure 3e. The component can be integrated into the system by defining it as a PBO.

Even though a module is application dependent, it does not have to be hardware dependent. If the hardware for the application is changed, the application component does not necessarily have to change. Figure 3f shows this by replacing puma with rmms, but not changing the trajectory of the robot's end effector, as defined by ctraj.

Overview of framework
Creating code using the PBO methodology is an “inside-out” programming paradigm as compared to traditional coding of real-time processes, as shown in Figure 4.

The traditional approach is used by most current RTOSes. Processes are created, each with their own main() (or equivalent function name). The process executes user code and controls the flow of the program. It invokes the RTOS, typically via a system call, whenever an RTOS service is required. RTOS services include communication, synchronization, programming timers, performing I/O, and creating new processes. Because execution is under the user's control, the user is responsible for all of the resource management, such as scheduling, communication, and synchronization.

Instead, to achieve the consistency needed to support component-based software, the RTOS must serve as the resource manager. The RTOS has control of execution at all times, and performs the communication, synchronization, scheduling, and process management in a predictable manner. Only when necessary, the RTOS invokes a method of one of the software components to execute application code.

In this section, we provide more details on the framework. In particular, we show how to create a framework for both periodic and aperiodic tasks, in both non-preemptive and preemptive environments. Typically a non-preemptive approach is used for low-end processor environments that cannot afford the overhead of a full RTOS. The framework for the preemptive approach can be implemented as middleware, to operate with your favorite RTOS. The difference in the approaches is illustrated in Figure 5. The non-preemptive case has only a single context, and each object is plugged in as it is needed. For the preemptive case, the context of the framework is replicated for each task, and a PBO is bound to that framework for the lifetime of the task. The details of the framework follow in the remainder of this section.

The framework process
Component-based software support is realized by creating a single, standard process that we call the framework process (pboframe). Both periodic and aperiodic processes in the system use this same framework. The process pboframe takes a PBO as an argument. The PBO defines the module-specific code, including the input and output ports, configuration constants, the type of process (for example, periodic process or aperiodic server), and the timing parameters such as frequency, deadline, and priority.
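As a concrete illustration, the module-specific information that pboframe receives could be collected in a descriptor structure along these lines. This is a hypothetical sketch; the field names are ours, not the article's API.

```c
/* Hypothetical descriptor for a PBO, as handed to the framework
 * process pboframe.  Field names are illustrative only. */
typedef enum { PBO_PERIODIC, PBO_APERIODIC } pbo_kind_t;

typedef struct {
    const char *name;           /* module name, e.g. "pid" */
    pbo_kind_t  kind;           /* periodic process or aperiodic server */
    double      freq_hz;        /* desired frequency (periodic only) */
    int         priority;       /* scheduling priority */
    const char *in_ports[4];    /* external names of input ports */
    const char *out_ports[4];   /* external names of output ports */
} pbo_desc_t;
```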

The framework process implements a finite state machine with three states, as shown in Figure 6. The states are shown as bold ellipses, and are NOT_CREATED, ON, and OFF. Extensions to include an error state can be found in [19]. State transitions are shown in the diagram as process flow diagrams. A state transition is triggered by a signal (drawn as solid bars). Signals may originate from interrupts, a planning module, an external subsystem, or from the user through a graphical user interface.

In response to a signal, a data transfer is made to receive data from other objects. One of the user-defined functions is then called, followed by another transfer to send data to other objects.

The PBO method that is called depends on the state of the process and the signal that is received. For example, if a PBO is in the ON state, and it receives a wakeup signal, then it will execute the cycle method and remain in the ON state. On the other hand, if the PBO is in the ON state, and receives the kill signal, then it will execute the off method, followed by the kill method, then enter the NOT_CREATED state.
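The state/signal transitions just described can be sketched as a dispatch routine. This is a hypothetical illustration with the PBO methods stubbed out as counters; the real framework would call the component's cycle, off, and kill methods, with data transfers around them.

```c
/* Hypothetical sketch of the framework's state/signal dispatch. */
typedef enum { PBO_NOT_CREATED, PBO_OFF, PBO_ON } pbo_state_t;
typedef enum { SIG_WAKEUP, SIG_ON, SIG_OFF, SIG_KILL } pbo_signal_t;

typedef struct {
    pbo_state_t state;
    int cycles;       /* times the cycle method was invoked */
    int killed;       /* times the kill method was invoked */
} pbo_t;

static void pbo_dispatch(pbo_t *p, pbo_signal_t sig)
{
    switch (p->state) {
    case PBO_ON:
        if (sig == SIG_WAKEUP) {
            p->cycles++;              /* run cycle method, stay ON */
        } else if (sig == SIG_OFF) {
            p->state = PBO_OFF;       /* off method, go to standby */
        } else if (sig == SIG_KILL) {
            p->killed++;              /* off, then kill methods */
            p->state = PBO_NOT_CREATED;
        }
        break;
    case PBO_OFF:
        if (sig == SIG_ON) {
            p->state = PBO_ON;        /* on method, begin execution */
        } else if (sig == SIG_KILL) {
            p->killed++;
            p->state = PBO_NOT_CREATED;
        }
        break;
    case PBO_NOT_CREATED:
        break;                        /* must be (re)created first */
    }
}
```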

The framework process, as shown, evolved over several years as we designed and tested many variations, in order to obtain a common program structure for all software components. The diagram represents the most recent revision in the evolution of the structure. Many design decisions are implied by the detailed PBO framework, as now discussed.

Despite the seeming complexity of the framework, dissecting it into pieces shows that it is indeed rather simple. In the steady state, PBO processes are all in the ON state, executing their cycle method once per cycle or event, and going back to sleep until the next wakeup signal. Note that the only difference between a periodic process and an aperiodic server is the source of the wakeup signal. For a periodic process, the wakeup signal is received from the clock, as a virtual timer interrupt. For the aperiodic processes, the process blocks on a semaphore, message, or event, as defined by the sync method of the PBO.

The autonomous nature of the PBO allows the most popular scheduling algorithms, such as the rate monotonic static priority, [7] earliest-deadline-first, [7] or maximum-urgency-first dynamic priority scheduling [20] algorithms, to be used to schedule PBOs. The user can also choose between a preemptive or non-preemptive environment, and still use the PBO model. The control systems designer only needs to specify the frequency of the cycle routine for each PBO based on the needs of the application.

By using the timing error detection and handling methods described in an article called “Mechanisms for Detecting and Handling Timing Errors” by myself and P. K. Khosla, [16] aperiodic servers can use the same fundamental structure as periodic processes. The framework can define aperiodic processes as either deferrable or sporadic servers, and use them with either the rate monotonic static priority or maximum-urgency-first dynamic priority scheduling algorithms to ensure predictable scheduling.

The remainder of the framework handles the initialization, termination, and reconfiguration. To support dynamic reconfiguration, a two-stage initialization and termination is used. High-overhead initialization of a new process can be performed upon system start-up in preparation for being activated. The initialization includes creating a process's context, dynamically allocating its memory, binding input and output ports, and calling the user-defined init method. The process then waits in the OFF state, and can be viewed as being in a standby mode for a dynamic reconfiguration. When an ON signal is received, the local table is updated to reflect the current state of the system, and execution begins.

Implementation of the framework
The framework is defined in two parts: header information, including data structures and function prototypes that enforce the interface between the PBOs and the framework; and the code that executes the FSM that was shown in Figure 6.

We provide a sample of the framework header file in Listing 1; this code would go in the file pbo.h, to be #included by every software component. Note that this framework code only needs to be written once, and can be used from one application to another.

The header file defines the pboFunc_t structure that will contain a pointer to each of the functions of the PBO. PBO_MODULE(xyz) is a macro that expands into all of the declarations needed by the software component, and allows the compiler to enforce the API of the component. The expansion of this macro is one of the keys to creating component-based software, and is illustrated in more detail in Figure 7.

On the left side of this diagram, the framework process only knows about components based on their functions. In this example, the object “car” is plugged into the framework. To do so, the structure carFunc needs to be defined. This structure is defined by placing PBO_MODULE(car) in the declarations part of the file car.c. The right side of the diagram shows the generic form of the PBO_MODULE macro, and how substitution of xyz for car results in the desired declarations (## is the C preprocessor token pasting operator; it is part of ANSI C). The macro also defines the function prototypes for each method, so that when the code is compiled, if the user's code does not match what is expected by the framework, compiler warnings or errors will be generated.
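A minimal, hypothetical reconstruction of the pattern shows how the token pasting works; the real pbo.h declares more methods (init, on, cycle, off, kill, and so on) and richer structures than this two-method sketch.

```c
/* Hypothetical sketch of the PBO_MODULE macro pattern: ## pastes
 * the module name into per-module identifiers (carInit, carFunc),
 * and the declared prototypes let the compiler check the API. */
typedef struct {
    int (*init)(void);
    int (*cycle)(void);
} pboFunc_t;

#define PBO_MODULE(xyz)                                   \
    static int xyz##Init(void);                           \
    static int xyz##Cycle(void);                          \
    pboFunc_t xyz##Func = { xyz##Init, xyz##Cycle };

/* In car.c, the component author writes: */
PBO_MODULE(car)

/* These definitions must match the generated prototypes, or the
 * compiler reports the mismatch. */
static int carInit(void)  { return 0; }
static int carCycle(void) { return 42; }
```

The framework then calls the component only through carFunc, never through the component's function names directly.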
