Doing design and debug on real-time distributed applications

Bob Kindel, Real-Time Innovations

January 31, 2008


Real-time system designers and embedded software developers are very familiar with the tools and techniques for designing, developing, and debugging standalone or loosely coupled embedded systems: UML at the design stage, an IDE during development, and debuggers and logic analyzers (among other tools) at the integration and debug phases.

However, as connectivity between embedded systems becomes the norm, what used to be a few connected nodes with clear functional separation between the applications on each is now often tens or hundreds of nodes with logical applications spread across them.

In fact, such distributed systems are becoming increasingly heterogeneous in terms of both operating systems and processors, with tight connectivity between real-time and enterprise systems becoming the norm.

This article will identify the issues of real-time distributed system development and discuss how development platforms and tools have to evolve to address this challenging new environment.

The idea of a 'platform' for development has long pervaded the real-time embedded design space as a means to define the application development environment separately from the underlying (and often very complex) real-time hardware, protocol stacks and device drivers.

Much as the OS evolved to provide the fundamental building blocks of standalone system-development platforms, real-time middleware has evolved to address the distributed-systems development challenges of real-time network performance, scalability and heterogeneous processor and operating system support.

And as has already happened in the evolution of the standard real-time operating system, new tools are becoming available to support development, debug and maintenance of the target environment: in this case, real-time applications in large distributed systems.

The Distributed-System Development Platform
From the individual application developer's perspective, there are three basic capabilities which must be provided by an application development platform when a logical application spans multiple networked computers:

1. Communication between threads of execution
2. Synchronization of events
3. Controlled latency and efficient use of the network resources

Communication and synchronization are fairly obvious distributed-platform service requirements and are analogous to the services provided by an OS. However, for distributed applications they have to run transparently across a network infrastructure of heterogeneous OSes and processors, with all that implies in terms of byte ordering and data-representation formats.
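The byte-ordering problem is easy to see with a short sketch. The message layout below is purely illustrative (not any real middleware's wire format): by packing fields in a single agreed byte order, a little-endian sender and a big-endian receiver decode the same bytes identically.

```python
import struct

# Illustrative wire format, not a real protocol: a 32-bit unsigned
# sequence number followed by a 64-bit float, packed big-endian
# ("network order") so all hosts agree regardless of native endianness.
WIRE_FORMAT = ">Id"  # '>' = big-endian; I = uint32, d = float64

def encode(seq: int, value: float) -> bytes:
    """Serialize one sample into the agreed byte order."""
    return struct.pack(WIRE_FORMAT, seq, value)

def decode(payload: bytes) -> tuple:
    """Deserialize a sample; correct on any host architecture."""
    return struct.unpack(WIRE_FORMAT, payload)

msg = encode(7, 98.6)
assert decode(msg) == (7, 98.6)
```

In practice the middleware performs this marshaling automatically from the user's data-type definitions, so application code never touches raw byte order.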

The platform should ideally use a mechanism that does not require the developer to know the location of the intended receiver of a message or synchronizing thread, so that the network can be treated as a single target system from an application-development perspective.
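Location transparency is easiest to see in a topic-based publish/subscribe model. The toy in-process bus below (illustrative only; real middleware does this across a network) shows the key property: the publisher addresses a topic name, never a node or subscriber.

```python
from collections import defaultdict
from typing import Any, Callable

class TopicBus:
    """Toy in-process topic bus mimicking location transparency.
    Publishers and subscribers rendezvous on a topic name; neither
    knows where the other lives."""

    def __init__(self) -> None:
        self._subs = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, data: Any) -> None:
        # Deliver to whoever subscribed, wherever they are.
        for handler in self._subs[topic]:
            handler(data)

bus = TopicBus()
received = []
bus.subscribe("engine/temperature", received.append)
bus.publish("engine/temperature", 92.5)  # sender never names a receiver
```

The topic name ("engine/temperature" here is a made-up example) is the only coupling between the two sides, which is what lets the network be treated as a single target system.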

Typically, a developer will use a commercial or home-grown middleware to provide these key capabilities. Several middleware solutions support this approach, such as JMS and the Data Distribution Service (DDS) from the Object Management Group (OMG).

Figure 1. DDS provides a framework for providing controlled latency and efficient use of target network resources.

But only solutions such as DDS (Figure 1, above) explicitly address the third point: controlled latency and efficient use of (target) network resources, a critical issue in real-time applications. DDS provides messaging and synchronization similar to JMS, but additionally incorporates a mechanism called Quality of Service (QoS).

QoS brings to the application level the means to explicitly define the level of service (priority, performance, reliability, etc.) required between the originator of a message or synchronization request and the recipient.

DDS treats the target network somewhat like a state machine, recognizing that real-time systems are data driven and it's the arrival, movement, transition and consumption of data that fundamentally defines the operation of a real-time system.

Some data is critical and needs to be obtained and processed within controlled or fixed latencies, especially across the network. Moreover, some data needs to be persisted for defined periods of time so it can be used in computation; other data may need to be reliably delivered but is less time-critical. QoS facilitates all these requirements and more.

Perhaps the greatest advantage of using middleware often isn't appreciated until late in the application development process: defining interfaces in a rich middleware format makes it much easier to integrate, debug and maintain a system. What good middleware does is allow you to completely specify the data interaction through quality of service, which forms a "contract" for the application.

DDS, for example, allows a data source to specify not only the data type but also whether the data is sent with a "send once" or "retry until" semantic, how big a history to store for late arriving receivers, the priority of this source as compared to others, the minimum rate at which the data will be sent, as well as many, many more possibilities.
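The "history for late-arriving receivers" idea can be sketched in a few lines. This is a toy model of the concept, not the DDS API: a writer keeps its last `depth` samples and replays them to a reader that joins after publication has started.

```python
from collections import deque

class HistoryWriter:
    """Toy writer modeling a history QoS (illustrative, not a real API):
    the last `depth` samples are cached so late-joining readers catch up."""

    def __init__(self, depth: int) -> None:
        self._history = deque(maxlen=depth)  # oldest samples fall off
        self._readers = []

    def write(self, sample) -> None:
        self._history.append(sample)
        for reader in self._readers:
            reader.append(sample)  # live delivery to attached readers

    def attach_reader(self) -> list:
        reader = list(self._history)  # replay cached history on join
        self._readers.append(reader)
        return reader

writer = HistoryWriter(depth=2)
for sample in (1, 2, 3):
    writer.write(sample)
late = writer.attach_reader()  # joins after three writes
# late starts as [2, 3]: the two most recent samples, replayed on join
```

A larger depth trades memory for a longer catch-up window; the point is that this trade-off is declared as policy rather than hand-coded into each application.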

By setting these explicitly, many of the subtle issues that crop up during integration can be addressed quickly by matching promised behavior against requested behavior. DDS middleware will even provide warnings at runtime when contracts aren't met.
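The request/offer matching itself is mechanical, which is why middleware can check it at runtime. The sketch below is a hypothetical, heavily simplified model of that check; the policy names echo DDS concepts but this is not the DDS API, and real matching covers many more policies.

```python
from dataclasses import dataclass

# Stronger reliability ranks higher; a writer must offer at least
# what the reader requests for the contract to be satisfied.
RELIABILITY_RANK = {"best_effort": 0, "reliable": 1}

@dataclass
class Qos:
    reliability: str    # "best_effort" or "reliable"
    deadline_ms: float  # maximum allowed gap between updates

def contract_warnings(offered: Qos, requested: Qos) -> list:
    """Return a warning per policy where the writer's offer falls
    short of the reader's request (empty list = contract met)."""
    warnings = []
    if RELIABILITY_RANK[offered.reliability] < RELIABILITY_RANK[requested.reliability]:
        warnings.append("reliability: reader requested 'reliable', "
                        "writer only offers 'best_effort'")
    if offered.deadline_ms > requested.deadline_ms:
        warnings.append(f"deadline: writer offers {offered.deadline_ms} ms, "
                        f"reader requires {requested.deadline_ms} ms")
    return warnings

# A mismatched pair: best-effort, slow writer vs. reliable, fast reader.
issues = contract_warnings(Qos("best_effort", 200.0), Qos("reliable", 100.0))
```

Because both sides state their requirements declaratively, incompatibilities like these surface as explicit warnings at startup or discovery time instead of as intermittent integration failures.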

