OMG Data-Distribution Service brings real-time to the Data-centric Publish-Subscribe model

Many real-time applications need to model some of their communication patterns as a pure data-centric exchange, in which applications publish (supply or stream) “data” that is then available to the remote applications interested in it.

These types of real-time applications can be found in C4I systems, industrial automation, distributed control and simulation, telecom equipment control, and network management.

Of primary concern to these real-time applications is the efficient distribution of data with minimal overhead and the ability to control Quality of Service (QoS) properties that affect the predictability, overhead, and resources used.

The OMG Data-Distribution Service (DDS) standardizes a data-centric publish-subscribe model to address these needs. The specification also defines the entities, operations, and QoS policies an application can use. In June 2004, the Object Management Group finalized the Data-Distribution Service for Real-Time Systems specification.

This is the most significant addition to the portfolio of specifications addressing the needs of real-time systems that OMG has made in recent years. The finalization was eagerly awaited by programs including the Navy Open Architecture, DISR, and Boeing FCS, which had already selected DDS as the critical data-distribution piece of their net-centric strategy.

The DDS standard unifies some of the best practices present in successfully deployed real-time data-distribution middleware such as NDDS from Real-Time Innovations and SPLICE from THALES.

In essence, DDS creates the illusion of a shared “Global Data Space” populated by data-objects that applications in distributed nodes can access via simple 'read' and 'write' operations.

In reality, the data does not really “live” in any one computer's address space. Rather, it lives in the local caches of all the applications that have an interest in it. Here is where the publish-subscribe aspect becomes critical.

Figure 1: Overall DCPS model

Publish-Subscribe and Data-Distribution
The classic distributed shared-memory model allows applications to access data remotely using simple read and write operations. However, these architectures don't scale beyond SMP computers or tightly-coupled clusters.

The reason is that the random-access semantics of memory and the implied totally-reliable “instantaneous” response cannot be implemented transparently in a LAN or WAN where computers can join and leave and communication links can have sporadic faults. Hiding all these details from the application is not practical—the model is simply not a good fit for the physical realities of a distributed system.

The Publish-Subscribe model has steadily gained popularity in the context of distributed-systems programming. The critical concept behind Publish-Subscribe is very simple. Applications must be programmed in such a way that the declaration of information-access intent—that is, what the application wants to do—is separate from the information exchange itself. This separation allows the middleware to 'prepare itself,' reserving all the needed resources so that the information access can be as efficient as possible.

In the context of data-distribution, the Publish-Subscribe approach means that the application must declare its intent to write data and specify which data objects it wishes to write (i.e., define its publications). Similarly, it must declare its intent to read data and specify which data objects it intends to read (i.e., define its subscriptions) before it actually writes or reads the data itself.

Figure 2: Triggering Real-time Publish-Subscribe

An essential aspect of any publish-subscribe system is how the applications identify what they intend to publish or subscribe to. Most publish-subscribe systems use a combination of an application-selected label (called a “topic”) and a filter on the content of the data itself. DDS uses a slightly more powerful approach that combines Topic, Content, and a special field (the key) to identify each data-object in the Global Data Space.
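To make the matching idea concrete, here is a minimal sketch, not the DDS API, of how a subscription could be matched against a published sample using a topic name plus an optional content filter. All names (`matches`, the `FlightTrack` topic, the sample fields) are invented for illustration:

```python
# Illustrative sketch (not the DDS API): matching a published sample
# against a subscription by topic name and an optional content filter.
# The topic name and field names below are invented examples.

def matches(subscription, topic, sample):
    """Return True if a published sample is relevant to a subscription."""
    if subscription["topic"] != topic:
        return False                          # topic label must agree
    content_filter = subscription.get("filter")
    return content_filter is None or content_filter(sample)

# Subscribe to flight tracks, but only for aircraft above 10,000 m.
sub = {"topic": "FlightTrack", "filter": lambda s: s["altitude"] > 10_000}

print(matches(sub, "FlightTrack", {"flight": "UA42", "altitude": 35_000}))  # True
print(matches(sub, "FlightTrack", {"flight": "UA42", "altitude": 5_000}))   # False
print(matches(sub, "Alert", {"text": "engine overheat"}))                   # False
```

DDS additionally designates key fields inside the data type itself, which is what allows the middleware to track individual instances, as discussed below.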

The DDS specification was developed with the needs of real-time systems in mind. To this end, it defines a comprehensive list of Quality of Service (QoS) parameters that allow the application to fine-tune the resources used by the system. In addition, the API includes a programmable callback mechanism designed to provide very high performance in terms of latency and throughput. The DDS model also supports event propagation and messaging by means of special QoS settings. In that sense, the Data-Distribution model subsumes the capabilities of many event-distribution and messaging models.

Addresses in the Global Data-Space
In data-centric systems, the information exchanges refer to values of an imaginary global data object. Given that new values typically override prior values, both application and middleware need to identify the actual instance of the “Global data object” the value applies to.

In other words, a publisher writing the value of a data-object must have the means to indicate uniquely the data object it is writing. This way, the middleware can distinguish the instance being written and decide, for example, to keep only the most current value.

DDS uses the combination of a Topic object and a key to uniquely identify data-object instances. The representation and format of the key depends on the data type. However, since a Topic is bound to a unique type, the service can always interpret the key properly given the Topic and the value of a data object.

Figure 3: Tracking Instances in DCPS

The combination of a fixed-type Topic and a key is sensible for data-centric systems because the Topic represents a set of related data-objects that are treated uniformly (e.g., track information of aircraft as generated by a radar system), where each individual aircraft can be distinguished by the value of a data-field (such as the flight-number) which is interpreted as the key. Alternatively, a Topic can be associated with a unique data-stream (e.g., an Alert) in the case where the Topic does not define any keys.
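The radar example above can be sketched as a toy keyed cache. This is an illustrative model, not the DDS API: the key field (`flight_number` here) is extracted from the data itself, and a new value for a known key supersedes the old one, as a DDS middleware would do with a KEEP_LAST history of depth 1:

```python
# Illustrative sketch (not the DDS API): a keyed Topic cache that keeps
# the most current value per instance. The key field name and sample
# layout are invented for this example.

class KeyedTopicCache:
    def __init__(self, key_field):
        self.key_field = key_field
        self.instances = {}                  # key value -> latest sample

    def write(self, sample):
        key = sample[self.key_field]         # the key lives inside the data
        self.instances[key] = sample         # new value supersedes the old

    def read(self, key):
        return self.instances.get(key)

tracks = KeyedTopicCache(key_field="flight_number")
tracks.write({"flight_number": "BA117", "x": 10.0, "y": 4.0})
tracks.write({"flight_number": "AF006", "x": -2.0, "y": 9.5})
tracks.write({"flight_number": "BA117", "x": 11.2, "y": 4.1})  # update BA117

print(tracks.read("BA117"))   # only the most current position survives
```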

An important aspect of the DDS standard is the pervasive use of Quality of Service (QoS) to configure the system and the introduction of middleware-brokered QoS contracts between publishers (who offer a maximal level for each QoS policy) and subscribers (who request a minimum level for each QoS policy).

QoS contracts provide the performance predictability and resource control required by real-time and embedded systems while preserving the modularity, scalability and robustness inherent to the anonymous publish-subscribe model.
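The request-offered contract can be illustrated with a toy compatibility check over two policies. This is a simplified sketch, not the real DDS matching algorithm, and the policy encodings are assumptions, but it captures the rule: the writer's offer must meet or exceed what the reader requests, policy by policy:

```python
# Illustrative sketch of request-offered QoS matching (not the real DDS
# algorithm). Policy names echo DDS; the encodings are invented.

RELIABILITY_ORDER = {"BEST_EFFORT": 0, "RELIABLE": 1}

def qos_compatible(offered, requested):
    # Reliability: a RELIABLE offer satisfies a BEST_EFFORT request,
    # but not the other way around.
    if RELIABILITY_ORDER[offered["reliability"]] < RELIABILITY_ORDER[requested["reliability"]]:
        return False
    # Deadline: the writer must promise updates at least as often as the
    # reader expects (a smaller period is a stronger offer).
    if offered["deadline_period"] > requested["deadline_period"]:
        return False
    return True

writer = {"reliability": "RELIABLE", "deadline_period": 0.1}   # seconds
print(qos_compatible(writer, {"reliability": "BEST_EFFORT", "deadline_period": 0.5}))  # True
print(qos_compatible(writer, {"reliability": "RELIABLE", "deadline_period": 0.05}))    # False
```

Because the check happens when publications and subscriptions are declared, incompatible endpoints simply never communicate, rather than failing unpredictably at run time.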

Using DDS for Data, Message, or Event propagation
The information transferred by data-centric communications can be classified into: Signals, States, and Events/Messages.

Signals represent data that is continuously changing (such as the readings of a sensor). Signal publishers typically set the RELIABILITY QoS to 'best-effort' and the HISTORY QoS to KEEP_LAST.

State-data represents the state of a set of objects (or systems) codified as the most current value of a set of data attributes (or data structures). The state of an object does not necessarily change with any fixed period. Fast changes may be followed by long intervals without change. Consumers of “state data” are typically interested in the most current state.

Moreover, as the state may not change for a long time, the middleware will have to ensure that the most current state is delivered reliably. In other words, if a value is missed, it is not generally acceptable to wait until the value changes again. State-data publishers typically set the RELIABILITY QoS to 'reliable' and the HISTORY QoS to KEEP_LAST.

Events/Messages represent streams of values, each of which has an individual meaning that is not subsumed by subsequent values. Event/Message publishers typically set the RELIABILITY QoS to 'reliable' and the HISTORY QoS to KEEP_ALL.
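The difference between the two HISTORY settings can be sketched with a pair of toy caches. This is an illustration of the semantics, not the DDS API: KEEP_LAST retains only the newest `depth` samples, silently dropping older ones, while KEEP_ALL retains every sample until the reader takes it:

```python
# Illustrative sketch (not the DDS API) of the HISTORY QoS semantics.
from collections import deque

def make_history(kind, depth=1):
    # deque(maxlen=n) silently evicts the oldest sample once full,
    # which is exactly the KEEP_LAST behavior; an unbounded deque
    # models KEEP_ALL.
    return deque(maxlen=depth) if kind == "KEEP_LAST" else deque()

signal_cache = make_history("KEEP_LAST", depth=1)   # sensor readings
message_cache = make_history("KEEP_ALL")            # alerts: every one matters

for reading in [20.1, 20.4, 21.0]:
    signal_cache.append(reading)
    message_cache.append(reading)

print(list(signal_cache))    # [21.0] -- only the most current value
print(list(message_cache))   # [20.1, 20.4, 21.0] -- the full stream
```

A slow subscriber to the signal topic thus sees only the freshest reading, while a subscriber to the message topic eventually sees every sample.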

DDS: A simpler real-time data model
The DDS standard enables applications to use a much simpler programming model when dealing with distributed data-centric applications. Rather than developing custom event/messaging schemes or artificially creating wrapper CORBA objects to access data remotely, the application can identify the data it wishes to read and write using a simple topic-name, and use a data-centric API to directly read and write the data.
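The flavor of that programming model can be sketched with a few toy classes. The class and method names below echo DDS entities but are invented for illustration; they are not the standardized DDS bindings:

```python
# A toy sketch of the data-centric programming model. Class names echo
# DDS entities (Topic, DataWriter, DataReader) but this is not the
# standardized API.

class Topic:
    def __init__(self, name):
        self.name = name
        self.readers = []

class DataReader:
    def __init__(self, topic):
        self.samples = []
        topic.readers.append(self)           # declare the subscription up front

    def take(self):
        taken, self.samples = self.samples, []
        return taken                         # read and remove received samples

class DataWriter:
    def __init__(self, topic):
        self.topic = topic

    def write(self, sample):
        for reader in self.topic.readers:    # "middleware" delivers to subscribers
            reader.samples.append(sample)

temperature = Topic("EngineTemperature")
reader = DataReader(temperature)             # subscribe first...
DataWriter(temperature).write({"celsius": 92.5})   # ...then publish
print(reader.take())                         # [{'celsius': 92.5}]
```

The application deals only in topics and typed data values; discovery, delivery, and QoS enforcement are the middleware's job.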

Gerardo Pardo-Castellote, Ph.D. is co-chair of the DDS Working Group and Chief Technology Officer of Real-Time Innovations, Inc.

