Many embedded systems now operate within and depend on webs of wired and wireless connectivity to each other, to enterprise networks and to the broader Internet. As a result, real-time embedded system developers face the challenge of interfacing or 'bridging' to the Global Information Grid (GIG) and maintaining deterministic behavior while moving large quantities of information over non-deterministic network transports.
This involves providing an efficient 'data-path' that allows embedded applications to communicate with enterprise applications. By combining the data-centric technologies of the Data Distribution Service (DDS) protocols and database management systems (DBMS), a viable architectural strategy has emerged. This architectural approach can be used (see Figure 1, below) to facilitate data-centric communication with Service Oriented Architecture (SOA) messaging solutions such as JMS and Web Services.
By bridging the embedded with the enterprise, valuable real-time tactical information can be shared across the broader electronic community. But it's critical that the act of bridging information between enterprise and embedded systems not compromise the real-time deterministic performance of mission-critical embedded systems.
|Figure 1. RTI Distributed Data Management framework provides embedded to Enterprise (e2E) bridge.|
Network-Centric Computing Model
A key element in providing this critical linkage between the soft and non-real-time enterprise and deterministic, hard real-time embedded devices is a network-centric computing model that facilitates localized management of distributed data as an integral part of the real-time application, without relying on a central server topology (for scalability reasons).
The model's topology is peer-to-peer rather than client/server, allowing system architectures to be designed, from the computing nodes' perspective, with no single point of failure (i.e. no central server). Network-centric computing is based on all computing nodes being networked such that real-time middleware abstracts the hardware-specific details, so the software design doesn't require knowledge of the underlying network topology. This computing model facilitates the design of location-transparent software, which directly benefits software module reuse.
Within the network-centric computing model, a huge challenge is managing the distributed data throughout the entire system. Real-time data must be captured, stored, retrieved, queried, and managed such that the proper information can be quickly accessed by all interested participants within the system.
This data management capability can be viewed as, but is not limited to, a distributed real-time database where peer-to-peer (P2P) networking and real-time in-memory database management systems (DBMS) are leveraged to provide a solution that manages storage, retrieval, and distribution of fast-changing data in dynamic network environments. Figure 2, below, provides a simple illustration of the data management architecture. The benefit of the distributed database model is that it guarantees continuous real-time availability of all information critical to the enterprise.
|Figure 2. Real-time Distributed Database Architecture.|
As Figure 2 illustrates, the net-centric distributed database architecture is complemented by support for the leading industry standards for application programming interfaces, data modeling, data manipulation, and high-performance, data-centric, publish-and-subscribe communication, such as ODBC, JDBC, SQL, and DDS. These familiar interfaces minimize the learning curve and facilitate quick time-to-market. In addition, the use of standards greatly simplifies integration with existing infrastructure solutions.
DDS. The OMG's DDS standard, which is now a mandated DoD technology within the DISR (previously JTA), is quickly gaining market traction due to its ability to abstract the complexities of the network, facilitate the design of location-transparent applications, and provide application-level control of data QoS within a publish/subscribe communication paradigm.
By utilizing DDS technology for node-to-node communication, the complexity of managing a dynamic network environment, such as ad-hoc wireless networks, is removed from the application developer. It's imperative that a network-centric system accommodate network dynamics without adversely affecting the computing nodes comprising the overall distributed system.
SQL/ODBC/JDBC. Today's DBMS solutions utilize SQL and ODBC (or JDBC), widely accepted standards within the data management community. SQL is used for both data definition and data manipulation, while ODBC/JDBC is used as a Call Level Interface for the C/C++ (or Java) programming language.
Database Synchronization Framework
What is necessary is an integrated database synchronization framework for distributed real-time information management that can implement a distributed shared database where fragments of the shared database are kept in the local data caches (i.e. local memory) of the hosts that comprise the network, on an as-needed basis.
Essentially a DDS-enabled distributed database system that operates across an extendable network without the access bottlenecks associated with a central server-based model, the framework allows server nodes to keep complete copies of a database's data store in local memory, reducing the need to move data to and from a disk during operations.
It also permits synchronization of database copies on multiple nodes, creating a distributed database rather than a central server. This enables fast access throughout a distributed system, independent of the number of network nodes. It allows optimized access to data regardless of its source and provides scalability for the network utilizing the database. In addition, such a database synchronization framework virtually eliminates server bottlenecks while providing fault tolerance. With the data available on multiple nodes at any given time, a failure at one node will neither damage data integrity nor impede data access at any other system node.
With such a framework, software applications gain reliable, instant access across dynamic networks to information that changes in real-time. Such an architecture uniquely integrates peer-to-peer networking (DDS) and real-time, in-memory database management systems (DBMS) into a complete solution that manages storage, retrieval, and distribution of fast-changing data in dynamically configuring network environments.
It guarantees continuous availability in real-time of all information that is critical to the enterprise. DDS technology is employed to enable a truly decentralized data structure for the distributed database management system (DBMS), while DBMS technology is used to provide persistence for real-time DDS data.
The power of the model is that embedded applications don't need to know SQL or ODBC semantics, and enterprise applications aren't forced to know publish/subscribe semantics. This is a critical point when building large systems: get the data to where it needs to go in a format that is native to the developers. Thus the database becomes a combination of the data tables distributed throughout the system. When a node updates a table by executing an SQL INSERT, UPDATE, or DELETE statement on the table, the update is proactively pushed, via real-time publish-and-subscribe messaging, to other hosts that require local access to the same table. This architectural approach enables real-time replication and synchronization of any number of remote data tables.
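The push-on-update flow just described can be sketched in a few lines. This is a minimal illustration, not RTI's implementation: SQLite stands in for the in-memory DBMS, and `publish` is a hypothetical stand-in for a DDS Data Writer.

```python
import sqlite3

published = []  # stand-in for the DDS wire: records every pushed change

def publish(topic, row):
    """Hypothetical DDS Data Writer: push a changed row to remote subscribers."""
    published.append((topic, row))

def execute_and_push(conn, table, sql, params=()):
    """Run an INSERT/UPDATE/DELETE, then proactively push the change.
    Simplified: pushes the table's current rows rather than a minimal delta."""
    conn.execute(sql, params)
    conn.commit()
    for row in conn.execute(f"SELECT * FROM {table}"):
        publish(table, row)

conn = sqlite3.connect(":memory:")  # in-memory database, as described above
conn.execute("CREATE TABLE track (id INTEGER PRIMARY KEY, x REAL, y REAL)")
execute_and_push(conn, "track", "INSERT INTO track VALUES (?, ?, ?)", (1, 10.0, 20.0))
print(published)  # [('track', (1, 10.0, 20.0))]
```

The point of the sketch is only the direction of flow: the SQL statement runs locally, and the resulting change is immediately pushed outward rather than waiting to be polled.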
From a practical perspective, it is important to recognize that a relational database can now be implemented on multiple computing nodes. Furthermore, within such a data synchronization framework, applications view the combination of these distributed data tables as a single 'distributed database.' Figure 3, below, illustrates how this approach unifies the global data space for both embedded and enterprise applications.
|Figure 3. Unifying the Global Data Space.|
DDS-DBMS and DBMS-DDS Integration
DDS-DBMS integration (i.e. DDSQL) technology facilitates the bridging of embedded real-time systems with enterprise-based systems by allowing relational database table updates to be propagated, in real-time, to the embedded nodes.
The embedded node utilizes the DDS API and subscribes to a DDS topic associated with a data table. When the table is altered, either by an enterprise application (via SQL) or an application utilizing the DDS API, the local table is updated, and the update information is published, via DDS, for consumption by all interested DDS subscribers. This allows information to be seamlessly bridged from an enterprise application to an embedded real-time application.
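The receiving side of that bridge can be sketched as an upsert into the local table. Again this is an illustrative sketch under assumptions: SQLite models the local in-memory table, and `on_sample` plays the role of a DDS subscriber callback (the real DDS listener API is not shown).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE track (id INTEGER PRIMARY KEY, x REAL, y REAL)")

def on_sample(conn, table, sample):
    """Hypothetical DDS listener callback: apply an incoming sample to the
    local table so local SQL readers see the remote update."""
    conn.execute(
        f"INSERT INTO {table} VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET x = excluded.x, y = excluded.y",
        sample,
    )
    conn.commit()

# Updates to the same topic instance arrive from a remote enterprise application:
on_sample(conn, "track", (7, 1.5, 2.5))
on_sample(conn, "track", (7, 3.0, 4.0))  # newer position for instance 7
print(conn.execute("SELECT * FROM track").fetchall())  # [(7, 3.0, 4.0)]
```

The upsert keeps one row per topic instance, so a local SQL query always sees the latest published value.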
In addition, the framework allows DDS publications and/or subscriptions to be captured and logged into the in-memory database, in real-time, capturing all incoming or outgoing publish/subscribe activity. This allows live network traffic captures to be logged directly into RAM, thus facilitating post-processing and analysis of system communication activity. This logging capability provides the primitives for building distributed system debug and trace tools, message auditing, as well as design tools that can capture and play back messages in order to re-create the original system activity for the purposes of lab testing and debug.
Figure 4, below, illustrates the functioning of both table synchronization and DDS-DBMS integration within such a framework.
|Figure 4. System Architecture Utilizing RTI Distributed Data Management Capabilities.|
DDSQL Building Blocks
To provide application developers further control over the global data space, the data synchronization framework also provides two key bridge components: DDS-DBMS and DBMS-DDS (Figure 5, below).
The DDS-DBMS Bridge monitors the application's published data and incoming (subscribed) data. It enables automatic storage of DDS topic data within the DBMS by mapping DDS topics to tables within the DBMS. As each topic instance is published, the topic instance is likewise inserted as a row in the table. This bridging provides the functionality necessary to log both incoming and outgoing message traffic, in real-time, without suffering the performance penalty typically associated with disk-based databases.
|Figure 5. DDSQL Bridge Components.|
The DBMS-DDS Bridge manages the automatic publication of changes made to tables in the DBMS. It will also apply changes received via DDS to tables in the DBMS. This bridge allows table changes, whether made by an SQL enterprise application or by a DDS-enabled application, to be 'pushed' to a pure DDS subscriber, in real-time. This bridge component provides table event bridging from the enterprise application to the embedded application.
Each of these bridge elements contains a Publication and a Subscription component, yielding four components in all: DDS-DBMS Publication, DDS-DBMS Subscription, DBMS-DDS Publication, and DBMS-DDS Subscription.
DDS-DBMS Publication. This component consists of a DDS Data Writer and a DBMS Propagator, a collection of functionality that disseminates topic instances as well as propagating the outgoing DDS samples to a DBMS table. The propagator performs the DDS-DBMS data-type conversion automatically for the user data types and creates the associated database table, if it doesn't already exist.
It's important to note that node-to-node message latency is not negatively affected by the presence of the Propagator functionality, because the topic instance is disseminated, via the DDS Data Writer, prior to being sent to the DBMS Propagator. This bridge component facilitates the local logging of DDS publications destined for remote subscribing applications.
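The ordering argument above, write to the wire first and log second, can be made concrete with a small sketch. The function names and the SQLite-backed log are assumptions for illustration; only the ordering reflects the description in the text.

```python
import sqlite3

events = []  # records the order of operations: wire send vs. local DBMS insert

def dds_write(topic, sample):
    """Hypothetical DDS Data Writer: the sample goes on the wire immediately."""
    events.append("wire")

def dbms_propagate(conn, topic, sample):
    """Hypothetical DBMS Propagator: log the outgoing sample locally,
    creating the table if it doesn't already exist."""
    conn.execute(f"CREATE TABLE IF NOT EXISTS {topic} (id INTEGER, x REAL)")
    conn.execute(f"INSERT INTO {topic} VALUES (?, ?)", sample)
    events.append("dbms")

def write(conn, topic, sample):
    # Disseminate first, then log: node-to-node latency is unaffected
    # by the local propagation step.
    dds_write(topic, sample)
    dbms_propagate(conn, topic, sample)

conn = sqlite3.connect(":memory:")
write(conn, "track", (1, 5.0))
print(events)  # ['wire', 'dbms'] -- the sample was sent before it was logged
```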
DDS-DBMS Subscription. This component includes a 'DBMS Propagator,' which is a collection of functionality that can propagate an incoming DDS sample to a table managed within the DBMS. The propagator provides the necessary functionality to perform the DDS-DBMS data conversion, and create the table if it doesn't yet exist. It also includes a 'DBMS Change Filter,' which filters out samples from DDS that have already been applied to the table to be updated.
The DDS Data Reader will receive the incoming publication, allowing the subscribing application to process the data at the application level. The publication is then propagated to the DBMS before the read/take function returns. The DBMS Propagator performs the conversion from the DDS data representation to the DBMS table data representation, and inserts a row in the database for each sample of a topic instance.
If the row already exists, the row is updated. If history configuration is employed, then each publication will be stored as a separate row in the table. The DBMS Change Filter is not used in this scenario; it only plays a role when the DBMS-DDS Bridge is active. This bridge component facilitates the logging of DDS publications from remote applications.
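The update-in-place versus history distinction can be sketched as follows, again with SQLite standing in for the in-memory DBMS and `propagate` as a hypothetical stand-in for the DBMS Propagator:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE live (id INTEGER PRIMARY KEY, x REAL)")  # no history
conn.execute("CREATE TABLE hist (id INTEGER, x REAL)")              # history enabled

def propagate(conn, table, sample, history=False):
    """Hypothetical DBMS Propagator: without history, a topic instance maps
    to one row that is updated in place; with history, every sample is kept
    as a separate row."""
    if history:
        conn.execute(f"INSERT INTO {table} VALUES (?, ?)", sample)
    else:
        conn.execute(
            f"INSERT INTO {table} VALUES (?, ?) "
            "ON CONFLICT(id) DO UPDATE SET x = excluded.x",
            sample,
        )
    conn.commit()

for sample in [(1, 1.0), (1, 2.0), (1, 3.0)]:  # three samples, one instance
    propagate(conn, "live", sample)
    propagate(conn, "hist", sample, history=True)

print(conn.execute("SELECT * FROM live").fetchall())        # [(1, 3.0)]
print(conn.execute("SELECT COUNT(*) FROM hist").fetchone()) # (3,)
```

Three samples of the same instance leave one row in the non-history table but three rows in the history table.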
DBMS-DDS Publication. This component includes a 'DBMS Monitor,' 'DBMS Change Filter,' and DDS Data Writer. The DBMS Monitor watches for table changes within the DBMS. When a change in a DBMS table is detected, the DBMS Monitor forwards the information to the Change Filter. The filter is used to filter out those changes that have already been distributed via DDS. The altered table row is then distributed using the DDS Data Writer. The DDS Data Writer does not use any type-specific code; instead, it performs a DBMS-DDS data conversion using the table schema to serialize the row contents directly into the DDS wire format.
Thus an enterprise application can change data within a DBMS table, and ultimately have the table update information published, via a DDS Data Writer, to any interested subscribing applications, whether they are other enterprise applications or real-time embedded systems. As a result, this bridge component facilitates the notification and update of table alterations to a subscribing DDS-based application, thus bridging data from the enterprise application to the embedded system.
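The Monitor, Change Filter, Data Writer pipeline can be sketched as below. The callback names and the set-based filter memory are assumptions for illustration; a real bridge would hook actual table-change notifications.

```python
import sqlite3

published = []
already_distributed = set()  # the DBMS Change Filter's memory

def dds_write(table, row):
    """Hypothetical DDS Data Writer: serialize the row straight to the wire."""
    published.append((table, row))

def on_table_change(table, row):
    """Hypothetical DBMS Monitor callback, invoked when a table row changes;
    the Change Filter drops changes already distributed via DDS."""
    key = (table, row)
    if key in already_distributed:
        return
    already_distributed.add(key)
    dds_write(table, row)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE track (id INTEGER PRIMARY KEY, x REAL)")
conn.execute("INSERT INTO track VALUES (1, 9.0)")  # an SQL enterprise update
for row in conn.execute("SELECT * FROM track"):
    on_table_change("track", row)  # the detected change is published once
    on_table_change("track", row)  # a duplicate detection is filtered out
print(published)  # [('track', (1, 9.0))]
```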
DBMS-DDS Subscription. This component includes a DDS Data Reader, 'DBMS Change Filter,' and a 'DBMS Monitor.' The DDS Data Reader is associated with a table and subscribes to the associated DDS topics. The DDS Data Reader does not use any type-specific code; instead, it performs a DDS-to-DBMS data conversion using the table schema to deserialize the received samples directly from the DDS wire format.
The received sample is applied to the table row as an update by the DBMS Propagator. The DBMS Change Filter mechanism is the same as the one utilized within the DDS-DBMS Subscription component and filters out samples received via DDS that have already been applied to the DBMS table.
So when the DDS Data Reader receives a sample from a remote DDS Data Writer as a result of a table update, the DBMS Change Filter filters out the samples that have already been applied to the DBMS before passing it on to the DBMS Propagator, which sends the incoming DDS topic instance to the appropriate table within the DBMS. Once the table is updated, SQL-based applications can access the changed data.
With such a data synchronization framework, developers now have a choice when accessing the global data space. Updates made via the SQL API will be visible to DDS user applications, and updates made via the DDS API will be visible to DBMS user applications.
These mechanisms offer a unique combination of features: storage of DDS data in DBMS tables; publication of DBMS data via DDS; mapping between IDL and SQL data types; mapping between DDS data samples and DBMS table updates; history; and feedback cancellation.
By allowing automatic storage of DDS data into DBMS tables, changes made via the DDS API are propagated to the associated DBMS, as are the changes detected by DDS. Once the data is propagated to the DBMS table, it can be accessed by a SQL user application via the SQL API.
With the ability to do automatic publication of changes in specified DBMS tables, changes made via the SQL API (i.e. INSERT and UPDATE statements) will be published into the network via DDS. SQL queries will report the user data changes received from the network via DDS.
The automatic mapping between the DDS data type representation and the DBMS schema representation makes it possible to directly translate a DBMS table record to the DDS wire format representation and vice-versa.
Because the DDS type metadata specified in an Interface Definition Language (IDL) file is mapped to a table schema in a DBMS, a DDS topic corresponds to a table in the DBMS, which may be named after the DDS topic name.
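A minimal sketch of such a type mapping is shown below. The particular IDL-to-SQL column pairings and function names are illustrative assumptions, not the bridge's actual mapping; only the idea, deriving a table schema from a topic's IDL fields and naming the table after the topic, comes from the text.

```python
# Illustrative IDL-to-SQL column type pairings -- an assumption for this
# sketch, not the real bridge's mapping.
IDL_TO_SQL = {
    "long": "INTEGER",
    "float": "REAL",
    "double": "DOUBLE PRECISION",
    "string": "VARCHAR",
    "boolean": "BOOLEAN",
}

def table_schema_for_topic(topic, fields):
    """Derive a CREATE TABLE statement from a topic's IDL fields,
    naming the table after the DDS topic."""
    cols = ", ".join(f"{name} {IDL_TO_SQL[idl_type]}" for name, idl_type in fields)
    return f"CREATE TABLE {topic} ({cols})"

ddl = table_schema_for_topic("Track", [("id", "long"), ("x", "double")])
print(ddl)  # CREATE TABLE Track (id INTEGER, x DOUBLE PRECISION)
```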
Since DDSQL can automatically keep track of the history samples of a DDS topic instance, the number of history samples to store for an instance can be specified as a configuration parameter of the DDS-DBMS Bridge. Normally, a topic instance is mapped to a single row in the associated table, but when history is enabled, each sample of a topic instance will be stored as a separate row.
Finally, when data in the DBMS is changed, DDSQL automatically publishes the change via a DDS Publisher. However, since changes made via the DDS API are propagated to the DBMS, 'acoustic' feedback may occur.
DDSQL eliminates this feedback by utilizing a DDS Change Filter and a DBMS Change Filter. Changes received via DDS that have already been applied to the associated DBMS table are automatically filtered out, as are changes in the DBMS that have already been distributed via DDS.
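One way to picture the two filters is to tag each change with the identity of the node that originated it; the node identifier and the tagging mechanism here are assumptions for illustration, since the article does not describe how the real filters recognize their own changes.

```python
MY_GUID = "node-A"  # hypothetical identifier for this node

dds_outbox = []   # changes this node puts on the wire
local_table = {}  # simplified local DBMS table: id -> row

def on_dbms_change(origin, row):
    """DDS Change Filter sketch: only publish changes that originated locally;
    a change applied because it arrived via DDS is not re-published."""
    if origin == MY_GUID:
        dds_outbox.append(row)

def on_dds_sample(origin, row):
    """DBMS Change Filter sketch: only apply samples from other nodes;
    this node's own changes echoing back are dropped."""
    if origin != MY_GUID:
        local_table[row[0]] = row

on_dbms_change(MY_GUID, (1, 2.0))   # local SQL change: published
on_dds_sample(MY_GUID, (1, 2.0))    # our own change echoed back: filtered out
on_dds_sample("node-B", (2, 5.0))   # remote change: applied, not re-published
print(dds_outbox, local_table)  # [(1, 2.0)] {2: (2, 5.0)}
```

Each change crosses the DDS/DBMS boundary exactly once in each direction, which is the feedback cancellation the text describes.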
Mark A. Hamilton is Senior Field Application Engineer at Real-Time Innovations.