High Level Architecture (HLA) has emerged as a widely adopted middleware standard for simulator-to-simulator connectivity and data sharing, especially in the defense system market.
But until recently there was no viable open-standard middleware solution able to effectively address the even more demanding requirements of real-time data distribution within the individual simulator, where animated images, sound reproduction, and device feedback must be produced and controlled as realistically and responsively as the real world. Chasing this ideal has constantly pushed the industry forward in many different ways.
Individual simulators have adopted techniques such as multi-processor systems, high-performance graphics cards, and distributed sensors and actuators to approach the desired objective.
Increasingly, such systems consist of many different high-performance processing sub-system units that need to communicate in real time. Although COTS-based open-standard hardware such as VME and Unix/PC systems has been commonly used, until recently the only way to provide software connectivity between applications has been with proprietary solutions.
As such systems get larger and more distributed, the performance issues of latency, determinism and system bottlenecks are becoming ever more important in maintaining the simulation experience.
A further issue is the critical need to ensure that investment in application software can be re-used effectively across multiple projects. The combination of these vital but difficult issues is driving the need for more formalized software structures and a growing move towards COTS middleware adoption.
These issues have to be tackled at two levels: inside the simulator (between its system components) and outside it (for distributed simulator-to-simulator connectivity).
This article will examine the problem in greater detail, and introduce an integrated design approach based on open standards middleware that addresses these issues at both levels.
One of the key issues in achieving the best performance in a simulator design is optimization of the 'man-in-the-loop' function. This control loop has to respond in real time to the changing environment and operator input, possibly within a network of simulators.
The demands on network throughput will vary: a dynamics model running possibly as fast as 1000 Hz may exchange information with an I/O device at up to 100 Hz, giving a potential network throughput requirement of one data packet every 10 ms; the same dynamics model may also feed an image display device that needs data input at anything up to 60-80 Hz to match the screen refresh.
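The arithmetic behind these figures is simple rate-to-period conversion; a quick sketch, using the rates quoted above (the helper function itself is just an illustrative convenience, not part of any middleware API):

```python
def period_ms(rate_hz: float) -> float:
    """Update period in milliseconds for a given refresh rate."""
    return 1000.0 / rate_hz

# Rates quoted in the text and their per-packet time budgets:
dynamics_period = period_ms(1000)   # 1.0 ms per dynamics step
io_period = period_ms(100)          # 10.0 ms per I/O exchange
frame_period = period_ms(60)        # ~16.7 ms per frame at 60 Hz

print(dynamics_period, io_period, round(frame_period, 1))  # → 1.0 10.0 16.7
```

Any middleware layer sitting between these components must deliver each sample comfortably inside the receiver's period, which is what drives the latency requirements discussed below.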
If data appears outside of this time interval it will have to be ignored; a common response is to reduce the simulation fidelity to allow state recovery, a situation that simulator developers always seek to avoid.
As mentioned previously, the issue of simulator-to-simulator data sharing has already been addressed by the development of an open standard middleware for linking simulators, promoted by the US Department of Defense and standardized through the Simulation Interoperability Standards Organization (SISO).
The emergence of HLA
This is the High Level Architecture (HLA) and its application programming interface, which is already used by a number of simulator developers to connect systems together.
HLA is a publish/subscribe architecture in which elements publish data onto the bus to be picked up by other units that subscribe to that data, commonly referred to as 'federated data'. This allows a system to be distributed, avoiding the bottlenecks of a client-server architecture and allowing the system to be more easily scalable.
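The decoupling this describes can be sketched with a minimal topic bus in Python (a conceptual toy, not the HLA or DDS API): publishers and subscribers share only a topic name and never reference each other.

```python
from collections import defaultdict


class Bus:
    """Minimal topic-based publish/subscribe bus: producers and
    consumers are anonymous and coupled only through topic names."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to receive every sample on `topic`."""
        self._subs[topic].append(callback)

    def publish(self, topic, sample):
        """Deliver `sample` to all current subscribers of `topic`."""
        for cb in self._subs[topic]:
            cb(sample)


bus = Bus()
received = []
bus.subscribe("aircraft_state", received.append)   # a subscriber
bus.publish("aircraft_state", {"alt_ft": 30000})   # a publisher
print(received)  # → [{'alt_ft': 30000}]
```

Because neither side holds a reference to the other, either can be added or removed without touching the rest of the system, which is the property that makes the model scale.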
The HLA Run Time Infrastructure (HLA-RTI) is most commonly used to link simulator systems together in a wide area network, and in some cases it is even being seen as a way to link the processing elements within individual simulators.
This approach has the advantage of avoiding having to translate the data from one format to another. Unfortunately, the HLA-RTI wasn't designed to provide the speed and detailed control of real-time performance required by systems that need consistently low latencies and deterministic responses.
As a result, simulator developers often end up creating proprietary protocols and methodologies to enable data distribution with adequate real-time performance within the simulator.
However, there is another open standard that also uses a publish/subscribe architecture and provides a much better fit for these real-time requirements. Data Distribution Service (DDS) is an open-standard, data-oriented middleware optimized for hard real-time systems, with the low latency and quality-of-service capabilities needed for speed and fine control of real-time performance built in. Its similar structure allows it to work very well alongside, and in co-operation with, the existing HLA standard.
What is DDS?
The Data Distribution Service (DDS) is a newly adopted open specification from the Object Management Group (OMG), a group of around 800 members, for data-centric publish/subscribe communications in real-time systems. These include applications in aerospace and defense, distributed simulation, industrial automation, distributed control, robotics, telecom, and networked consumer electronics.
The publish-subscribe (P-S) model connects anonymous information producers (publishers) with information consumers (subscribers). The overall distributed application is composed of processes, each running in a separate address space and possibly on different computers. The API and Quality of Service (QoS) are chosen to balance predictable real-time behavior and implementation efficiency/performance.
The specification provides a platform-independent model (PIM) that can then be mapped into a variety of platform-specific models (PSMs) and programming languages.
DDS draws upon common practice in existing publish/subscribe architectures including HLA, the OMG event notification service, Java Messaging Service (JMS), and experience with Real-Time Innovations' NDDS product. Many enhancements have been specifically designed to provide the lower latency and higher determinism required by real-time distributed systems.
Like HLA, DDS uses a publish/subscribe model where data dissemination between producers and consumers may be one-to-one, one-to-many, many-to-one, or many-to-many. The communication model is decentralized, with publishers and subscribers loosely coupled and having no knowledge of each other.
This means that publishers and subscribers can join and leave dynamically, providing an ideal platform for a flexible and scalable simulation system architecture. This data-centric development approach, enabled through a standard API, also enables modularization of simulator development and thus the potential for significant application code re-use.
DDS builds on the definition of HLA to add support for object modelling and ownership management. It addresses a number of performance-related issues not dealt with by HLA, such as a rich set of Quality of Service (QoS) policies, a strongly typed data model, and support for state propagation including coherent and ordered data distribution.
In particular, the QoS capability of DDS allows designers to maintain priority levels for data, giving a fine degree of control that ensures minimum-latency requirements are met across the distributed system.
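As an illustration of what such control means in practice, a deadline-style policy can be modelled as a simple freshness check on each incoming sample (a conceptual sketch only, not the DDS QoS API): samples that arrive outside their time budget are discarded rather than used, matching the behavior for late data described earlier.

```python
def within_deadline(sample_ts_ms: float, now_ms: float,
                    deadline_ms: float) -> bool:
    """Accept a sample only if its age is within the deadline budget;
    stale samples are discarded instead of corrupting the control loop."""
    return (now_ms - sample_ts_ms) <= deadline_ms


# A 10 ms budget, matching the I/O exchange rate quoted earlier:
print(within_deadline(100.0, 108.0, 10.0))  # → True  (sample is 8 ms old)
print(within_deadline(100.0, 115.0, 10.0))  # → False (sample is 15 ms old)
```

In a real DDS system this kind of policy is declared per topic and enforced by the middleware itself, so the application is notified of missed deadlines instead of hand-rolling the check.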
An enhanced version of DDS, called NDDS 4.0, is designed to maximize the benefits of the deterministic data environment provided by the standard (see Figure 1, below). The latest version provides pluggable transports that support any media, from wireless to switched fabrics with programmable parameters, programmable Quality of Service (QoS) and customizable data types.
Figure 1: The NDDS 4.0 architecture
Pluggable transports have been incorporated into RTI's implementation of the DDS standard. When combined with the QoS mechanisms of DDS, this capability provides performance “tuneability” of data paths to the underlying simulator connectivity fabric. This also means that any transport medium can be used, from the widely used TCP/IP protocols to a switched fabric for higher-performance applications.
NDDS 4.0 also uses direct end-to-end messaging, which eliminates context-switch delays. Combined with the support for message prioritization, this ensures a predictable and low level of data latency, while pre-allocated memory prevents allocation and fragmentation delays that would otherwise undermine the deterministic responsiveness of the sub-systems.
Enhancements to the memory handling further improve the deterministic performance of NDDS: there are no shared threads or memory that require locking, and dedicated buffers prevent the different processes from corrupting shared memory.
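The pre-allocation idea behind this determinism can be sketched as a simple buffer pool (a hypothetical illustration, not NDDS internals): all buffers are created once at startup, so the steady-state data path never touches the allocator, removing one source of timing jitter.

```python
class BufferPool:
    """Fixed pool of pre-allocated buffers; acquire/release never
    allocate after construction, keeping the hot path deterministic."""

    def __init__(self, count: int, size: int):
        # All memory is claimed up front, at startup.
        self._free = [bytearray(size) for _ in range(count)]

    def acquire(self):
        # Returns None when exhausted rather than falling back to
        # allocation, so worst-case timing stays bounded.
        return self._free.pop() if self._free else None

    def release(self, buf):
        self._free.append(buf)


pool = BufferPool(count=2, size=64)
a, b = pool.acquire(), pool.acquire()
print(pool.acquire() is None)  # → True: pool exhausted, no new allocation
pool.release(a)
print(pool.acquire() is a)     # → True: buffers are recycled, not reallocated
```

Sizing such a pool for the worst-case number of in-flight samples is exactly the kind of resource limit that DDS QoS policies let the designer declare explicitly.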
Another advantage of NDDS 4.0 for the simulator developer is that there is no server process to crash, and applications do not share address space through the middleware, so operating system processes can be isolated, fully protecting all applications.
This open-standards approach is already in use by several simulator developers, including CAE for flight simulators, where NDDS is used to link subsystems via high-speed IEEE 1394 FireWire links in its SimXXI product line. Simulator developer Nextel Engineering Systems in Madrid, Spain, is also using NDDS to link together the different elements within its simulator architecture without compromising the performance or scalability of its Simware kernel.
Dr. Rajive Joshi is a Principal Engineer at Real-Time Innovations, Inc., specializing in the design of distributed and real-time systems, emphasizing object-oriented and component-based software architecture.