From telecom infrastructure equipment and automobiles to medical instruments and industrial control systems, today's distributed systems are running increasingly sophisticated programs, spread across a larger number of processors, operating systems, and interconnects. One of the greatest challenges facing developers of distributed applications is creating the overarching communications framework needed to integrate platform components and provide services to applications throughout the network.
Traditionally, in order to maximize performance and efficiency, designers of distributed systems have used a mix of IPC (interprocess communication) technologies to handle local (processes on the same node) and remote (processes on different nodes) IPC. Most designers, for example, use the operating system's native IPC services to maximize performance for local IPC, while using standard, more versatile IPC technologies like TCP to handle systemwide remote interprocess communication, which occurs less frequently and has more forgiving latency and throughput requirements.
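The traditional split can be pictured as a small dispatch decision made at endpoint-creation time. The sketch below (illustrative only; the function name and structure are ours, not from any product) chooses a Unix-domain socket for a same-node peer and TCP for a remote one, so every sender must know, and hard-code, where its peer lives:

```python
import socket

# Illustrative sketch of the traditional two-transport split: the OS's
# native low-overhead IPC (a Unix-domain socket here) for same-node peers,
# and TCP for peers on other nodes. The helper name is hypothetical.

def make_endpoint(peer_is_local):
    if peer_is_local:
        # Native, low-overhead local IPC (Unix-domain stream socket).
        return socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    # Versatile, routable transport for remote peers.
    return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

local = make_endpoint(True)
remote = make_endpoint(False)
print(local.family, remote.family)
local.close()
remote.close()
```

The drawback is visible in the signature: application code carries a locality flag, so moving a peer to another node means touching the sender.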
What the embedded industry needs is a standard, open source, high-performance IPC that can be used systemwide for both local and remote IPC. Using a single systemwide IPC to link application processes and platform software components would have a number of advantages. A single IPC will:
- Significantly reduce time to market, not only for initial product development and integration, but for future upgrades over multiple generations of product.
- Make systems more scalable, enabling designers to reconfigure their applications, add/delete nodes, and relocate software components without having to modify the application code.
- Make applications and system software easier to migrate to new hardware, enabling equipment makers to reuse software across multiple generations of product.
- Be easier to learn, thereby expediting development.
- Make application code easier to read and understand, thereby reducing complexity and making it easier for equipment makers and support teams to fix problems and perform upgrades throughout the life of the system.
The ideal IPC should utilize a high-performance, lightweight direct message passing technology that provides the performance and versatility needed to satisfy local and remote IPC requirements across all CPU (including multicore devices), operating-system, and interconnect boundaries. It should provide dependable, high-speed transport for both the control and data planes over reliable as well as unreliable interconnects and protocols. It should support the encapsulation of other bearer protocols, such as TCP, UDP, and SCTP, for data transport. It should provide supervision and failure reporting for designated connections, and built-in support for redundant links (both for physical CPU interconnects and for logical connections between endpoints). It should scale from DSPs and microcontrollers to 64-bit CPUs. And it should support any distributed system topology, from a single processor on a single blade to large networks with complex cluster topologies deployed on hundreds of processors in a multirack system.

Transparency facilitates scalability and portability
To maximize scalability and portability, the IPC services should provide transparency. In other words, the IPC services must be independent of the underlying hardware, operating system, physical interconnect, and network topology. This transparency enables application processes distributed across multiple operating systems, processors, and interconnects to communicate in a seamless fashion, as if they were residing on a single processor under a single operating system.
Transparent IPC services make complex distributed systems easier to partition, debug, and maintain, and simplify the integration of third-party software components. When combined with the ability to dynamically discover communication endpoints, this transparency gives developers the freedom to locate applications on any node in the system, and to change the configuration at run time. The resulting system provides a high degree of flexibility, enabling developers and service providers to dynamically change and scale the system configuration, redistribute applications across multiple blades, and upgrade the hardware with minimal changes to the application code.
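The key mechanism behind this transparency is name-based addressing: a sender asks for an endpoint by name, and a discovery step resolves the name to wherever that endpoint currently lives. The following minimal sketch models the idea (the `Registry`, `publish`, and `hunt` names are ours, loosely echoing LINX's hunt-by-name concept; this is not the LINX API):

```python
import queue
import threading

# Minimal sketch of location-transparent addressing: senders address peers
# by name; a registry resolves the name to whatever endpoint currently
# backs it. Class and function names here are illustrative assumptions.

class Registry:
    def __init__(self):
        self._endpoints = {}
        self._lock = threading.Lock()

    def publish(self, name, endpoint):
        with self._lock:
            self._endpoints[name] = endpoint

    def hunt(self, name):
        with self._lock:
            return self._endpoints.get(name)

registry = Registry()

# A "local" endpoint is modeled as a queue; a remote one could wrap a
# socket instead -- the sender's code is identical either way.
log_endpoint = queue.Queue()
registry.publish("logger", log_endpoint)

def send(name, message):
    endpoint = registry.hunt(name)  # resolve by name, not by location
    endpoint.put(message)

send("logger", "link up on blade 3")
print(log_endpoint.get())  # -> link up on blade 3
```

Because the sender never names a node or transport, the "logger" endpoint can be rebound to a different blade at run time without touching the sending code, which is exactly the reconfiguration freedom described above.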
Another way that transparent, systemwide IPC services simplify distributed design is by making it easier to combine multiple operating systems. Some network equipment providers, for example, may elect to use Linux on one set of blades to host high-level management and supervisory control applications, while using an RTOS like Enea OSE or Enea OSEck on other blades to host real-time control and DSP-based media processing applications.
Direct message passing
To maximize performance, the IPC services should utilize direct message passing, which enables application processes to communicate directly with each other on a peer-to-peer basis, without having to synchronize through intermediate mechanisms such as mailboxes, semaphores/mutexes, event flags, Unix-style signals, or even sockets. This direct approach also simplifies communications and facilitates logical process separation, thereby enhancing reliability and simplifying fault recovery, particularly in distributed systems using multicore devices and complex network topologies.
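The pattern can be sketched with two peers exchanging typed messages over a point-to-point channel, blocking on receive rather than coordinating through shared flags or semaphores. In this minimal sketch (not the LINX API; threads stand in for distributed processes, and a pipe stands in for an IPC link), each peer simply sends to and receives from the other:

```python
import threading
from multiprocessing import Pipe

# Minimal sketch of direct message passing: two peers exchange typed
# messages over a point-to-point channel. Receiving blocks until a message
# arrives; there are no shared mailboxes, event flags, or semaphores.

def worker(conn):
    tag, value = conn.recv()          # block until the request arrives
    conn.send(("reply", value * 2))   # answer the peer directly

a_end, b_end = Pipe()
peer = threading.Thread(target=worker, args=(b_end,))
peer.start()

a_end.send(("request", 21))           # send straight to the peer
tag, value = a_end.recv()             # blocking receive; no polling
peer.join()
print(tag, value)  # -> reply 42
```

Note that synchronization falls out of the messaging itself: the blocking receive is the only coordination point, which is what keeps the peers logically separated and makes a failed peer easy to detect and recover from.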
As equipment makers gravitate toward COTS technology in order to reduce cost and time to market, they're insisting on standard, open interfaces that make COTS technology easy to access and integrate. Open source IPC solutions like Enea LINX provide a standard, systemwide IPC framework that makes it easy for equipment makers to develop distributed, heterogeneous platform software and applications utilizing best-of-breed components from multiple vendors. LINX is available for Linux free of charge at https://sourceforge.net/projects/linx, further simplifying multivendor integration and making this technology available to the broader embedded systems community.
Mike Christofferson is director of product management at Enea.