On the road to 2028: Embedded needs standard IPC

From telecom infrastructure equipment and automobiles to medical instruments and industrial control systems, today's distributed systems run increasingly sophisticated programs, spread across a growing number of processors, operating systems, and interconnects. One of the greatest challenges facing developers of distributed applications is creating the overarching communications framework needed to integrate platform components and provide services to applications throughout the network.
Traditionally, to maximize performance and efficiency, designers of distributed systems have used a mix of interprocess communication (IPC) technologies to handle local IPC (between processes on the same node) and remote IPC (between processes on different nodes). Most designers, for example, use the operating system's native IPC services to maximize performance for local IPC, while using standard, more versatile technologies such as TCP to handle systemwide remote IPC, which occurs less frequently and has more forgiving latency and throughput requirements.
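For illustration only (this sketch is not from the article), the traditional split might look like the following in Python: an OS-native Unix-domain socket pair for local IPC, and a TCP connection for remote IPC, with the loopback interface standing in for a real interconnect.

```python
import socket

# Local IPC: an OS-native Unix-domain socket pair.
# The exchange stays inside the kernel and never touches the network stack.
local_a, local_b = socket.socketpair()
local_a.sendall(b"heartbeat")
assert local_b.recv(64) == b"heartbeat"
local_a.close()
local_b.close()

# Remote IPC: TCP, shown here over loopback for the sake of a runnable example.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # ephemeral port
server.listen(1)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
conn, _ = server.accept()
client.sendall(b"heartbeat")
assert conn.recv(64) == b"heartbeat"
for s in (client, conn, server):
    s.close()
```

Note that the two paths expose different addressing models and failure behaviors, which is exactly the duplication a single systemwide IPC would eliminate.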
What the embedded industry needs is a standard, open source, high-performance IPC that can be used systemwide for both local and remote IPC. Using a single systemwide IPC to link application processes and platform software components would have a number of advantages. A single IPC would:
- Significantly reduce time to market, not only for initial product development and integration, but for future upgrades over multiple generations of product.
- Make systems more scalable, enabling designers to reconfigure their applications, add/delete nodes, and relocate software components without having to modify the application code.
- Make applications and system software easier to migrate to new hardware, enabling equipment makers to reuse software across multiple generations of product.
- Be easier to learn, thereby expediting development.
- Make application code easier to read and understand, thereby reducing complexity and making it easier for equipment makers and support teams to fix problems and perform upgrades throughout the life of the system.
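The relocation advantage above rests on service-based addressing: applications send to a logical service name, and a lookup layer resolves that name to whatever node currently hosts the service. A minimal, purely hypothetical sketch (the table and names below are illustrative, not part of any real IPC):

```python
# Hypothetical service table mapping logical service names to their
# current physical location (node, port). Application code never
# hard-codes node addresses, so components can move freely.
SERVICE_TABLE = {
    "temperature-monitor": ("node-3", 7001),
}

def resolve(service: str):
    """Look up the current (node, port) location of a logical service."""
    return SERVICE_TABLE[service]

# Relocating a component only updates the table; callers are unchanged.
SERVICE_TABLE["temperature-monitor"] = ("node-7", 7001)
assert resolve("temperature-monitor") == ("node-7", 7001)
```

In a real systemwide IPC this table would be maintained by the transport itself and updated automatically as nodes join, leave, or fail.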
The ideal IPC should use a high-performance, lightweight, direct message-passing technology that provides the performance and versatility needed to satisfy local and remote IPC requirements across all CPU (including multicore devices), operating-system, and interconnect boundaries. It should provide dependable, high-speed transport for both the control and data plane over reliable as well as unreliable interconnects and protocols. It should support the encapsulation of other bearer protocols, such as TCP, UDP, and SCTP, for data transport. It should provide supervision and failure reporting for designated connections, along with built-in support for redundant links (both for physical CPU interconnects and logical connections between endpoints). It should scale from DSPs and microcontrollers to 64-bit CPUs. And it should support any distributed system topology, from a single processor on a single blade to large networks with complex cluster topologies deployed on hundreds of processors in a multirack system.
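Message passing over a byte-stream bearer requires framing, since streams like TCP do not preserve message boundaries. As a rough sketch of the idea (again, an assumption-laden illustration rather than any standard's actual wire format), a length prefix lets discrete messages ride over any stream transport, local or remote:

```python
import socket
import struct

def send_msg(sock, payload: bytes) -> None:
    # Prefix each message with a 4-byte big-endian length so message
    # boundaries survive the byte-stream transport underneath.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_msg(sock) -> bytes:
    def recv_exact(n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed connection")
            buf += chunk
        return buf
    (length,) = struct.unpack("!I", recv_exact(4))
    return recv_exact(length)

# The same API works unchanged over a local socket pair or a TCP connection:
a, b = socket.socketpair()
send_msg(a, b"sensor-reading:42")
assert recv_msg(b) == b"sensor-reading:42"
a.close()
b.close()
```

A real systemwide IPC would layer addressing, link supervision, and redundancy on top of framing like this, but the core principle, one message-passing API regardless of where the peer runs, is the same.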