
Using PCI Express as a fabric for interconnect clustering

Miguel Rodriguez

March 07, 2011


Given today’s demanding backplane requirements, the era of Gigabit Ethernet (GbE) as the de facto backplane interconnect is coming to a close. A number of interconnect technologies are vying to replace GbE, with the top contenders being 10 Gigabit Ethernet (10GbE), InfiniBand (IB) and PCI Express (PCIe). Though a clear winner has not yet emerged, PCIe, with its advanced capabilities, makes a strong case for becoming the ideal backplane interconnect.

Over the last decade, PCI has evolved from a parallel bus that merely served as the transport between a single host and its IO devices – one host managing a set of IO devices – into PCIe, a point-to-point, high-speed serial interconnect with advanced features capable of taking on challenging backplane demands.

Today, PCIe can readily support an efficient host-to-host communication model as well as other configurations, including IO resource sharing across multiple hosts. Such features lead to a significant reduction in system cost and complexity.
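
To make the host-to-host model concrete: PCIe systems typically carry host-to-host traffic through a non-transparent bridge (NTB), which exposes a window of a remote host’s memory as an ordinary memory-mapped region. What follows is a minimal sketch, in C on Linux, of one host pushing a message through such a window; the sysfs path, BAR index and window size are purely illustrative assumptions, not any particular vendor’s interface.

/* Minimal sketch: write a message into a PCIe NTB memory window.
 * The sysfs path, BAR number and window size below are illustrative
 * assumptions; real NTB hardware and drivers define their own layout. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define NTB_BAR_PATH "/sys/bus/pci/devices/0000:03:00.0/resource2"  /* hypothetical */
#define WINDOW_SIZE  (64 * 1024)                                    /* hypothetical */

int main(void)
{
    int fd = open(NTB_BAR_PATH, O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    /* Map the BAR that the NTB translates into the peer host's memory. */
    volatile char *win = mmap(NULL, WINDOW_SIZE, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
    if (win == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* A store into this window becomes a PCIe memory write that lands
     * directly in the remote host's buffer, with no network stack involved. */
    const char msg[] = "hello from host A";
    memcpy((void *)win, msg, sizeof(msg));

    munmap((void *)win, WINDOW_SIZE);
    close(fd);
    return 0;
}

In practice, a doorbell register or interrupt would notify the peer that data has arrived, but the essential point stands: host-to-host transfers reduce to memory reads and writes across the fabric.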

In addition, mainstream processor companies such as Intel have been integrating PCIe – not just in their chipsets, but as an integral part of the processor silicon itself. With such inherent advantages, PCIe can indeed assume the mantle of ideal backplane interconnect.

A fundamental backplane requirement (Figure 1 below) is a powerful fabric that delivers high throughput (>10 Gbps) and low latency (<5 µs). The fabric must also span backplane distances, not only within bladed environments (e.g., blade servers) but also for cabling across multiple blade chassis or, potentially, for connecting rack-mounted servers.
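
For perspective on the throughput target, the short calculation below compares raw PCIe link bandwidth against the >10 Gbps requirement. The lane widths chosen are illustrative assumptions; the signaling rates and encoding overheads (8b/10b for Gen1/Gen2, 128b/130b for Gen3) come from the PCIe specifications.

/* Back-of-the-envelope PCIe link bandwidth vs. a 10 Gbps backplane target. */
#include <stdio.h>

int main(void)
{
    /* Per-lane signaling rate (GT/s) and encoding efficiency per generation. */
    struct { const char *gen; double gts; double eff; } rates[] = {
        { "Gen1", 2.5, 8.0 / 10.0 },
        { "Gen2", 5.0, 8.0 / 10.0 },
        { "Gen3", 8.0, 128.0 / 130.0 },
    };
    const int lanes[] = { 1, 4, 8 };  /* illustrative link widths */

    for (size_t g = 0; g < sizeof(rates) / sizeof(rates[0]); g++)
        for (size_t l = 0; l < sizeof(lanes) / sizeof(lanes[0]); l++) {
            double gbps = rates[g].gts * rates[g].eff * lanes[l];
            printf("%s x%-2d: %5.1f Gbps%s\n", rates[g].gen, lanes[l], gbps,
                   gbps > 10.0 ? "  (exceeds the 10 Gbps target)" : "");
        }
    return 0;
}

Even a modest Gen2 x4 link (roughly 16 Gbps of effective bandwidth per direction) clears the 10 Gbps bar, and wider or faster links scale well beyond it.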


Figure 1. Traditional backplane for supporting IPC, LAN and SAN connectivity.

From a functional point of view, the backplane must support inter-processor communication (IPC) as well as access to an external local area network (LAN) and a storage area network (SAN). Traditional approaches use three different IO interfaces on each server node to accomplish this, and consequently three separate backplane interconnects are required to carry the IPC, LAN and SAN traffic.

Figure 1 above shows a traditional backplane in which each server uses a GbE interface for LAN connectivity, a Fibre Channel (FC) card for SAN connectivity and a 10GbE- or IB-based card for IPC connectivity.

Clearly, this is neither the optimal nor the preferred model, from both a cost and a complexity standpoint. The need for a unified backplane – one that supports all three traffic types, with each server connecting to the backplane through a single IO interface instead of three – is not just obvious but necessary.

The interconnect technologies under discussion here – PCIe, 10GbE and IB – can all lay claim to the unified backplane, each providing a feature set to support that application. PCIe, however, appears to deliver the set of features best suited to a unified backplane.
