Understanding backplane, chip-to-chip tech

High-speed interconnects may be used to link chips, boards or systems. The major difference among them is the distance between the devices being linked.
Chip-to-chip interconnects normally span less than 20 inches, are driven by high-volume chip shipments and are therefore fairly well-standardized. Intel, for example, is driving PCI Express as the next chip-to-chip interconnect in the PC. Similarly, AMD has driven HyperTransport as a standard within its CPU/chip set architecture. The large volume of PCs enables low-cost chip-to-chip interconnects, which are then deployed in non-PC applications.
Ethernet is another high-volume application, driving Xaui as an alternative standard for connecting chips. An OEM or silicon vendor can use one of those three interconnect specifications to meet the technical and cost requirements for most chip-to-chip applications.
Typically, OEMs develop proprietary backplanes to connect boards. In networking systems these boards are called line cards and in computing and storage they are called blades. For further scalability, the resulting system may be connected to another system using a copper or fiber cable.
When these systems use a proprietary architecture, there is little need to standardize on a backplane or intersystem interconnect. But over the past couple of years, several developments have begun to prompt standardization of both backplane and system interconnects.
First, networking OEMs are relying more on merchant silicon after cutting back on internal ASIC programs. Second, emerging interconnect standards enable multiple sources and lower-cost silicon and boards. For example, new applications like blade servers may use blades from multiple vendors. Consequently, the industry is motivated to develop specifications for these applications. Because these specs are unlikely to address all market requirements, some OEMs will continue to develop and use proprietary interconnect products.
For intersystem connections, the IEEE has developed the 10GBase-CX4 specification, which is Xaui over an Infiniband cable. 10GBase-CX4 combines two proven technologies: Xaui from 10-Gigabit Ethernet and the twin-axial copper cable from Infiniband. The specification requires links to support a distance of at least 15 meters between systems, enough for most racks. Alternatively, instead of 10GBase-CX4, OEMs may interconnect systems using Infiniband, which is gaining momentum.

Application requirements
An increasingly broad set of systems, including telecom switches, multiservice provisioning platforms, add/drop multiplexers, digital cross-connects, storage switches, routers, embedded platforms, multiprocessor systems and blade servers, uses backplanes to connect boards. These systems typically provide modularity through the addition of line cards, switch cards and services blades to a single high-speed backplane. In most of these platforms, the backplane uses a serial interconnect operating at rates greater than 1 Gbit/second.
Modular communication switches are deployed in local, metropolitan, wide-area or storage-area networks. A typical switch uses transceivers in its line cards, fabric cards and services cards. A transceiver may be a standalone product or integrated into the switch fabric.
The blade server applies a modular architecture to the traditional PC configuration. Like communication switches, blade servers exploit the modular architecture to add greater performance and more services. For example, users can add more host-processor, I/O or storage blades to a blade-server system as needed.
Communication and computing systems share some common backplane requirements. Because of legacy space constraints, the size of the backplane-based chassis is around 21 inches. The backplane interconnect is often specified to operate over distances of up to 40 inches with two connectors. To keep the cost down, OEMs prefer to use high-volume connectors and FR4 material to build the printed-circuit board.
Backplane trace lengths may be different, introducing different delays and requiring skew compensation. Most modular systems will experience congestion at some point, for example at the switch card or at an egress line-card port that receives more traffic than it can handle. To address this congestion, backplanes are designed to support flow control and traffic management.
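One widely used link-level mechanism is credit-based flow control, in which a sender may transmit only while it holds credits representing free buffers at the receiver; Infiniband and PCI Express, among others, work this way. The Python sketch below is a minimal illustration of the idea; the class name and buffer size are assumptions for illustration, not drawn from any of the specifications discussed here.

    # Minimal sketch of link-level, credit-based flow control: one common
    # way a backplane fabric keeps a fast sender from overrunning a
    # congested receiver. Names and sizes are illustrative only.
    from collections import deque

    class CreditedLink:
        def __init__(self, buffer_slots=8):
            self.credits = buffer_slots   # sender's view of free receiver buffers
            self.rx_queue = deque()       # receiver's ingress buffer

        def send(self, packet):
            if self.credits == 0:
                return False              # back-pressure: sender must hold the packet
            self.credits -= 1
            self.rx_queue.append(packet)
            return True

        def drain(self):
            # Receiver forwards one packet toward its egress port and
            # returns a credit to the sender.
            if not self.rx_queue:
                return None
            self.credits += 1
            return self.rx_queue.popleft()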
However, backplane-based comms and computer systems also have unique requirements. Telecom systems are required to supply redundancy, failover mechanisms and bit error rates (BER) as low as 10^-15, which in turn drives a backplane BER of better than 10^-17. Ethernet has traditionally offered a BER of 10^-12, requiring technologies like forward error correction to enhance Ethernet for telecom applications.
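To put those numbers in perspective, the short Python sketch below converts a bit error rate into the mean time between errors on a single 3.125-Gbit/s lane; the arithmetic is a back-of-the-envelope illustration rather than part of any specification.

    # Back-of-the-envelope conversion from a bit error rate (BER) to the
    # mean time between bit errors on one serial lane. The 3.125-Gbit/s
    # lane rate and the BER figures are the ones quoted in the article.
    def seconds_between_errors(line_rate_bps, ber):
        # On average, one bit in every 1/BER bits is in error.
        return (1.0 / ber) / line_rate_bps

    LANE_RATE = 3.125e9  # bits per second

    for ber in (1e-12, 1e-15, 1e-17):
        secs = seconds_between_errors(LANE_RATE, ber)
        print(f"BER {ber:.0e}: one error about every {secs:,.0f} seconds")
    # Roughly: 1e-12 -> one error every ~5 minutes per lane,
    # 1e-15 -> every ~4 days, 1e-17 -> about once a year.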
By contrast, storage switches are often defined by their need for very low latency. For blade-server applications, interprocessor communication may also require low latency.

Standards
Because there is no high-volume platform to drive a backplane standard, there are several competing efforts to establish a specification as the standard.
It is unlikely that one specification will address all the requirements for networking and computing applications.
Existing industry or vendor backplane specifications include Advanced Switching, Ethernet, Infiniband, the Optical Internetworking Forum's common electrical interface and RapidIO. Many of these backplane specs have been adopted by the specification for Advanced Telecom Computing Architecture (ATCA) chassis developed by the PCI Industrial Computer Manufacturers Group (PICMG).
ATCA 3.0 defines a backplane-based system using switch-fabric and serial-link technology that can scale from 2.5 Gbits/s to 2.5 Tbits/s. That scalability enables systems ranging from a traditional circuit switch to a DSLAM to a core router. The base ATCA specification defines physical characteristics such as chassis form factor, thermal management, power management and electrical signaling, but not the fabric protocol. It defines the dimensions and mechanical specifications of the shelf for central-office and data-center installations. For example, the shelf can be populated with up to 16 boards, each 13 x 11 inches. Up to three shelves can be stacked, dissipating around 10 kilowatts of power. Management signals are defined for functions such as power management, cooling, watchdog timers and failure notification.
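As a rough illustration of that scalability claim, the sketch below estimates aggregate fabric capacity for a 16-slot shelf under two topologies commonly used with ATCA, dual star and full mesh. The assumed 10 Gbits/s of payload per link (four 3.125-Gbaud lanes after 8b/10b coding) is an assumption for illustration, not a figure from the specification.

    # Rough estimate of aggregate backplane fabric capacity for a 16-slot
    # shelf. Topology determines how many board-to-board links exist; the
    # per-link payload rate is an assumed 10 Gbit/s.
    def dual_star_gbps(slots, link_gbps):
        # Each line card connects to two switch (hub) cards.
        return 2 * (slots - 2) * link_gbps

    def full_mesh_gbps(slots, link_gbps):
        # Every board has a direct link to every other board.
        return slots * (slots - 1) // 2 * link_gbps

    LINK_GBPS = 10
    print(dual_star_gbps(16, LINK_GBPS))   # 280 Gbit/s of link capacity
    print(full_mesh_gbps(16, LINK_GBPS))   # 1,200 Gbit/s of link capacity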
Because there are competing fabric-standard activities, PICMG allowed different electrical signaling and data protocols to be layered on top of ATCA 3.0. The protocols may be used for different applications and will compete directly against each other.
StarFabric is a serial PCI technology developed by Stargen and will likely be succeeded by Advanced Switching. Originally developed for the data center, Infiniband has been adapted to fit the ATCA format. Compared with Advanced Switching (AS) and serial RapidIO (RIO), Infiniband products are shipping today and the specification has an established software infrastructure for interprocessor communications. Compared with Gigabit Ethernet and Fibre Channel, Infiniband costs more but provides better performance and support software for high-performance computing.
Infiniband is currently deployed in computer clusters and should find a home in blade servers. Vendors that are expected to ship products supporting Infiniband-based systems include HP, IBM, Oracle, Sun and several other OEMs. With an infrastructure that includes silicon, switches, servers and software, the Infiniband community is focusing on reducing the cost of implementation, which should help increase volume shipments.
AS and RIO are specifically designed to address communication requirements for backplanes. Each specification defines quality-of-service, flow control, traffic priorities and congestion management.
Led by a group consisting of Agere, Alcatel, Huawei, Intel, Siemens, Vitesse and Xilinx, AS adds switch-fabric extensions to the physical layer and link layer defined in the PCI Express specification. Essentially, AS encapsulates different types of protocols and transports them through an AS fabric. The first silicon products to implement AS, currently under development at startups and at Vitesse Semiconductor Corp., should sample in early 2005, and systems could follow by late 2006. A few undisclosed OEMs are working to develop switching platforms.
Championed by leading DSP vendors Freescale (formerly Motorola), Analog Devices and Texas Instruments, the RapidIO Trade Association includes such OEM vendors as Alcatel, Ericsson, EMC, IBM and Lucent. RIO is a three-layer architecture consisting of transport, logical and physical layers.
Serial RapidIO (sRIO) links are either 1 bit or 4 bits wide and can operate at 1.25 GHz, 2.5 GHz and 3.125 GHz. This serial mode is an extension of the parallel RIO interconnect, which is currently shipping. In September, Tundra, the leading sRIO switch vendor, demonstrated sRIO operation.
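Because these serial links use 8b/10b line coding, the payload rate is 80 percent of the signaling rate. The short sketch below works out the resulting per-link bandwidth for the widths and rates mentioned above; the helper function is illustrative only.

    # Payload bandwidth of a serial RapidIO link, assuming 8b/10b line
    # coding (10 baud on the wire for every 8 bits of data).
    def srio_payload_gbps(baud_ghz, lanes):
        return baud_ghz * 0.8 * lanes

    for baud in (1.25, 2.5, 3.125):
        for lanes in (1, 4):
            print(f"{lanes}x link at {baud} Gbaud -> "
                  f"{srio_payload_gbps(baud, lanes):.2f} Gbit/s payload")
    # A 4x link at 3.125 Gbaud carries about 10 Gbit/s of payload.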
TI, Freescale and Analog Devices are all developing products that use RIO interconnects. Fueled by real demand from the wireless-infrastructure OEMs, the first sRIO products are expected in early 2005. In 2006, these systems should be in field trials leading to volume production.
Ethernet continues to be a popular backplane interconnect for computing, communications and embedded applications because of its low cost and wide availability. Because it was available earlier, Ethernet ships in greater volume than Infiniband, Myrinet (a proprietary interconnect for computer clusters), RIO and StarFabric. OEMs, however, must deal with Ethernet's limitations, such as its relatively high latency, so they often develop software extensions to adapt Ethernet to their systems.
In high-performance computing and switches with a large capacity (i.e., greater than 80 Gbits/s), Gigabit Ethernet lacks the performance and traffic management capabilities it needs to be competitive. The IEEE 802.3ap work group is focusing on these limitations by adapting 10-Gigabit Ethernet for the backplane.
At first, the 10G backplane will use four 3.125-Gbit/s lanes, with 10-Gbit/s serial lanes to follow. Although 10G backplanes have the potential to become the dominant interconnect, the specification work is still in its early stages, with discussions of requirements such as bit error rate, coding schemes and board material.

Market trends
RHK forecasts the ATCA market to grow to as much as 70,000 units by 2007. The total available market for the standard protocols, however, should be greater than this since OEMs will also use proprietary chassis. Those chassis may use any of the fabric protocols discussed above, independent of ATCA 3.0.
The blade server market is in its early stages and is expected to grow significantly in the next few years. International Data Corp. (Framingham, Mass.) estimates the blade server market will grow from 400,000 units in 2004 to 2.75 million by 2008. Because of this opportunity, many vendors may choose to focus on the blade server market.
A standard approach enables third parties to develop blades for the servers and increases options for the user. Too much standardization, however, opens the high-value server market to low-cost manufacturers from China and Taiwan. Major OEMs, therefore, may try to blend standards with proprietary designs. For example, to encourage third-party blade development, IBM Corp. recently opened up the architecture of its proprietary BladeCenter platform.
The silicon opportunities among backplanes may be segmented into standalone transceivers, FPGAs and integrated chips like switch fabrics. The transceiver vendors target better performance and lower power than that available in standard implementations. For example, 3.125-Gbit/s transceivers shipped well before products that integrated multiple transceivers and offered standard protocols. Today, transceiver vendors are offering 6.25-Gbit/s or greater-rate products, while the standard protocols are focused at 3.125 Gbits/s. Standalone transceivers are attractive for OEMs developing proprietary backplanes or retrofitting an older backplane to increase system capacity.
With 10-Gbit/s serial links available on leading FPGAs, these programmable parts could erode the market for standalone transceivers. An FPGA with integrated transceivers and uncommitted arrays is often attractive for OEMs that have unique requirements. In the past, the performance of FPGAs has limited their deployment and kept the market open for standalone vendors. With increased learning and performance improvements, this situation may change.
FPGAs, however, are not always able to meet the performance, feature and latency requirements of standard backplane applications. So application-specific standard products may provide lower cost and better performance for the traffic-management and buffering requirements of backplane protocols. Consequently, most solutions for standard protocols such as Infiniband, AS and RIO should use highly integrated chips.

Conclusions
Despite some clear winners like Ethernet and Infiniband, the battle for the dominant backplane standard is likely to continue for the next few years. That's because there is no single set of requirements for backplanes, and therefore it's unlikely that a single standard will meet the backplane needs for all applications. In general, low-end applications are driven by cost while high-end applications are driven by performance.
In the next two to three years, lower cost and broader availability could drive standard solutions for low-end networking applications. With little need for interoperability, high-end networking applications may continue to use proprietary solutions for the next three to five years. In computing, blade server applications are driven by cost and interoperability, while high-performance computing applications are driven by performance.
Linear extrapolation would point to Gigabit Ethernet and PCI Express as the popular standards for backplanes. Although each of these is proven, low-cost and available from multiple suppliers, OEMs may need to add significant software.
With a unique value proposition, Infiniband has a window of opportunity for the next three to five years. After that, it should continue but at lower volumes.
Although backplane versions of 10-Gigabit Ethernet could consolidate the requirements of multiple segments, the effort started later than AS or RIO and requires significantly more work. AS and sRIO are at about the same stage of product development, so their adoption will hinge on product execution. While several vendors and OEMs are developing AS-based products, AS still needs to establish itself by garnering support from a high-volume OEM. For its part, RIO appears to have garnered a critical mass of OEMs in wireless and embedded-DSP applications.
Jag Bolaria is senior analyst at The Linley Group. You can contact him at firstname.lastname@example.org.