One of the most critical aspects of implementing the emerging AdvancedTCA (ATCA) architecture is the ability of high-performance blades to communicate with each other so that vast quantities of data can be moved from board to board. The switching fabric within a shelf is the key to making this communication happen.
Switch fabrics today include a variety of technologies and implementations, both standardized and proprietary. In this article, however, we will look at practical ATCA switch fabric implementations for handling transit and transformation applications. We'll also show how the advanced switching interconnect will play a key role in meeting the demands of ATCA applications.
Topologies and Fabrics
The ATCA specification defines two separate interconnect fabrics: the base interface and fabric interface. The base interface accommodates essential interoperability over a switched fabric supporting 10/100/1000-Mbit/s Ethernet in a dual-star configuration. The fabric interface, on the other hand, allows for full mesh interconnect architectures, high-speed switched fabric architectures, or combinations of both.
Figure 1 illustrates an ATCA backplane implementing the base and fabric interfaces. Designed for an EIA 19-in. rack, this backplane can support up to 14 ATCA-compliant slots, with standard 16-slot shelf designs possible as well. Implementation of specific signaling technologies on the fabric interface is defined in separate “dotted” specifications of the PICMG 3.0 series. For example, Ethernet fabric interconnects are defined in the PICMG 3.1 specification.
The design choice among available topologies and interconnect protocols is governed by the targeted applications and their bandwidth requirements, by industry acceptance of a given interconnect technology, and by overall cost considerations. Generally, the cost of a backplane increases with the amount of bandwidth provided. However, the ability to provide a common backplane for a range of applications, while allowing interconnect technology upgrades via board-set changes rather than a complete shelf replacement, can result in a net cost savings.
Conceptually, the ATCA backplane was envisioned to support a wide range of communications applications both today and over the next decade. The bandwidth of the fabric backplane is a function of the aggregate access bandwidth per port or node and the total offered bandwidth of the backplane.
The fabric interface defines eight differential pairs per link suitable for enabling higher-speed signaling technologies. At the physical layer, each link of the fabric interface is capable of supporting speeds of at least 10 Gbit/s, while the maximum aggregate throughput of ATCA backplanes will reach several terabits per second as backplane technologies evolve.
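As a sanity check on the per-link figure, here is a minimal sketch assuming a XAUI-style channel: the eight differential pairs split into four transmit and four receive lanes, each lane signaling at an assumed 3.125 Gbaud with 8b/10b coding (both figures are illustrative choices, not values mandated by the specification):

```python
# Back-of-the-envelope ATCA fabric-link bandwidth, assuming a XAUI-style
# 4x channel (4 transmit + 4 receive differential pairs = 8 pairs total)
# running 3.125-Gbaud serdes lanes with 8b/10b line coding.

PAIRS_PER_DIRECTION = 4        # half of the 8 pairs per link
BAUD_RATE_GBPS = 3.125         # assumed serdes signaling rate per lane
CODING_EFFICIENCY = 8 / 10     # 8b/10b coding overhead

per_pair = BAUD_RATE_GBPS * CODING_EFFICIENCY  # payload per lane, ~2.5 Gbit/s
per_link = per_pair * PAIRS_PER_DIRECTION      # ~10 Gbit/s each direction

print(f"payload per pair: {per_pair:.2f} Gbit/s")
print(f"payload per link (one direction): {per_link:.2f} Gbit/s")
```

Faster serdes lanes or more efficient coding raise the per-link figure without changing the pair count, which is why the spec's "at least 10 Gbit/s" leaves headroom.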
Initially, ATCA platforms will appear in emerging wireless applications in edge and service areas of the network, where traffic is largely packet based. Wireless applications such as radio network controllers (RNCs), serving GPRS support nodes (SGSNs), and home location registers (HLRs) currently require several OC-3/OC-12 links per shelf, but will trend toward higher throughputs over time as data-enabled wireless handsets become more prevalent.
The most I/O-intensive applications are found in multi-service switches and edge routers. A typical deployment might be two to four OC-192 ports and eight to sixteen OC-48 (or 32 to 64 OC-12) ports. The presence of OC-192 ports requires the access rate per port to be at least 10 Gbit/s plus some speed-up to account for backplane protocol overheads (Figure 2).
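The arithmetic behind that access rate can be sketched as follows; the deployment mix matches the upper end of the example above, and the 25 percent speed-up figure is an assumed value for illustration, not one taken from the specification:

```python
# SONET OC-n rates are multiples of the OC-1 rate, 51.84 Mbit/s.
OC1_GBPS = 0.05184

def oc(n: int) -> float:
    """Raw line rate of an OC-n port in Gbit/s."""
    return n * OC1_GBPS

# Example deployment from the text: 4 OC-192 ports + 16 OC-48 ports.
total_io = 4 * oc(192) + 16 * oc(48)

# Per-slot access rate for an OC-192 line card, with an assumed 25%
# speed-up to cover backplane protocol overhead (illustrative figure).
access_rate = oc(192) * 1.25

print(f"OC-192 line rate:       {oc(192):.3f} Gbit/s")
print(f"total shelf I/O:        {total_io:.2f} Gbit/s")
print(f"per-slot access needed: {access_rate:.2f} Gbit/s")
```

An OC-192 port at 9.953 Gbit/s plus overhead thus pushes the per-slot access rate past the 10-Gbit/s baseline, which drives the 15 to 20 Gbit/s per-slot figure discussed below.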
When using 10 Gigabit Ethernet as the fabric, pin limitations may make it impossible to employ a dual-star topology while supporting the 15 to 20 Gbit/s needed per slot. However, as faster serializer/deserializer (serdes) devices emerge, along with more exotic backplane circuit board dielectric materials and connectors that enable higher signaling rates, higher performance can be achieved with simpler topologies like the dual star of the base fabric.
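The pin-count pressure behind this trade-off is easy to quantify by counting backplane channels for each topology; the helpers below assume a 14-slot shelf with two hub slots in the dual-star case:

```python
# Backplane channel counts for an ATCA shelf. A full mesh needs a
# point-to-point channel between every pair of slots; a dual star needs
# one channel from each node slot to each of two hub slots.

def full_mesh_links(slots: int) -> int:
    """Channels for a full mesh: one per unordered slot pair."""
    return slots * (slots - 1) // 2

def dual_star_links(slots: int, hubs: int = 2) -> int:
    """Channels for a dual star: each node slot connects to each hub."""
    return (slots - hubs) * hubs

print(full_mesh_links(14))   # 91 channels across the backplane
print(dual_star_links(14))   # 24 channels across the backplane
```

The full mesh buys per-slot bandwidth at the cost of nearly four times the backplane channels (and the connector pins behind them), which is exactly the pin-limitation argument made above.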
ATCA Application Usage Models
Generally speaking, ATCA platforms follow two usage models, corresponding to the common network element roles found within communications architectures and their resulting fabric usage. The first is the routing model, covering what are also called transit applications; the second is the application processing model, covering what are referred to as transformation applications.
The routing/transit model essentially places the ATCA chassis between two networks, routing packets from the ingress ports to the egress ports according to their requested destinations. The application processing model generally places the ATCA chassis closer to the network edge, within wireless or even enterprise networks, focusing on network safety and security applications with high bandwidth and computing requirements. Let's look at each of these models in more detail, starting with the routing application.
1. Routing (Transit) Applications
Routing applications, unlike processing applications, are focused more on the network core and have their own set of characteristics that affect fabric backplane bandwidth, compute requirements, and overall platform performance (Figure 3).
In a routing application, the ATCA platform acts as a router/switch within a network core, usually at the edge of an internal network, with ingress/egress connections to external LAN/MAN/WAN networks for sourcing and forwarding packets. The ATCA line cards act as packet inspection platforms: they receive packets, process them based on in-memory hash or lookup routing tables, and then forward the packets to their next destination in the network via the ATCA in-chassis switch and an egress line card connected to the external LAN/MAN/WAN networks.
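As a simple illustration of the lookup step, the sketch below performs a longest-prefix match over a toy routing table; real line cards implement this in TCAMs or hardware tries, and the prefixes and blade names here are invented for illustration:

```python
import ipaddress

# Toy routing table mapping destination prefixes to egress destinations.
# All entries are invented; a real table would hold thousands of routes.
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"):  "egress-blade-3",
    ipaddress.ip_network("10.1.0.0/16"): "egress-blade-5",
    ipaddress.ip_network("0.0.0.0/0"):   "default-uplink",
}

def next_hop(dst: str) -> str:
    """Return the destination for the most specific matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if addr in net]
    return ROUTES[max(matches, key=lambda n: n.prefixlen)]

print(next_hop("10.1.2.3"))   # egress-blade-5 (longest match: /16)
print(next_hop("192.0.2.1"))  # default-uplink (only /0 matches)
```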
Compute and bandwidth requirements are significant for these packet processing and forwarding functions, but highly scalable on ATCA platforms thanks to typical network usage patterns and the ATCA architecture's ability to grow easily to meet demand via commercial off-the-shelf (COTS) modular building blocks. An ATCA platform lets designers future-proof network elements by buying only the components they need today and scaling their compute, bandwidth, and application capacity as their networks grow and mature.
Protocol translation and table updates are achieved through the use of management packets on the control plane in the transit chassis (shown in Figure 3 above). Occasionally an application is designed to encapsulate management data in-band, which simplifies protocol requirements but somewhat increases system bandwidth and overhead.
Within an ATCA chassis, line card bandwidth requirements vary significantly, reflecting the need to support both asymmetric and symmetric data rates. Which model applies is application dependent, determined by the requirements of the specific network element and the networks attached to it. Typically in a symmetric application, line rates flowing to and from the ATCA chassis are approximately equal, and the chassis must scale adequately to meet the performance needs of line-rate processing.
In an asymmetric application, ingress and egress bandwidths are not equal. Typically in these applications, ingress bandwidth operates at higher rates while egress bandwidth is handled in a peer-to-peer fashion to optimize packet routing.
In addition to data plane traffic, designers must deal with control plane traffic. Control plane traffic consists of element control plane data generated by line cards. Control plane data includes state management and routing table lookups or updates which are exchanged between node boards within the chassis fabric. Due to the flexibility of the ATCA architecture, control plane data can be exchanged between boards on either the fabric interconnects or the base interconnect depending on the platform design.
Lastly, in order to maximize throughput within a network element, platform designers need to consider mechanisms which regulate traffic while optimizing platform resources. These may include back-pressure and flow-control techniques found in the ASI spec.
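As an illustration of the back-pressure idea, here is a toy credit-based flow-control scheme of the kind ASI (which borrows PCI Express link mechanics) uses between link partners: the transmitter may only send while it holds credits, and the receiver returns credits as it drains its buffers. The class and its sizes are invented for illustration:

```python
# Toy credit-based flow control between two link partners. The credit
# count models free buffer slots advertised by the receiver.
class CreditedLink:
    def __init__(self, credits: int):
        self.credits = credits      # receiver buffer slots available

    def try_send(self) -> bool:
        """Consume a credit and send, or apply back-pressure."""
        if self.credits == 0:
            return False            # no credits: hold the packet
        self.credits -= 1
        return True

    def return_credit(self) -> None:
        """Receiver freed a buffer slot; transmitter may send again."""
        self.credits += 1

link = CreditedLink(credits=2)
print(link.try_send(), link.try_send(), link.try_send())  # True True False
link.return_credit()
print(link.try_send())                                    # True
```

Because the transmitter can never overrun the advertised buffer space, congestion propagates back toward the source instead of causing packet drops inside the fabric.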
2. Application Processing (Transformation) Applications
Transformation processing applications, much like routing applications, sit between two networks and, as previously stated, perform functions such as VPN, firewall, and deep-packet inspection. Transformation processing platforms (also known as edge/access platforms) are often found within wireless network elements, or within enterprises as content processing platforms. In wireless networks the bandwidth requirements tend to be symmetric and fairly constant, while in enterprise applications bandwidth tends to be asymmetric, with traffic typically flowing in one direction.
Within an ATCA chassis there are up to 14 processing blades (configuration dependent), with the backplane acting as a common interconnect and various switching arrangements allowing data to pass between blades and ingress/egress ports. Data typically flows in from ingress ports, where lines are terminated on blades; traffic is then normalized, load-balanced, and routed to other blades within the chassis for further processing. Incoming protocols are generally transformed or encapsulated on the ingress board to improve efficiency, reduce overhead, and decrease bandwidth requirements over the backplane before being routed to adjacent boards within a chassis.
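The load-balancing step can be sketched as a flow-aware hash that pins every packet of a flow to the same processing blade, preserving per-flow ordering; the blade names and 5-tuple fields below are illustrative:

```python
import hashlib

# Hypothetical set of protocol-processing blades in the chassis.
BLADES = ["proc-blade-1", "proc-blade-2", "proc-blade-3", "proc-blade-4"]

def pick_blade(src: str, dst: str, sport: int, dport: int, proto: str) -> str:
    """Hash the flow 5-tuple so one flow always maps to one blade."""
    key = f"{src}:{dst}:{sport}:{dport}:{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return BLADES[int.from_bytes(digest[:4], "big") % len(BLADES)]

# Packets of the same flow always land on the same processing blade.
a = pick_blade("10.0.0.1", "10.0.0.2", 1234, 80, "tcp")
b = pick_blade("10.0.0.1", "10.0.0.2", 1234, 80, "tcp")
print(a == b)  # True
```

Keeping a flow on one blade avoids reordering and lets each blade hold the complete session state, at the cost of coarser balancing than per-packet spraying.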
Processing blades typically terminate the session at some level or process the upper-layer protocols before forwarding the traffic to the destination line access blade on the egress side. The egress line card terminates the internal session and adjusts the protocol to match the external network, as shown in Figure 4. Notice in Figure 4 that the local switch handles two passes of the traffic between ingress and egress.
In transformation applications, the protocol processing blades can consume or generate traffic. They can also inspect or modify the upper layers of the packets they process. Inspection operations distinguish transformation apps from the transit apps in that system end-to-end latency is typically much greater due to the processing, and the ingress and egress flows are not necessarily symmetric. This leads to a more complex traffic model, but one that typically does not suffer from the congestion issues faced by transit systems at the core network.
The ingress line card fabric access bandwidth is typically many times the capacity of the protocol-processing blades. Therefore, the balance between processing blades and line cards is typically many to one. Fabric performance is relatively high, using per-flow management and a speed-up ratio of two or three to one over the ingress line rates. Ingress and egress line cards might not be symmetric; for example, several OC-3 lines sometimes aggregate into a smaller number of OC-12 lines.
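That aggregation example works out as follows; the line counts and the 2:1 speed-up are assumed values chosen to be consistent with the ranges quoted above:

```python
# SONET rates: OC-n is n times the OC-1 rate of 51.84 Mbit/s.
OC1_MBPS = 51.84
oc3 = 3 * OC1_MBPS     # ~155.52 Mbit/s per ingress line
oc12 = 12 * OC1_MBPS   # ~622.08 Mbit/s per egress line

# Assumed asymmetric configuration: 8 OC-3 ingress lines aggregating
# into OC-12 egress lines (4 OC-3 equal 1 OC-12 in raw capacity).
ingress_lines = 8
ingress_total = ingress_lines * oc3
egress_lines_needed = round(ingress_total / oc12)   # 2 OC-12 lines

# Fabric bandwidth per slot with an assumed 2:1 speed-up over ingress.
fabric_bw_per_slot = ingress_total * 2

print(f"aggregate ingress:  {ingress_total:.2f} Mbit/s")
print(f"OC-12 egress lines: {egress_lines_needed}")
print(f"fabric bandwidth:   {fabric_bw_per_slot:.2f} Mbit/s per slot")
```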
The transformation application can make use of either Ethernet or ASI (or even InfiniBand) equally well. Since ingress traffic is relatively non-bursty and load balanced at the line interface, statistical blocking is not a major issue. Thus, the more advanced flow and QoS features of ASI are not required.
However, the link granularity of ASI in this space is highly beneficial as this class of applications requires finer increments of fabric bandwidth to better match the wide-range of expected performance. Furthermore, the switching function is a simple Layer 2 operation that ideally would take advantage of virtual output queuing to minimize the amount of speed-up that is required.
Through modularization and standardization, the ATCA architecture's inherent flexibility and scalability let it address the needs of a variety of applications, including the transit and transformation applications and their fabric usage discussed in this article. As shown here, the ASI fabric provides a low-overhead, scalable fabric that meets the demands of high-end transit applications while not being overly complicated for lower-performance transformation and server applications. In addition, fabric management in ATCA is a fairly straightforward process, especially when using the ASI fabric.
About the Authors
Jay Gilbert is a senior technical manager in Intel's Modular Hardware Platforms group. He has a BSEE from the Oregon Institute of Technology and an MBA from Portland State University. Jay can be reached at .
Karel Rasovsky is an engineer in Intel's Communication Infrastructure Group. He holds an MS degree in computer engineering from Florida Atlantic University and a BSEE from University of Technology in Brno, Czech Republic. Karel can be reached at .