PCI Express vs. Ethernet: A showdown or coexistence?

Until now, the boundaries between PCI Express (PCIe) and Ethernet have been clearly defined: PCIe as a chip-to-chip interconnect and Ethernet as a system-to-system technology. There are good reasons (and a few less compelling ones) why these boundaries have endured, and the two technologies have coexisted comfortably. While nothing on the horizon will change this fundamentally, PCIe is showing every sign of competing with Ethernet for territory that was once Ethernet's alone: specifically, within the rack. Can it really compete and win against Ethernet?

Current Architecture
Traditional systems currently deployed in volume must support several interconnect technologies. As Figure 1 shows, Fibre Channel and Ethernet are two examples of these interconnects; others, such as InfiniBand, are also common.

Figure 1: Example of a traditional I/O system in use today

This architecture has several limitations:

  • Existence of multiple I/O interconnect technologies
  • Low utilization rates of I/O endpoints
  • High power and cost of the system due to the need for multiple I/O endpoints
  • I/O is fixed at design and build time, with no flexibility to change it later
  • Management software must handle multiple I/O protocols, adding overhead

This architecture is severely disadvantaged by its use of multiple I/O interconnect technologies, which increases latency, cost, board space, and power. The overhead might be justifiable if every endpoint were busy 100 percent of the time, but more often than not the endpoints are under-utilized, so system users pay for capacity they rarely use. The added latency arises because the PCIe interface native to the processors in these systems must be converted to each of the other protocols; designers can reduce latency by staying with the processors' native PCIe and converging all endpoints onto it.

Clearly, sharing I/O endpoints (see Figure 2) is the solution to these limitations. The concept appeals to system designers because it lowers cost and power, improves performance and utilization, and simplifies design. With so many advantages, it is no surprise that multiple organizations have tried to achieve it; the PCI-SIG, for example, published the Multi-Root I/O Virtualization (MR-IOV) specification for exactly this purpose. However, due to a combination of technical and business factors, MR-IOV hasn't taken off, even though more than five years have passed since its release.

Figure 2: A traditional I/O system using PCI Express for shared I/O

Additional advantages of shared I/O are:

  • As I/O speeds increase, the only additional investment needed is to change the I/O adapter cards. In earlier deployments, when multiple I/O technologies existed on the same card, designers would have to re-design the entire system, whereas in the shared-I/O model, they can simply replace an existing card with a new one when an upgrade is needed for one particular I/O technology.
  • Since multiple I/O endpoints don’t need to exist on the same cards, designers can either manufacture smaller cards to further reduce cost and power, or choose to retain the existing form factor and differentiate their products by adding multiple CPUs, memory and/or other endpoints in the space saved by eliminating multiple I/O endpoints from the card.
  • Designers can reduce the number of cables that crisscross a system. Multiple interconnect technologies demand different (and multiple) cables, each carrying its own bandwidth and protocol overhead. Narrowing the range of I/O interconnect technologies reduces the number of cables needed for the system to function, simplifying the design and delivering cost savings.

Implementing shared I/O in a PCIe switch is the key enabler of the architecture depicted in Figure 2. As mentioned earlier, MR-IOV hasn't quite taken off, and a prevailing opinion is that it probably never will. To the rescue comes Single-Root I/O Virtualization (SR-IOV) technology, which implements I/O virtualization in hardware for improved performance and makes use of hardware-based security and quality-of-service (QoS) features in a single physical server. SR-IOV also allows an I/O device to be shared by multiple guest operating systems (OSes) running on the same server. In 2007, the PCI-SIG released the SR-IOV specification, which calls for one physical PCIe device (be it a network interface card, host bus adapter, or host channel adapter) to be divided into multiple virtual functions. Each virtual function can then be assigned to a virtual machine, allowing one physical device to be shared by many virtual machines and their guest OSes.

This requires I/O vendors to develop devices that support SR-IOV, which provides the simplest approach to sharing resources or I/O devices among different applications. The trend is that most endpoint vendors already support SR-IOV, and many more will continue to add this support. On Linux, for instance, a supporting device exposes its virtual functions through standard kernel interfaces, as the sketch below illustrates.
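As a concrete illustration, here is a minimal sketch, assuming a Linux host and an SR-IOV-capable endpoint, of how virtual functions are typically enabled through the kernel's standard PCI sysfs attributes; the PCI address is a placeholder, not a reference to any device discussed in this article.

```python
# Minimal sketch: enabling SR-IOV virtual functions (VFs) on a Linux host
# through the kernel's standard PCI sysfs interface. Must run as root; the
# PCI address is a placeholder for a real SR-IOV-capable endpoint.
from pathlib import Path

def enable_sriov_vfs(pci_addr: str, num_vfs: int) -> None:
    dev = Path("/sys/bus/pci/devices") / pci_addr
    max_vfs = int((dev / "sriov_totalvfs").read_text())
    if num_vfs > max_vfs:
        raise ValueError(f"device supports at most {max_vfs} VFs")
    numvfs = dev / "sriov_numvfs"
    if int(numvfs.read_text()) != 0:
        numvfs.write_text("0")       # kernel requires dropping to 0 first
    numvfs.write_text(str(num_vfs))  # the PF driver then creates the VFs
    # Each VF now appears as its own PCI function, assignable to a VM.

# Example (hypothetical address): enable_sriov_vfs("0000:03:00.0", 4)
```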
Adding to its many advantages already cited here, PCIe is also a lossless fabric at the link level.

The PCIe specification defines a robust flow-control mechanism that prevents packets from being dropped. Every PCIe packet is acknowledged at every hop, ensuring successful transmission. In the event of a transmission error, the packet is replayed, something that occurs in hardware without any involvement of the upper layers. Data loss and corruption in PCIe-based storage systems are therefore highly unlikely.
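To make that mechanism concrete, here is a toy software model of the acknowledge-and-replay behavior described above. Real PCIe does this in silicon at the data link layer with sequence numbers, link CRCs, and a hardware replay buffer; the class and its methods are purely illustrative, not a real PCIe API.

```python
# Toy model (illustration only) of PCIe-style acknowledge-and-replay.
from collections import OrderedDict

class ReplayBuffer:
    def __init__(self, wire):
        self.wire = wire              # assumed: any object with transmit(seq, pkt)
        self.next_seq = 0
        self.pending = OrderedDict()  # seq -> packet not yet acknowledged

    def send(self, packet):
        # Keep a copy of every packet until the receiver acknowledges it.
        self.pending[self.next_seq] = packet
        self.wire.transmit(self.next_seq, packet)
        self.next_seq += 1

    def on_ack(self, acked_seq):
        # An ACK is cumulative: everything up to acked_seq arrived intact,
        # so those copies can be released from the replay buffer.
        for seq in [s for s in self.pending if s <= acked_seq]:
            del self.pending[seq]

    def on_nak(self):
        # The receiver saw a corrupted packet: replay every unacknowledged
        # packet, in order, with no involvement from upper protocol layers.
        for seq, packet in self.pending.items():
            self.wire.transmit(seq, packet)
```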

PCIe offers a simplified solution by allowing all I/O adapters (10GbE, FC, or others) to be moved outside the server. With a PCIe switch fabric providing virtualization support, each adapter can be shared across multiple servers while presenting each server with a logical adapter. The servers (or the virtual machines on each server) continue to have direct access to their own set of hardware resources on the shared adapter. The resulting virtualization allows better scalability, since the I/O and the servers can be scaled independently of each other. I/O virtualization avoids over-provisioning the servers or the I/O resources, leading to cost and power reduction.

Table 1 provides a high-level overview of the cost comparison, and Table 2 a high-level overview of the power comparison, when using PCIe instead of 10G Ethernet.

Table 1: Cost savings comparison between PCIe and Ethernet

Table 2: Power savings comparison between PCIe and Ethernet

The price estimates are based on a broad industry survey and assume that pricing will vary according to volume, availability, and vendor relationships with regard to top-of-rack (ToR) switches and the adapters. These tables provide a framework for understanding the cost and power savings of using PCIe for I/O sharing, principally through the elimination of adapters.

This, of course, raises the question of why the cost and power comparisons aren't made on a per-port basis instead of per unit of bandwidth. The primary reason is that the trend these days is for data center vendors to charge by the bandwidth used rather than by the number of connections. PCIe offers roughly 3x the bandwidth of 10G Ethernet and allows vendors using it to charge more; if someone were to build and compare systems with the same number of ports, the conclusion would be the same: PCIe offers more than 50 percent savings compared to Ethernet.
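A quick back-of-envelope check of that roughly-3x figure, assuming the comparison is against a Gen3 x4 link (the lane count is our assumption; the article does not specify one):

```python
# Back-of-envelope check of the ~3x bandwidth claim for PCIe Gen3 vs. 10GbE.
GT_PER_LANE = 8.0            # PCIe Gen3 signaling rate: 8 GT/s per lane
ENCODING = 128.0 / 130.0     # Gen3 128b/130b encoding efficiency
LANES = 4                    # assumed Gen3 x4 link

pcie_gbps = GT_PER_LANE * ENCODING * LANES   # ~31.5 Gbps usable bandwidth
ethernet_gbps = 10.0                         # 10G Ethernet line rate

print(f"PCIe Gen3 x4 / 10GbE = {pcie_gbps / ethernet_gbps:.2f}x")  # ~3.15x
```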

Summary
This article has focused on the cost and power comparisons of PCIe and Ethernet; however, other technical distinctions between the two need to be considered as well. Still, with PCIe becoming native on an increasing number of processors from major vendors, designers can benefit from the lower latency realized by not having to use any components between a CPU and a PCIe switch. With this new generation of CPUs, designers can place a PCIe switch directly off the CPU, reducing both latency and component cost.

PCIe technology has become ubiquitous, and the Gen3 incarnation of this powerful interconnect technology, at 8 gigatransfers per second (GT/s) per lane, is more than capable of supporting shared I/O and clustering, providing system designers with an unparalleled tool for making their designs optimally efficient.

To satisfy the requirements of the shared-I/O and clustering market segments, vendors such as PLX Technology are bringing to market high-performance, flexible, power- and space-efficient devices. These switches have been architected to fit the full range of applications cited above. Looking forward, PCIe Gen4, with speeds of up to 16 GT/s per lane, can only accelerate and expand the adoption of PCIe technology into newer market segments, while making it easier and more economical to design and use.

Multiple global vendors have already adopted this ubiquitous interconnect technology to enable the sharing of I/O endpoints and thereby lower system costs, power requirements, maintenance, and upgrade needs. PCIe-based sharing of I/O endpoints is expected to make a huge difference in the multi-billion-dollar datacenter market.

However, Ethernet and PCIe will maintain their coexistence, with Ethernet connecting systems to one another while PCIe continues its fast evolution within the rack.

Krishna Mallampati is senior director of product marketing for PCIe switches at PLX Technology, Sunnyvale, Calif. He can be reached at .
