Using multi-root (MR) PCIe to extend next-gen multi-host storage & server switch fabrics

Akber Kazmi

May 19, 2009

In embedded designs, enterprise systems and data centers, managers typically look for ways to optimize the use and improve the performance of capital equipment. In the recent past, we have seen the introduction of several methodologies intended to maximize available system resources, such as cloud computing, grid computing, utility computing, cluster computing, server virtualization, and I/O sharing.

Whether the resource is compute power, storage capacity, or the bandwidth of interconnect devices on the switch fabric, each of these methodologies attempts to get more for less.

The majority of the current network server infrastructure is based on x86 server architecture and RISC CPUs, Ethernet and Fibre Channel (FC) switches, and FC/Ethernet I/O devices. PCI Express (PCIe), which has been commonly perceived as a chip-to-chip single-host interconnect technology, is quietly making headway into switch fabrics that can replace current server and storage fabrics.

This article will discuss how the introduction of new PCIe multi-root (MR) switches impacts the development of future switch fabrics for enterprise systems and data centers.

Pressure Points
Let's look at the "pressure points" of an information technology professional tasked with serving a broad client base: administering all of the clients' applications while ensuring efficient management and maintenance of the data center equipment.

In data centers, servers and storage systems are connected together and to the outside world through routers, switches and directors. Traditionally, data centers use dedicated servers for each application and dedicated switching systems for each traffic type. However, most applications do not run 24/7, leaving servers and switches underutilized.

This traditional method raises several concerns, such as a low rate of return on expensive equipment, highly inefficient use of power caused by underutilized machines, and the large amount of valuable physical space required by these data centers. Running dedicated servers for each individual application/service, and a segregated network for each, poses another set of challenges from a management perspective, as each application/service calls for a unique skill set.

Ideally, system managers would like to have flexibility to access any host, I/O device, application, database or other resource instantly and reliably. Although we are not quite there yet, capabilities to achieve this goal are being realized in small increments. For example, software-based virtualization techniques started to roll out in 2007 and are now broadly deployed.

This technique, illustrated in Figure 1 below, enables a single host to serve multiple applications. It also allows a single task to be split and executed across multiple host CPUs, or moved from one CPU to another. The figure shows an environment running multiple system images (SIs) and applications on a single host CPU.

Figure 1. Software-based virtualization techniques

Although each server blade physically houses only one network interface card (NIC) and one host bus adapter (HBA), the software creates multiple virtual NICs and HBAs for each interconnect technology (Ethernet, FC, etc.).

This works well for serving multiple applications on each server blade, but it burdens the host CPUs with the work of running the virtual NICs and HBAs, as the sketch below illustrates. Additionally, each server blade still uses a dedicated NIC and HBA, as required by the traditional approach.
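
To make that CPU burden concrete, here is a minimal C sketch of software-based I/O virtualization. Everything in it is hypothetical and simplified: the `vnic` structure, the ring sizes, and the `phys_nic_send()` stub are invented stand-ins for a real hypervisor's virtual NIC layer and physical NIC driver. The point is that the host CPU itself copies and forwards every frame from every virtual NIC through the single shared physical port.

```c
/* Minimal sketch of software-based NIC virtualization: several
 * virtual NICs multiplexed onto one physical NIC by the host CPU.
 * All names and types are hypothetical, for illustration only. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define RING_DEPTH 64
#define FRAME_MAX 1518

struct frame {
    uint8_t data[FRAME_MAX];
    size_t  len;
};

/* One virtual NIC: its own MAC address and TX ring, as presented
 * to a guest system image (SI). */
struct vnic {
    uint8_t      mac[6];
    struct frame tx_ring[RING_DEPTH];
    unsigned     head, tail;   /* producer / consumer indices */
};

/* Stub for the single physical NIC's driver; a real hypervisor
 * would hand the frame to hardware here. */
static void phys_nic_send(const struct frame *f)
{
    printf("phys NIC: sent %zu-byte frame from vNIC %02x:%02x\n",
           f->len, f->data[6], f->data[7]);
}

/* The software switch loop: the host CPU copies and forwards every
 * frame from every virtual NIC -- the per-packet overhead the
 * article describes. */
static void soft_switch_poll(struct vnic *vnics, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        struct vnic *v = &vnics[i];
        while (v->tail != v->head) {            /* drain pending TX */
            struct frame *f = &v->tx_ring[v->tail % RING_DEPTH];
            /* Rewrite the source MAC so each virtual NIC's traffic
             * stays distinguishable on the shared physical port. */
            memcpy(&f->data[6], v->mac, 6);
            phys_nic_send(f);
            v->tail++;
        }
    }
}

int main(void)
{
    struct vnic vnics[2] = {
        { .mac = {0x02, 0x00, 0x00, 0x00, 0x00, 0x01} },
        { .mac = {0x02, 0x00, 0x00, 0x00, 0x00, 0x02} },
    };
    /* Each guest SI queues one 64-byte frame on its virtual NIC. */
    for (int i = 0; i < 2; i++) {
        vnics[i].tx_ring[vnics[i].head % RING_DEPTH].len = 64;
        vnics[i].head++;
    }
    soft_switch_poll(vnics, 2);
    return 0;
}
```

Because every frame crosses this software switch loop, the per-packet cost scales with the number of virtual NICs and their traffic; this is exactly the kind of work that hardware-assisted I/O sharing aims to take off the host CPU.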
