
Network CPU Vendors Expand Horizons

In a recent column I predicted that the dominant paradigm for the new net-centric computing environment would be some type of data flow or I/O architecture. Since then, I've seen indications that the shift is indeed in that direction.

One place the shift can be observed is deep down in the information superhighway superstructure, in the network routers and datacom/telecom switches of the wide area network (WAN). The new paradigm has taken physical form in a cornucopia of new embedded network processors. These highly parallel, multiprocessor, dataflow-oriented designs have come into existence in response to major communications vendors rushing to build as much high-bandwidth connectivity as quickly as possible.

When I wrote that earlier column correcting Sun's slogan (the network is not a computer, it's a data flow processor), it was unclear to me where and when the breakout into various other market segments would occur, and why. Which markets would take advantage of the new architectures? And, in response to what particular market dynamics and technology needs?

The answers are beginning to emerge.

When? Now.

Where? Initially, right there on the other side of the firewall where the network processor vendors have been targeting their wares: inside, rather than outside, the server-based Internet Data Centers (IDCs) that are being built as fast as possible just to handle current traffic loads. The types of uses and the number of users are already escalating to the point that managers of IDCs are concerned about where they are going to get the I/O bandwidth into and out of the IDCs and among the various elements in the server clusters and system area networks.

Increasing the pressure on the IDC is a move away from distributed desktop computing back to the traditional client/server model, with centralized corporate servers in charge and desktops and laptops containing a minimum of storage and limited local functionality. And as megabit/s broadband filters down to the average consumer, Web services, in which the user device is simply a terminal while most functions and storage are provided as services out on the Web, are only adding to the strain on the current IDC infrastructure.

Inside the clusters of servers, the storage area networks (SANs), and the network-attached storage (NAS) sites, there has been a shift to high-bandwidth 1- to 10-Gbit/s packet-based point-to-point switched-fabric interconnects using a variety of schemes, including Fibre Channel, SCSI, iSCSI, serialized PCI-X, and now InfiniBand. They are being deployed to move and re-allocate storage and compute resources within the IDC, both locally and among geographically distributed clusters of clusters, clusters of NAS systems and SANs, and clusters of IDCs.

Since I wrote that first column on the subject, I have had a chance to discuss this trend with Charlie Cheng, president and CEO of Lexra, Inc. (San Jose, Calif.). Like many other network processor vendors, he has been looking for alternative opportunities while waiting for the general network marketplace to gain some traction and start growing again. Even in a down or flat wide area network market, he believes that within the data center there will be a shift from traditional sequentially-oriented RISC architectures to the much more highly parallelized network processors used in the data and control planes of the switches and routers in the external WAN environment.

Unlike the roughly linear growth of servers and storage on the inter-network outside, the number of switches needed inside the IDC will grow non-linearly as the external network it serves grows: not necessarily in terms of bandwidth, but in terms of the number of users, the number of accesses, the response times those users and devices expect, and the kinds of services required from the IDCs.

The reason for such nonlinear growth has to do with the absolute requirement within an IDC's NAS/SAN clusters to eliminate any single point of failure. For reliable access to data, the pure switched-fabric approach used within the IDC requires that every server and server cluster be connected to every storage device along two paths, so that the failure of any one device will not bring down the internal IDC network. In the past, when typical corporations, Internet service providers, and application service providers required only a few servers and a few storage devices, this was not much of a problem. But now, as the external inter-network and the number of servers and storage devices grow, the switched fabric becomes far more complicated, with multiple layers of 8- or 16-port fabric switches needed to avoid any single point of failure.
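To make that scaling pressure concrete, here is a rough back-of-the-envelope sketch in Python. It is purely illustrative: the 16-port switches, the folded-Clos-style tiering, and the two-independent-fabrics redundancy model are my assumptions, not figures from Cheng or any vendor.

```python
import math

def fabric_switches(endpoints: int, ports: int = 16) -> int:
    """Rough count of switches in one multi-tier fabric built from
    small fixed-port switches (a crude folded-Clos approximation)."""
    half = ports // 2              # half of each switch's ports face downward
    total, level = 0, endpoints
    while level > ports:           # a single switch can't tie this tier together
        tier = math.ceil(level / half)
        total += tier
        level = tier               # the new tier itself must be interconnected
    return total + 1               # one top-of-fabric switch

for n in (8, 32, 128, 512):
    # Two fully independent fabrics give every server-to-storage pair
    # two disjoint paths, so the whole switch count doubles.
    print(f"{n:4d} endpoints -> ~{2 * fabric_switches(n):3d} switches")
```

The jumps come at tier boundaries: as soon as one layer of switches can no longer interconnect everything below it, an entire new layer appears, which is exactly the kind of step-wise, faster-than-linear growth IDC managers are facing.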

Also accelerating the number of switches required in an IDC — as well as their intelligence and their ability to process packets in an extremely parallelized manner — is the drive among managers of such facilities away from manual methods of shifting data around the IDC internally (in response to external I/O loads) toward the holy grail of storage: virtualization.

In the virtualization framework, what IT managers are looking for is a solution in which much of the mechanics of managing data flow resources is buried in the infrastructure of the system. “But this is clearly far beyond what the dumb switches that are currently used can handle, and it will test the resources of smarter switches using standard RISC design techniques,” said Cheng. However, it is well within the capabilities of the devices chip builders have been manufacturing and that WAN switch and router providers such as Cisco Systems Inc. have been building into their systems.

In the past, general-purpose RISC vendors had Moore's Law on their side, allowing them to stay ahead of the demand curve. In communications, according to Cheng, that law has run head-on into Shannon's Law (capacity will quadruple every year), forcing a shift to an entirely new computing paradigm to keep up. “In WAN switches and routers, network providers ran into the brick wall at about OC-3,” he said, “and I expect SAN/NAS to run into the same barrier at 1 to 2 Gbps.”
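A quick bit of arithmetic, illustrative only, makes the mismatch concrete: compute that doubles every 18 months falls hopelessly behind capacity that quadruples every year.

```python
# Compare Moore's Law compute growth (~2x every 18 months) with the
# quadrupling-per-year capacity growth Cheng cites. Both start at 1x.
for years in range(6):
    compute = 2 ** (years / 1.5)   # doubles every 18 months
    capacity = 4 ** years          # quadruples every year
    print(f"year {years}: compute x{compute:6.1f}   capacity x{capacity:6d}")
```

After five years, compute has grown roughly tenfold while capacity is up a thousandfold, and no amount of sequential clock-rate scaling closes that gap, hence the turn to parallel, dataflow-oriented silicon.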

Even though chip designers have solved that problem in the WAN arena and are moving on to much higher OC-768 (40 Gbps) data rates, the data flow problems within the IDC will not be solved by simply reusing the same kind of NPs designed for the WAN. “Network processors have been able to solve the data flow problems in WANs because we were able to analyze the traffic and the environment and come up with specific building blocks,” he said, “and I have no doubt that the same methodologies and technology solutions can be applied to IDCs and network-attached storage (NAS) or storage area networks (SANs).”

At the data plane level where processing is all about moving data packets in and out and forward to their destination, the processing solution will be very similar. It is in the control plane where work still needs to be done. In wide area networks, processors in the control plane are concerned with such specialized functions as verification, classification, modification, compression, decompression, encryption, decryption and traffic shaping. “In the IDC/SAN environment, some of these functions will be the same and some slightly different, as well as some that will require totally new functional blocks,” Cheng said. “But the base architecture has been defined and it is now only a matter of modifying it to fit the specific requirements of the new Gbps SAN/IDC environment.”
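To picture the split Cheng describes, here is a minimal sketch of the pipeline idea in Python. Real network processors implement these stages as parallel hardware blocks; the stage names and the iSCSI-port classification rule below are my own illustrative choices, not any vendor's actual block list.

```python
from typing import Callable, List

Packet = dict                         # stand-in for a real packet structure
Stage = Callable[[Packet], Packet]

def make_pipeline(stages: List[Stage]) -> Stage:
    """Compose processing stages into a single packet path."""
    def run(pkt: Packet) -> Packet:
        for stage in stages:
            pkt = stage(pkt)
        return pkt
    return run

def classify(pkt: Packet) -> Packet:
    # iSCSI traffic (TCP port 3260) gets tagged as storage traffic.
    pkt["class"] = "storage" if pkt.get("dst_port") == 3260 else "other"
    return pkt

def shape(pkt: Packet) -> Packet:
    # Traffic shaping: storage packets get the highest priority.
    pkt["priority"] = 7 if pkt["class"] == "storage" else 0
    return pkt

# A WAN pipeline and an IDC/SAN pipeline can share most stages and swap
# in the few that differ, which is the reuse Cheng is describing.
san_pipeline = make_pipeline([classify, shape])
print(san_pipeline({"dst_port": 3260, "payload": b"..."}))
```

The point of the composition is that moving from the WAN to the SAN/IDC environment means swapping or adding a few stages, not redesigning the base architecture.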

The emergence of such a new market for network processors will have at least two consequences. Short term, it will give CPU vendors an alternate customer base for their advanced designs while the larger WAN market in switches and routers picks up. The market for advanced switches inside the IDC is estimated to grow from just $236 million in 1999 to as much as $2.8 billion in 2003. Longer term, when the WAN market picks up again the combination of the two will result in higher volumes, which in turn will lead to lower costs for the silicon. This will make it attractive for other segments of the computer market, particularly some of the more traditional embedded segments, to look at this new architectural paradigm and see where it might fit.

It will be interesting to see where this new data flow paradigm pops up next: industrial control, servers, embedded Internet devices, or — surprise, surprise — in the desktop?

Do you think that these new network processors will find a place in the Internet Data Center's switched-fabric connected servers and storage devices? Does this new architectural paradigm, or rather a reformulation of an older data flow approach, have a place anywhere else? Let me know what you think.

Bernard Cole is the managing editor for embedded design and net-centric computing at EE Times. He welcomes contact and can be reached at 520-525-9087.
