Defining A Network API

As I've pointed out in earlier columns, network processing is in turmoil over which languages, operating systems, and programming methodologies are most appropriate. Beyond these issues, one essential factor that will dictate the direction these elements take is the nature of the common application programming interface (API) that will have to emerge.

At recent network processing conferences, that subject has been much on the minds of both CPU vendors and the embedded systems developers trying to implement router/switch designs with the new architectures. Almost everyone grudgingly agrees that what is needed is a common API that lets programmers write to a higher level of abstraction rather than wrestle with all the details of the underlying architectures.

The diversity of processor architectures was bad enough in the traditional embedded RISC and CISC environments. But the one facing developers in the network-processing milieu is many times more confusing: diverse CPU architectures, several multiprocessor schemes, and different ways of organizing and partitioning the various networking operations among these devices.

As the nature of the programming becomes more complex, this common networking API is being considered not necessarily as a way to avoid working at the hardware level, but as a way to provide a common set of procedures and definitions across all environments.

Of course, common APIs are not new. In the desktop world, there is the ubiquitous Windows API. In the embedded world, the programming environments supplied with operating systems such as VxWorks, with its Tornado toolset, provide such a service, among many other things. In the server world there is the OpenMP API, which supports multi-platform, shared-memory parallel programming in C/C++ and Fortran on all architectures, including Unix and Windows NT platforms.
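
To make concrete what such an API buys the programmer, here is a minimal OpenMP sketch in C (the arrays and arithmetic are filler of my own invention): a single directive parallelizes the loop, and the same source compiles unchanged on any platform with an OpenMP-aware compiler.

```c
#include <stdio.h>
#include <omp.h>            /* OpenMP runtime library */

#define N 1000000

int main(void)
{
    static double a[N], b[N], c[N];

    /* One directive is the entire parallel-programming "API" here:
       the runtime splits the loop iterations across the available
       processors, whatever the underlying architecture. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + 2.0 * b[i];

    printf("computed %d elements on up to %d threads\n",
           N, omp_get_max_threads());
    return 0;
}
```

That degree of abstraction, across shared-memory machines from many vendors, is roughly what network-processor programmers are asking for.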

Usually, though, such APIs evolve (1) when a technology has reached a certain level of maturity; (2) when agreement on methodologies and procedures is already beginning to emerge; or (3) when just one or a few companies begin to dominate with a de facto standard. One of the benefits of an API is that it usually simplifies programming, setting off an explosion of development that makes a market grow rapidly and coherently.

None of these conditions holds yet in network processing. Even so, everyone tells me that it is becoming critically important to accelerate development of a common API, at a point much earlier than it might happen naturally.

Several companies and industry groups, among them the Network Processing Forum, are working in that direction. But the job ahead seems almost insurmountable.

At the hardware level, both the number of architectural choices and the number of functions involved in efficiently moving data packets into, through, and out of switches and routers are daunting. Adding to the complexity are the different network market segments in which these devices are used. They are employed at the network core, in the high-speed routers and switches that move data packets around the Internet at 1-, 10-, and 40-Gbps rates, and at the edge, in devices and switches that take this raw data flow, add services, redirect traffic, and provision the packets appropriately for the end systems at which they are targeted.

Inside the switches and routers, a complex hierarchy of hardware functions has evolved around IP routing and switching: framing, classification, modification, encryption, decryption, compression, and traffic queuing. There are also a variety of ancillary functions: segmentation and reassembly, flow control, header insertion and extraction, table lookup and maintenance, accounting and billing, load balancing and filtering, and statistics gathering.
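
To make the division of labor concrete, here is a rough sketch in C of a data-plane fast path stepping through those functions in order. Every name here (pkt_t, classify, fast_path, and so on) is hypothetical, invented for illustration rather than taken from any real network-processor API, and the stage bodies are trivial stubs.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical packet descriptor; field names are illustrative only. */
typedef struct {
    uint8_t  data[64];   /* frame bytes (truncated for the sketch)  */
    size_t   len;        /* frame length                            */
    uint32_t flow_id;    /* filled in by classification             */
    unsigned queue;      /* egress queue chosen by traffic queuing  */
} pkt_t;

/* Stub stages standing in for the hardware functions named above. */
static int  classify(pkt_t *p)       { p->flow_id = p->data[0]; return 0; }
static void modify_headers(pkt_t *p) { p->data[1]--; }  /* e.g., a TTL-like field */
static void encrypt(pkt_t *p)        { (void)p; }       /* crypto engine would run here */
static void enqueue(pkt_t *p)        { p->queue = p->flow_id % 4; }

/* Fast-path loop: the order mirrors the hierarchy described in the text. */
static void fast_path(pkt_t *p)
{
    if (classify(p) < 0)   /* lookup miss: drop, or punt to the control plane */
        return;
    modify_headers(p);
    encrypt(p);
    enqueue(p);
}

int main(void)
{
    pkt_t p = { .data = {7, 64}, .len = 2 };
    fast_path(&p);
    printf("flow %u -> queue %u\n", (unsigned)p.flow_id, p.queue);
    return 0;
}
```

On real silicon each of those stubs would be a hardware engine or microcoded block, which is exactly why vendors' instincts about where each function belongs diverge.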

To provide a coherent framework for structuring these operations, these functions are usually segmented, on the hardware side at least, into data-plane, control-plane, and management-plane functions. But that's where the commonality seems to end.

Depending on whom I talk to, the same function can be implemented in entirely different planes, according to the company, the architecture it has chosen, and the network application being targeted. Take encryption and decryption, for example. In some architectures and data/control-plane definitions, these functions sit in the data plane, either as part of a security pre- or post-processor or as a coprocessor operating in parallel with the data-flow processor. In other architectures, such security functions are in the control plane.

A number of common programming environments have been proposed. In addition to the Common Programming Interface Forum (CPIX) and Common Switch Interface Consortium (CSIX) specifications (the two groups have since merged), there are a number of proprietary APIs: those promoted by network-processing software tool vendors such as Teja, Level 7, and Virata, and by hardware vendors such as Intel, IBM, Motorola, and about a dozen startups.

But they are either hardware-specific or target just one aspect of the problem: the interface between the framer and the network processing element (NPE) in the data plane; between the fabric and the NPEs; between the NPEs and the memory and associated coprocessing elements; or between the control plane and the data plane.
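
To see why covering one boundary at a time falls short, consider a rough C sketch in which each of those four interfaces is modeled as its own function table. All the type and function names are hypothetical; none comes from CSIX, CPIX, or any vendor's actual API.

```c
#include <stdint.h>
#include <stddef.h>

struct pkt;                  /* opaque packet descriptor */
typedef struct pkt pkt_t;

/* 1. Framer <-> NPE: raw frames entering and leaving the data plane. */
struct framer_if {
    int (*rx_frame)(pkt_t *out);
    int (*tx_frame)(const pkt_t *in);
};

/* 2. Fabric <-> NPE: cell or segment transfer across the switch fabric. */
struct fabric_if {
    int (*send_cell)(const void *cell, size_t len, unsigned port);
    int (*recv_cell)(void *cell, size_t max, unsigned *port);
};

/* 3. NPE <-> memory and coprocessors: table lookups, crypto offload. */
struct coproc_if {
    int (*table_lookup)(uint32_t key, uint32_t *result);
    int (*crypto_op)(pkt_t *p, int encrypt);
};

/* 4. Control plane <-> data plane: route updates, statistics readback. */
struct ctrl_if {
    int (*add_route)(uint32_t prefix, uint8_t prefix_len, unsigned next_hop);
    int (*read_counters)(unsigned port, uint64_t *pkts, uint64_t *bytes);
};
```

A finished router or switch needs all four boundaries covered; each proposed API standardizes only one of them, leaving the rest vendor-specific.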

Efforts in organizations such as the Network Processing Forum have shifted away from trying to come up with one common API environment for all these operations. Instead, what is emerging is a much more nuanced and layered approach.

First of all, there will probably be at least two APIs: a horizontal one to deal with the functions common to the data plane, and a vertical one for the control plane. But to allow for the fact that different architectural solutions and application environments may require a particular API subset in the vertical rather than the horizontal, or vice versa, the hybrid horizontal/vertical API has very fine granularity. It incorporates specific API subsets for services, packet handling, namespace, classification, direction, modification, and traffic management. Moreover, standardized mechanisms for interaction are being defined so that functions can move back and forth between the vertical and horizontal APIs.
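
A hedged sketch of what that fine granularity might look like in C follows. Each functional subset gets its own identifier and function table, and a registration step lets an implementation declare which plane a subset lives in. This is one plausible, and entirely invented, reading of the "move back and forth" mechanism described above; none of the names comes from the Network Processing Forum's actual work.

```c
#include <stddef.h>

/* One identifier per functional subset named in the text. */
enum api_subset {
    API_SERVICES, API_PACKET_HANDLING, API_NAMESPACE, API_CLASSIFICATION,
    API_DIRECTION, API_MODIFICATION, API_TRAFFIC_MGMT
};

/* Horizontal = data plane, vertical = control plane. */
enum api_plane { PLANE_HORIZONTAL, PLANE_VERTICAL };

/* A per-subset binding: the implementation declares, at init time,
   which plane each subset lives in, so the same subset can sit in
   the horizontal API on one architecture and in the vertical on
   another without changing application code. */
struct subset_binding {
    enum api_subset subset;
    enum api_plane  plane;
    const void     *ops;   /* subset-specific function table */
};

/* Hypothetical registration and lookup entry points. */
int         api_register(const struct subset_binding *bindings, size_t count);
const void *api_lookup(enum api_subset subset, enum api_plane *plane_out);
```

Under a scheme like this, portable code asks for API_CLASSIFICATION by name and never hard-codes whether classification happens in the horizontal or the vertical API.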

Maybe the description I have come up with, based on a number of conversations with those of you involved in defining such APIs, is overly complicated. There should be a much simpler way of defining and describing the problem and the proposed solution.

If not, the apparent complexity makes me uncomfortable and reminds me of what happened to the Ptolemaic description of the universe when Copernicus and Kepler came along. Ptolemy's elaborate theory of the celestial spheres, with the earth at the center of the universe, worked all right on paper but was so complex that every new observational result from astronomers such as Tycho Brahe led to more complications. The result was that everyone who had anything to do with the real world, that is, everyone except the theologians, simply ignored the theory when it came to practical navigation. In the case of the APIs being proposed for network processing, maybe complexity and simplicity are entirely in the eye of the beholder.

If you know a simpler way of describing the nature of the environment and the API that is evolving, I'd like to hear about it. I still have a lot of other unanswered questions. Are efforts to come up with a common API just too early in the evolution of the network processor? What will happen if one does not emerge? Should we wait for a knock-down, drag-out fight among the various competitors and adopt the winner's approach as the de facto standard?

Or does someone out there have the great simplifying insight? We need the network equivalent of someone like Kepler, who took Tycho Brahe's astronomical measurements and added the insight of elliptical orbits, with all planets, including the earth, circling the sun. That great simplifying concept not only made the heliocentric theory acceptable but also gave the ocean navigators of the day useful mathematical and conceptual tools.

Bernard Cole is the managing editor for embedded design and net-centric computing at EE Times. He welcomes contact. You can reach him at 520-525-9087.
