
Multicore networking in Linux user space with no performance overhead

Dronamraju Subramanyam, John Rekesh and Srini Addepalli, Freescale Semiconductor

February 26, 2012


Multicore SoC Software Programming Methods
Two models are prevalent in software programming for packet processing on the cores. One is pipeline processing where functionality is split across cores and packets are processed in pipeline fashion from one set of cores to the next as shown in Figure 2 below.

Figure 2. The pipeline processing model splits functionality across cores; packets move through the pipeline from one set of cores to the next.
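
To make the pipeline model concrete, the sketch below models two stages, classification and forwarding, as POSIX threads connected by a software queue. This is only an illustration of the structure: a real data plane on a multicore SoC would pin each stage to dedicated cores and typically hand packets between stages through hardware queues rather than a mutex-protected ring, and the packet, stage and queue names here are placeholders.

    /* Pipeline model sketch: stage 1 (classify) passes packets to
     * stage 2 (forward) through a bounded software queue. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define QUEUE_DEPTH 256

    struct pkt { int id; };                 /* placeholder packet descriptor */

    struct stage_queue {                    /* bounded FIFO between stages */
        struct pkt *ring[QUEUE_DEPTH];
        int head, tail, count;
        pthread_mutex_t lock;
        pthread_cond_t not_empty, not_full;
    };

    static void queue_init(struct stage_queue *q)
    {
        q->head = q->tail = q->count = 0;
        pthread_mutex_init(&q->lock, NULL);
        pthread_cond_init(&q->not_empty, NULL);
        pthread_cond_init(&q->not_full, NULL);
    }

    static void enqueue(struct stage_queue *q, struct pkt *p)
    {
        pthread_mutex_lock(&q->lock);
        while (q->count == QUEUE_DEPTH)
            pthread_cond_wait(&q->not_full, &q->lock);
        q->ring[q->tail] = p;
        q->tail = (q->tail + 1) % QUEUE_DEPTH;
        q->count++;
        pthread_cond_signal(&q->not_empty);
        pthread_mutex_unlock(&q->lock);
    }

    static struct pkt *dequeue(struct stage_queue *q)
    {
        pthread_mutex_lock(&q->lock);
        while (q->count == 0)
            pthread_cond_wait(&q->not_empty, &q->lock);
        struct pkt *p = q->ring[q->head];
        q->head = (q->head + 1) % QUEUE_DEPTH;
        q->count--;
        pthread_cond_signal(&q->not_full);
        pthread_mutex_unlock(&q->lock);
        return p;
    }

    static struct stage_queue classify_to_forward;   /* stage 1 -> stage 2 */

    /* Stage 1: one set of cores receives and classifies packets. */
    static void *classify_stage(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 8; i++) {
            struct pkt *p = malloc(sizeof(*p));
            p->id = i;                      /* stand-in for receive + classify */
            enqueue(&classify_to_forward, p);
        }
        return NULL;
    }

    /* Stage 2: a different set of cores performs forwarding. */
    static void *forward_stage(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 8; i++) {
            struct pkt *p = dequeue(&classify_to_forward);
            printf("forwarded packet %d\n", p->id);  /* stand-in for transmit */
            free(p);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        queue_init(&classify_to_forward);
        pthread_create(&t1, NULL, classify_stage, NULL);
        pthread_create(&t2, NULL, forward_stage, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }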

The more popular model is a run-to-completion model, where each core or a set of cores executes the same processing on a packet as shown in Figure 3 below.


Figure 3: In the run-to-completion network execution model, each core or a set of cores executes the same processing on a packet.
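
In code, the run-to-completion model reduces to a single loop that every data-plane core executes. The sketch below is a minimal illustration; rx_packet(), process_packet() and tx_packet() are placeholder names standing in for the real receive, application and transmit paths, not actual APIs.

    struct pkt { int id; };                   /* placeholder packet descriptor */

    /* Placeholder hooks: a real data plane would pull packets from a hardware
     * queue, run the complete protocol/application processing, and hand the
     * result to a transmit queue. */
    static struct pkt *rx_packet(void)        { static struct pkt p; return &p; }
    static void process_packet(struct pkt *p) { (void)p; }
    static void tx_packet(struct pkt *p)      { (void)p; }

    /* The same loop runs on every data-plane core: each packet is carried
     * through all processing steps to completion before the next packet
     * is picked up. */
    static void worker_loop(void)
    {
        for (;;) {
            struct pkt *p = rx_packet();
            process_packet(p);
            tx_packet(p);
        }
    }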

In the run-to-completion model, effective load balancing of packets across cores is important. It is also important to preserve packet ordering in flows, as network devices are not expected to cause re-ordering of packets in a flow.

This means that the packet scheduling unit must be intelligent enough to ensure that packets of a given flow are not dispatched to more than one core at the same time.

Otherwise, the cores could complete processing of those packets at slightly different times and send them out in a different order. Order-preservation mechanisms are therefore an important part of the hardware scheduling unit, and run-to-completion applications that are sensitive to flow order can leverage them.
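
A common way to express this idea in software, when a hardware scheduler is not doing it for you, is to hash each packet's 5-tuple and use the hash to pick a per-core queue: all packets of a flow land on the same core, which balances load across flows while preserving per-flow order. The sketch below illustrates the idea only; the FNV-1a hash and the queue count are arbitrary choices for illustration, not what any particular hardware scheduler implements.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_WORKER_QUEUES 4          /* assumed: one ingress queue per core */

    /* Classic 5-tuple identifying a flow. */
    struct flow_key {
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
        uint8_t  proto;
    };

    /* FNV-1a over the individual fields (avoids hashing struct padding). */
    static uint32_t fnv1a(uint32_t h, const void *data, size_t len)
    {
        const uint8_t *b = data;
        for (size_t i = 0; i < len; i++) { h ^= b[i]; h *= 16777619u; }
        return h;
    }

    static uint32_t flow_hash(const struct flow_key *k)
    {
        uint32_t h = 2166136261u;
        h = fnv1a(h, &k->src_ip,   sizeof k->src_ip);
        h = fnv1a(h, &k->dst_ip,   sizeof k->dst_ip);
        h = fnv1a(h, &k->src_port, sizeof k->src_port);
        h = fnv1a(h, &k->dst_port, sizeof k->dst_port);
        h = fnv1a(h, &k->proto,    sizeof k->proto);
        return h;
    }

    /* All packets of a flow map to the same queue, so only one core ever
     * processes that flow at a time and its packet order is preserved. */
    static unsigned pick_queue(const struct flow_key *k)
    {
        return flow_hash(k) % NUM_WORKER_QUEUES;
    }

    int main(void)
    {
        struct flow_key k = { .src_ip = 0x0a000001, .dst_ip = 0xc0a80101,
                              .src_port = 51000, .dst_port = 443, .proto = 6 };
        printf("flow -> worker queue %u\n", pick_queue(&k));
        return 0;
    }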

It is often possible to combine pipelining with run-to-completion, where one group of cores is dedicated to certain application functions, another group to a different set of functions, and so on.

Within a group, all cores perform the same application processing on every packet; once that processing is complete, the packet is handed off to the next group of cores, which implements a different set of application functions.

High-performance Data Plane Processing
The software architecture of networking equipment typically comprises data, control, and management planes. The data plane, also called the fast path, represents packet flows that have been validated and admitted into the system, and it avoids expensive per-packet policy processing.

Packets representing flows in the data plane pass through an efficient, optimized processing path that may include hardware accelerators. For example, a web download of a music file may move through the data plane of a device in the network path, after the device has established the flow as valid by processing the initial packets of that download in the control plane.

The control plane checks and enforces policy decisions that can result in establishing, removing or modifying flows in the data plane. It runs protocols or portions of protocols that deal with these aspects.
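
The split can be pictured as a flow table consulted on the fast path, with a miss punting the packet to the control plane for policy processing. The sketch below is a simplified, software-only illustration under assumed names (flow_table, control_plane_admit, and so on); real equipment would back the lookup with hardware classification and keep far richer per-flow state.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define FLOW_TABLE_SIZE 1024

    struct flow_key { uint32_t src_ip, dst_ip; uint16_t src_port, dst_port; uint8_t proto; };

    struct flow_entry {
        bool            in_use;
        struct flow_key key;
        uint32_t        next_hop;            /* illustrative per-flow state */
    };

    static struct flow_entry flow_table[FLOW_TABLE_SIZE];

    static bool key_equal(const struct flow_key *a, const struct flow_key *b)
    {
        return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip &&
               a->src_port == b->src_port && a->dst_port == b->dst_port &&
               a->proto == b->proto;
    }

    static unsigned key_slot(const struct flow_key *k)
    {
        /* Toy hash for illustration only. */
        return (k->src_ip ^ k->dst_ip ^ k->src_port ^ k->dst_port ^ k->proto)
               % FLOW_TABLE_SIZE;
    }

    /* Control plane (slow path): run policy checks on the first packet of a
     * flow and decide whether to admit it.  Always admits here. */
    static bool control_plane_admit(const struct flow_key *k, uint32_t *next_hop)
    {
        (void)k;
        *next_hop = 1;
        return true;
    }

    /* Data plane (fast path): packets of established flows hit the flow table
     * and skip per-packet policy processing entirely. */
    static void handle_packet(const struct flow_key *k)
    {
        struct flow_entry *fe = &flow_table[key_slot(k)];

        if (fe->in_use && key_equal(&fe->key, k)) {
            printf("fast path: forward via next hop %u\n", fe->next_hop);
            return;
        }

        uint32_t nh;
        if (control_plane_admit(k, &nh)) {   /* exception path: punt to control plane */
            fe->in_use = true;               /* install the flow so later packets     */
            fe->key = *k;                    /* stay on the fast path                 */
            fe->next_hop = nh;
            printf("slow path: flow admitted and installed\n");
        }
    }

    int main(void)
    {
        struct flow_key k = { .src_ip = 0x0a000001, .dst_ip = 0x0a000002,
                              .src_port = 40000, .dst_port = 80, .proto = 6 };
        handle_packet(&k);                   /* first packet: slow path  */
        handle_packet(&k);                   /* later packets: fast path */
        return 0;
    }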

The management plane handles configuration of the device, such as installing policies or creating or removing virtual instances. It also manages other operational information and notifications such as device alerts. The rest of the article mainly concentrates on data plane processing.

