Data analysis challenges and best practices in a wireless world

Dan Joe Barry, Napatech

August 05, 2014


Recent technology advances continue to increase the speed of business, and the 100 Gbps era is now upon us. A myriad of analysis challenges will need to be overcome if network management and security application providers hope to remain relevant in this time of accelerated data growth. Doing so will require forward-thinking approaches that scale with connection speeds, deliver and make sense of network data, and accelerate application performance.

When delivered properly, data provides the clarity needed for actionable insights: a network whose traffic is fully visible and understood can be kept flowing smoothly and secure. In this brave new world of data overload, however, managing all of that data is a complex proposition.

Dealing With the Data Deluge
Global data center IP traffic is projected to rise from its current level of 4,000 exabytes per year to almost 8,000 exabytes in 2017, nearly doubling in just three years. Mobile network connection speeds will increase two-fold between 2013 and 2018, and global mobile traffic will increase nearly 11-fold during that period, Cisco projects. Over two-thirds of the world’s mobile data traffic will be video by 2018.

Connectivity speeds will also need to increase due to this rapid growth in data traffic. High-bandwidth applications such as video on demand and high-performance computing, as well as server virtualization and high-speed applications in data centers, will continue to drive adoption of 40 Gbps and 100 Gbps connections.

What the 100G Era Requires
Network equipment manufacturers must reliably increase performance at connection speeds up to 100 Gbps while reducing risk and time-to-market, and they must do so while effectively managing and securing networks and maintaining a varied portfolio of 1, 10, 40 and even 100 Gbps products. Network services are agnostic to connection speed, so analysis will have to be performed to the same standard across speeds ranging from 1 Mbps to 100 Gbps. Below is a list of best practices to ensure that today’s networks can move successfully into the 100G era.

Zero Packet Loss. High-speed solutions must be able to capture network traffic at full line rate, with almost no CPU load on the host server, for all frame sizes. Full line-rate packet capture with zero packet loss, frame buffering and optimal configuration of host buffer sizes removes the bottlenecks that can cause packet loss. It also reliably delivers the analysis data that network management and security solutions demand. Zero-loss packet capture is critical for applications that need to analyze all the network traffic in real time.

PCI interfaces provide a fixed bandwidth for data transfer, which can limit how much data can be moved from the network to the analysis application at any given moment. Frame buffering absorbs data bursts and removes this limitation, allowing frames to be transferred once the burst has passed so that no data is lost. Together with optimal configuration of host buffer sizes, frame buffering is a critical feature for high-speed network analysis.
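The exact mechanism is vendor-specific, but the principle of sizing host buffers to ride out traffic bursts can be illustrated with a generic libpcap capture loop. This is a minimal sketch, not the API of any particular accelerator; the interface name and buffer size are assumptions:

```c
#include <pcap/pcap.h>
#include <stdio.h>

/* Count frames as they arrive; a real application would analyze them. */
static void on_frame(u_char *user, const struct pcap_pkthdr *hdr,
                     const u_char *bytes)
{
    unsigned long *count = (unsigned long *)user;
    (void)hdr; (void)bytes;
    (*count)++;
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    unsigned long frames = 0;

    /* "eth0" is a placeholder interface name. */
    pcap_t *p = pcap_create("eth0", errbuf);
    if (!p) { fprintf(stderr, "pcap_create: %s\n", errbuf); return 1; }

    pcap_set_snaplen(p, 65535);                  /* capture full frames       */
    pcap_set_promisc(p, 1);                      /* see all traffic on link   */
    pcap_set_buffer_size(p, 256 * 1024 * 1024);  /* large host buffer to
                                                    absorb traffic bursts     */
    if (pcap_activate(p) != 0) {
        fprintf(stderr, "pcap_activate: %s\n", pcap_geterr(p));
        return 1;
    }

    /* Capture 1,000,000 frames, then report how many were dropped. */
    pcap_loop(p, 1000000, on_frame, (u_char *)&frames);

    struct pcap_stat st;
    if (pcap_stats(p, &st) == 0)
        printf("received %lu frames, dropped %u\n", frames, st.ps_drop);

    pcap_close(p);
    return 0;
}
```

On a standard NIC this only mitigates, rather than eliminates, packet loss; dedicated capture hardware moves the buffering and data transfer off the host CPU entirely, which is what makes true zero-loss capture possible at full line rate.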

Insight Into Frame Classification. Understanding and insight are required for next-generation network analysis. Frame classification can provide details on the type of network protocols being used. For users who want to monitor network traffic in the most efficient way, it is important to be able to recognize as many protocols as possible, as well as extract information from layer 2-4 network traffic. Header information for the various protocols transported over Ethernet must be made available for analysis. This includes encapsulation and tunneling protocols.
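As a rough illustration of what layer 2-4 classification involves, the sketch below walks an Ethernet frame’s headers by hand. It assumes an untagged IPv4 frame; production classifiers also recognize VLAN and MPLS tags, IPv6, SCTP, tunneling protocols and many others, typically in hardware:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Classify an untagged Ethernet/IPv4 frame by walking its headers. */
static void classify(const uint8_t *frame, size_t len)
{
    if (len < 34)
        return;                                          /* Ethernet + minimal IPv4 */
    uint16_t ethertype = (frame[12] << 8) | frame[13];   /* layer 2                 */
    if (ethertype != 0x0800) {
        printf("non-IPv4 frame, ethertype 0x%04x\n", ethertype);
        return;
    }
    const uint8_t *ip = frame + 14;
    size_t ihl        = (ip[0] & 0x0f) * 4;              /* IPv4 header length      */
    uint8_t proto     = ip[9];                           /* layer 4 protocol        */
    if (len < 14 + ihl + 4)
        return;
    const uint8_t *l4 = ip + ihl;

    if (proto == 6 || proto == 17) {                     /* TCP or UDP              */
        uint16_t sport = (l4[0] << 8) | l4[1];
        uint16_t dport = (l4[2] << 8) | l4[3];
        printf("%s  src port %u  dst port %u\n",
               proto == 6 ? "TCP" : "UDP", sport, dport);
    } else {
        printf("other IPv4 protocol %u\n", proto);
    }
}
```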

Precise Time-Stamping. For many high-speed analysis applications, knowing when something happened, and the amount of delay in the network, is important. Assuring quality of time-sensitive services and transactions is often essential and requires high precision. In 100 Gbps networks, nanosecond precision is essential to assure reliable analysis. At 10 Gbps, an Ethernet frame can be received and transmitted every 67 nanoseconds (a minimum-size 64-byte frame plus its preamble and inter-frame gap occupies 672 bit times on the wire). At 100 Gbps, this time is reduced to 6.7 nanoseconds.

To uniquely identify when a frame is received, nanosecond precision time-stamping is essential. Precise time-stamping of each Ethernet frame allows frames to be merged in the correct order. The result is a significant acceleration of performance as Ethernet frames can now be grouped and analyzed in an order that makes sense for the application and is not restricted by hardware implementations.
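A minimal sketch of how such timestamps are used: frames captured at two different ports or probes can be merged into a single, correctly ordered stream by comparing their capture timestamps. The frame record layout here is hypothetical:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical frame record: a nanosecond capture timestamp plus the port
 * the frame arrived on. */
struct frame_rec {
    uint64_t ts_ns;
    int      port;
};

/* Merge two timestamp-ordered capture streams into one ordered sequence. */
static void merge_streams(const struct frame_rec *a, size_t na,
                          const struct frame_rec *b, size_t nb)
{
    size_t i = 0, j = 0;
    while (i < na || j < nb) {
        const struct frame_rec *next;
        if (j >= nb || (i < na && a[i].ts_ns <= b[j].ts_ns))
            next = &a[i++];
        else
            next = &b[j++];
        printf("port %d  t = %llu ns\n",
               next->port, (unsigned long long)next->ts_ns);
    }
}
```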

Flow analysis. Individual Ethernet frame analysis enables visibility into activity at a single point in the network. Network applications must also be able to examine flows of frames that are transmitted between specific devices (identified by their IP addresses) or even between applications on specific devices (identified, for example, by the protocol and the UDP/TCP/SCTP port numbers used by the application).

Identifying and analyzing flows of data in high-speed networks of up to 100 Gbps makes it possible to gain an overview of what is happening across the network and then control the amount of bandwidth that individual services are using. It also allows for intelligent flow distribution, where frames belonging to the same flow are distributed to up to 32 CPU cores for massively parallel processing.
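A minimal sketch of the idea behind flow-aware distribution: hash the 5-tuple that identifies a flow so that every frame of that flow lands on the same CPU core. The hash function and core count here are illustrative; real accelerators perform this step in hardware:

```c
#include <stddef.h>
#include <stdint.h>

#define NUM_CORES 32   /* distribute flows across up to 32 CPU cores */

/* Illustrative flow key: the classic 5-tuple. */
struct flow_key {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  protocol;            /* TCP, UDP, SCTP, ... */
};

/* FNV-1a over a buffer; applied field by field to avoid hashing padding. */
static uint32_t fnv1a(uint32_t h, const void *data, size_t len)
{
    const uint8_t *p = data;
    while (len--) { h ^= *p++; h *= 16777619u; }
    return h;
}

/* All frames of a flow map to the same core, so per-flow state and frame
 * ordering are preserved while the total load is spread across cores. */
static unsigned pick_core(const struct flow_key *k)
{
    uint32_t h = 2166136261u;
    h = fnv1a(h, &k->src_ip,   sizeof k->src_ip);
    h = fnv1a(h, &k->dst_ip,   sizeof k->dst_ip);
    h = fnv1a(h, &k->src_port, sizeof k->src_port);
    h = fnv1a(h, &k->dst_port, sizeof k->dst_port);
    h = fnv1a(h, &k->protocol, sizeof k->protocol);
    return h % NUM_CORES;
}
```

Keeping all frames of a flow on one core preserves ordering within the flow while still spreading the total analysis load across the available cores.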

Accelerating Analysis Applications. High-speed solutions must provide guaranteed delivery of real-time data with information that allows quick and easy analysis. What will distinguish them is the ability to accelerate the performance of analysis applications. One of the main challenges in analyzing real-time data in high-speed networks is the sheer volume of data, so reducing the amount of data that actually reaches the application, and processing only the frames that need to be analyzed, can often accelerate performance considerably. This can be accomplished through features such as frame and flow filtering, deduplication and slicing.
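As a rough sketch of two of these data-reduction techniques, the code below slices each frame down to its first bytes (enough for the headers) and drops frames whose contents exactly match a recently seen frame. The slice length, window size and hash are illustrative choices, not a vendor implementation:

```c
#include <stddef.h>
#include <stdint.h>

#define SLICE_LEN  128   /* keep only the first 128 bytes (the headers)      */
#define DEDUP_WIN  1024  /* remember the hashes of the last 1024 frames seen */

static uint64_t recent[DEDUP_WIN];
static size_t   next_slot;

/* 64-bit FNV-1a hash of the frame contents. */
static uint64_t frame_hash(const uint8_t *frame, size_t len)
{
    uint64_t h = 14695981039346656037ull;
    while (len--) { h ^= *frame++; h *= 1099511628211ull; }
    return h;
}

/* Returns the number of bytes to hand to the analysis application,
 * or 0 if the frame duplicates a recently seen frame and can be dropped. */
static size_t reduce(const uint8_t *frame, size_t len)
{
    uint64_t h = frame_hash(frame, len);
    for (size_t i = 0; i < DEDUP_WIN; i++)
        if (recent[i] == h)
            return 0;                         /* duplicate: drop            */
    recent[next_slot] = h;
    next_slot = (next_slot + 1) % DEDUP_WIN;
    return len < SLICE_LEN ? len : SLICE_LEN; /* slice: headers are enough  */
}
```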

Acceleration Features. Appliance vendors need to maximize the performance of their analysis applications, so 100 Gbps solutions must provide acceleration features. These features must off-load data processing that is normally performed by the analysis application. Some examples of off-loading features are: intelligent multi-CPU distribution, cache pre-fetch optimization, coloring, filtering and checksum verification. These free up CPU cycles, allowing more analysis to be performed faster.

Assistance With Tunneling. Because networks are often outside of the control of the sender, tunnels have been used to transport information reliably and securely. Tunneling creates challenges because the data to be analyzed is encapsulated in the tunnel payload and must first be extracted before analysis can be performed. This is an extra and costly data processing step. By off-loading recognition of tunnels and extraction of information from tunnels, high-speed solutions can provide a significant acceleration of performance for analysis applications.

This is definitely the case with mobile networks, in which all subscriber Internet traffic funnels through one point in the network: the GPRS Tunneling Protocol (GTP) tunnels between the serving and gateway GPRS support nodes (SGSN and GGSN). Monitoring this interface is crucial for assuring quality of service. Next-generation solutions will open up this interface, providing visibility and insight into the contents of GTP tunnels. Analysis applications can use this capability to test, secure and optimize mobile networks and services.
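A minimal sketch of the extraction step that can be off-loaded: locating the encapsulated IP packet inside a GTPv1-U payload (user-plane GTP runs over UDP port 2152). Extension headers are deliberately not walked in this simplified version:

```c
#include <stddef.h>
#include <stdint.h>

/* Given the UDP payload of a GTPv1-U packet, return a pointer to the
 * encapsulated (inner) IP packet, or NULL if the payload does not look
 * like a G-PDU carrying user data. */
static const uint8_t *gtpu_inner_packet(const uint8_t *gtp, size_t len,
                                        size_t *inner_len)
{
    if (len < 8)
        return NULL;
    uint8_t flags = gtp[0];
    if ((flags >> 5) != 1)                /* GTP version must be 1          */
        return NULL;
    if (gtp[1] != 0xff)                   /* message type 0xff = G-PDU      */
        return NULL;

    size_t hdr = 8;                       /* mandatory GTP-U header         */
    if (flags & 0x07) {                   /* E, S or PN flag set            */
        hdr += 4;                         /* seq. number, N-PDU, next ext.  */
        if (flags & 0x04)                 /* extension headers present:     */
            return NULL;                  /* not walked in this sketch      */
    }
    if (len <= hdr)
        return NULL;

    *inner_len = len - hdr;
    return gtp + hdr;                     /* start of the inner IP packet   */
}
```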

Meeting Tomorrow’s Needs Now
The explosive growth in mobile data traffic, cloud computing, mobility and big data analysis requires network equipment manufacturers to explore solutions that can help them stay one step ahead of the data growth curve. Primary factors in accelerating the network to 100G include:

* PCI-SIG compliant products that will fit into any commercial off-the-shelf server will allow organizations to focus their development efforts on the application, not the hardware.

* A common Application Programming Interface (API) that enables applications to be developed once and used with a broad range of accelerators. This allows combinations of different accelerators with different port speeds to be installed in the same server.

* Reliable hardware platforms for the development of 100 Gbps analysis products. A 100 Gbps accelerator, for example, can intelligently manage the data that is presented for analysis, providing extensive features for managing the type and amount of data. Slicing and filtering of frames and flows, even within GTP and IP-in-IP tunnels, significantly reduces the amount of data. Look for deduplication features that can be extended in analysis software to ensure that only the right data is being examined.

* Software suites that provide data sharing capabilities to enable multiple applications running on the same server to analyze the same data. When combined with intelligent multi-CPU distribution, this allows the right data to be presented to the right analysis application, thus sharing the load. Intelligent features for flow identification, filtering and distribution to up to 32 CPU cores accelerate application performance with extremely low CPU load.

Moving Forward
Innovative approaches are arising to help manage exponential data growth without sacrificing performance. Organizations today need to accelerate network management and security applications, and scale with increasing connectivity speeds, in order to stay ahead of the data growth curve.

Daniel Joseph Barry is VP of Marketing at Napatech and has over 20 years of experience in the IT and telecom industry. Prior to joining Napatech in 2009, Dan Joe was Marketing Director at TPACK, a leading supplier of transport chip solutions to the telecom sector. He has an MBA and a BSc degree in Electronic Engineering from Trinity College Dublin.
