This “Product How-To” article focuses on how to use a certain product in an embedded system and is written by a company representative.
The ever-increasing demand for wireless broadband from mobile data users is constantly forcing telecom standards bodies to look for newer specifications that can deliver multi-megabit throughput and lower latency.
In response, 3GPP developed Long Term Evolution (LTE), which brings with it not only higher throughput but also a flatter architecture with lower latencies and an all-IP infrastructure promising reduced operating expenses.
LTE's packet-optimized radio access technology, along with an Evolved Packet Core (EPC) network suitable for “always on” operation, lowers the cost per bit and co-exists nicely with other existing access technologies.
One key challenge of LTE, however, is how core network components will keep up with massive increases in access link throughput (e.g., seven times HSPA data rates). How evolved core network equipment manufacturers (NEPs) rise to this challenge will be critical in determining subscriber satisfaction with their LTE service experience.
To address this challenge, this article discusses a “best practices” approach for building an ATCA Serving Gateway node, the very backbone of the LTE network user plane.
LTE Network Architecture & the Serving Gateway Application
An LTE network consists of the network elements eNodeB (or Base Station, part of E-UTRAN) and Access Gateway to support control and user plane access for LTE User Equipment (i.e., wireless devices). Access Gateway functionality is provided by the Mobility Management Entity (MME) for the control plane, the Serving Gateway (SGW) for the user plane, and the PDN Gateway for access to the Internet.
EPC nodes are also connected to legacy systems (GERAN and UTRAN) so that LTE systems can co-exist with existing access technologies and facilitate seamless handovers (Figure 1 below).
|Figure 1: Basic LTE Network Architecture|
The SGW terminates user plane access for the eNodeB, routes user plane traffic, performs accounting and monitoring of user data, and acts as a local mobility anchor point for handovers. Below is a summary of SGW functionality:
* Packet routing & transfer functions
* Uplink & Downlink charging per UE, PDN & QCI
* Accounting on user & QCI granularity for inter-operator charging
* Setting the end marker in the transmission to assist the eNodeB reordering function
* Transport-level packet marking in the uplink & downlink based on the QCI of the associated EPS bearer
* Lawful intercept
* User session establishment & management
* ECM-IDLE mode DL packet buffering & triggering of network-triggered service requests
* Mobility management (mobility anchor for inter-eNodeB handovers & inter-3GPP handovers)
* Hardware system management
* Configuration management
* Call data logging
* Performance statistics
* Fault management
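The transport-level packet marking function listed above can be illustrated with a minimal sketch: the SGW maps the QCI of each EPS bearer to an IP DSCP code point for the transport network. The QCI-to-DSCP table below is purely an assumption for demonstration; real deployments use operator-defined mappings.

```python
# Illustrative sketch only: this QCI-to-DSCP mapping is an assumption
# for demonstration, not a standardized table.
QCI_TO_DSCP = {
    1: 46,  # conversational voice  -> EF
    2: 34,  # conversational video  -> AF41
    5: 40,  # IMS signalling        -> CS5
    8: 0,   # buffered streaming    -> best effort
    9: 0,   # default bearer        -> best effort
}

def dscp_for_qci(qci: int) -> int:
    """Return the DSCP to mark on uplink/downlink packets for a bearer
    with the given QCI, falling back to best effort for unknown QCIs."""
    return QCI_TO_DSCP.get(qci, 0)
```

In practice the marking is applied per packet as it is re-encapsulated toward the transport network, so the lookup must be cheap, which is why a static table keyed by QCI is the common design.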
An SGW platform requires many capabilities including:
1) Optimization for Packet (Bearer Plane) Processing: Since the SGW is designed to handle user plane functionality for higher bandwidth systems, the platform on which the SGW resides should be optimized for packet processing.
2) Deep Packet Inspection (DPI): An SGW requires DPI capability from the platform to support lawful intercept, policy control, and QoS enforcement to manage access to services and available bit rate during times of congestion. DPI can also support functions such as targeted advertising.
3) Carrier-Grade Reliability: A field-proven, highly available architecture is needed to eliminate data/control PDU loss with no switch-over delay.
4) Computing Power: An SGW platform needs substantial compute power for the control plane signaling between the MME, SGSN, and PGW.
5) Scalability: An SGW platform should be scalable so that capacity may be increased easily, and robust enough to handle high load conditions. In addition, it should be possible to co-locate functions in a single shelf (i.e., a single chassis may contain both MME and SGW functions).
6) Quality of Service (QoS): QoS can be included as part of the DPI service control and enforcement functions.
7) IP Security, Threat Management & Intrusion Detection/Prevention: As a node in an all-IP LTE network, the SGW requires these security-related functionalities.
Because mobile users are expected to hold carriers responsible for security breaches (much more so than users on wireline broadband connections), it is vital for wireless operators to ensure that subscribers are protected from malware reaching their handsets.
Building an ATCA Serving Gateway
There is a growing requirement from NEPs for pre-integrated, system-level products that combine standards-based ATCA hardware, system and element management software, and high availability (HA) middleware.
Pre-integration saves developers significant time and engineering costs, thereby allowing NEPs to get to market much faster than they could have even a few years ago, often in less than 12 months.
Purchasing pre-integrated platforms also facilitates a common infrastructure approach which allows NEPs to extend their solutions to include multiple network elements.
In fact, in response to the current economic downturn, the trend now is for NEPs to outsource even more of what they formerly did themselves. It is common today to see requests for pre-integrated application software and higher-layer management tools, including complete unified management software and Element Management Systems (EMS).
|Figure 2: Serving Gateway Interfaces|
In terms of applications, the requirements range from protocol stacks pre-integrated with HA middleware to complete functional elements such as the SGW application modules shown in Figure 2 above.
What's more, NEPs and service providers are starting to recognize that Deep Packet Inspection (DPI) may be applied to many functions within SGW nodes to help increase Average Revenue per User (ARPU) through capabilities such as tiered services, congestion management, and security.
As a result, DPI is a rapidly growing area for which wireless developers are starting to request significant hardware, software, and integration support. For example, DPI modules that can detect application protocols and apply traffic shaping enable NEPs to integrate sophisticated policy control and enforcement directly into the SGW or PDN Gateway, which is often preferred over delivering such functionality with separate devices. Figure 3 below shows a typical ATCA system-level solution for an SGW.
|Figure 3: ATCA-based Serving Gateway Solution Stack|
Fault-Tolerant ATCA Platform Architecture
Figure 4 below shows the functional connectivity of an ATCA-based SGW built using Continuous Computing's FlexTCA Wireless pre-integrated platform. With a standards-based, bladed approach like ATCA it is relatively straightforward for NEPs to scale the solution from small to medium to large, with common platform sizes being 2-slot (2U or 3U), 6-slot (5U or 6U), and 14-slot (12U).
|Figure 4: Serving Gateway Fault-Tolerant Architecture|
A carrier-class ATCA system (Figure 5 below) consists of compute blades, packet processing blades, switches, system controllers, and shelf manager blades. Redundancy is employed across all platform components to avoid any single point of failure, and payload blades support 1+1 (active / standby), N+1 (N active with one standby spare), and N+M (N active with M standby spares) configurations.
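The redundancy configurations above can be expressed as one general rule; the sketch below models only the N+M promotion step (1+1 and N+1 are the special cases N = M = 1 and M = 1). Blade names and data structures are illustrative assumptions, not platform APIs.

```python
def promote_standby(failed: str, active: set, standbys: list):
    """Minimal N+M failover sketch: on failure of an active payload
    blade, remove it from the active set and promote the first standby
    spare.  Returns the promoted blade, or None when no spare is left,
    a condition the platform would escalate as unrecoverable."""
    if failed in active:
        active.discard(failed)
        if standbys:
            spare = standbys.pop(0)
            active.add(spare)
            return spare
    return None
```

The point of the sketch is that the failover policy is the same regardless of N and M; only the sizes of the active set and spare pool change as the shelf scales.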
HA middleware and applications checkpoint across blades and clusters, while power and configuration management is controlled via shelf managers and Essential Services software.
The system controller hosts the HA middleware along with a persistent storage database that contains all configuration information, the unified management interface, HPI services, and Essential Services.
|Figure 5: ATCA HA System Architecture|
The system controller blade maintains the application model per the Application Management Framework (AMF) and runs in active-standby mode whereby all database information is replicated across the active and standby blades.
The EMS interface to the ATCA system can be an SNMP agent, Command Line Interface (CLI), Web/XML, or NETCONF; at a minimum, SNMP and CLI are present. The EMS/NMS contacts the SNMP agent of the system controller to manage the hardware and software components in the chassis.
The middleware redirects hardware management requests coming from the EMS to the HPI implementation, which has connectivity to the shelf manager. As a result, the EMS can manage all hardware components in the chassis.
Compute blades are where the SGW control plane application(s) run; they contain control plane protocols (eGTP-c), application and protocol MIBs, middleware, and a platform management interface (called Essential Services in FlexTCA). The compute blades are configured in a 1+1 redundancy model.
Packet processor blades are where the SGW user plane application(s) and DPI applications run. They contain the user plane protocol (eGTP-u), application and protocol MIBs, middleware, and an Essential Services interface. The packet processor blades are configured with N+M redundancy.
The hub switch and network uplinks are configured for 1+1 redundancy. The FlexTCA system in particular is supported by what is known as Layer 2 High Availability (L2HA) for checkpointing network configuration. L2HA provides hub switch status checkpointing and failover, bonding drivers, and dual star connectivity to avoid link loss on the base and fabric interfaces of the switch.
The shelf manager is an important chassis-related function which manages hardware elements including fans, power supplies, and blades. It manages the blades through an Intelligent Platform Management Interface (IPMI) which runs over the IPM Bus, and all ATCA blades have an IPM controller for performing activities as directed by the shelf manager. If the shelf manager cannot recover from a fault condition, the system controller will treat this as an unrecoverable fault and fail over to the standby.
Application Failover & Trillium Protocol Integration
As shown in Figure 6 below, HA middleware runs on the system controller blade and monitors applications running on active blades through the middleware agent running on each blade. Failures are detected by exchanging heartbeats between nodes, and upon failure of an active node the standby will take over and become the active node.
Checkpointing services in HA middleware are used to share data and parameters between active and standby components. HA middleware provides services to create checkpoints, manage the lifecycle of checkpoints, and establish mechanisms for active components to write the latest state and for standby components to read the latest state.
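The write/read contract just described can be sketched in a few lines. This is a toy model, not a real SAF checkpoint API: production checkpoint services add replicas, retention timers, and section iteration.

```python
class Checkpoint:
    """Toy checkpoint service showing only the contract described in
    the text: the active component writes its latest state, and the
    standby reads it back after failover."""

    def __init__(self):
        self._sections = {}

    def write(self, section: str, state: dict):
        # Active side: overwrite the section with the latest state.
        self._sections[section] = dict(state)

    def read(self, section: str) -> dict:
        # Standby side: recover the last state the active wrote,
        # or an empty dict if nothing was checkpointed.
        return dict(self._sections.get(section, {}))
```

In an SGW context, the active blade would checkpoint per-bearer state (for example, tunnel IDs and sequence numbers are plausible candidates) so the standby can resume user sessions without re-signaling.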
|Figure 6: Application Redundancy and Failover Architecture|
Protocol stacks such as Continuous Computing's Trillium protocol software product line consist of sophisticated protocol layers with operating system abstraction and an inter-process communication relay.
Since most stacks do not natively comply with recent Service Availability Forum (SAF) specifications, functions are written to provide support for AMF-AIS application programming interfaces (known as an AMF interface engine) and IMM-AIS application programming interfaces (known as a Stack Manager, or SM, interface engine).
In the case of Trillium stacks ported to SAF-compliant HA middleware (Figure 7 below), the AMF interface engine handles life cycle requests such as instantiation, suspension, and termination as well as the high availability states like active and standby.
Instead of propagating the request directly to the stack, the AMF interface engine also takes care of other housekeeping functions and informs the patented Trillium DFT/HA (Distributed Fault-Tolerant / High Availability) core component to take care of high availability requests.
Note too that the Trillium DFT/HA core consists of elements that manage the HA state of the Trillium layers and message delivery between these layers. Such functionality resides in the system controller blade because it controls Trillium stacks running on multiple payload blades and also follows the same lifecycle as the middleware in the system controller.
A Trillium Protocol Specific Function (PSF) is used to update protocol-specific state information to the standby unit, and a Trillium Load Distribution Function (LDF) takes care of distributing traffic across available resource sets.
|Figure 7: Trillium Stack Integration into HA Middleware|
Load Balancing & Deep Packet Inspection
As shown in Figure 8 below, user data traffic is diverted to the eGTP-u server running on the packet processor blade by the load balancer in the hub switch. The load balancer must extract the packets from GTP tunnels and consistently direct sessions to the same blade, processor, and hardware thread on the packet processor blade(s).
It is vitally important to maintain the associations between respective GTP flows and the serving blade(s). The GTP-u server de-tunnels the user packets, inspects the user traffic, and then classifies, authenticates, forwards, or blocks them as determined by policy control parameters.
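The session-affinity rule described above, every packet of a GTP tunnel steered to the same blade, amounts to a deterministic hash of the tunnel identity. The sketch below hashes the 32-bit GTP-U tunnel endpoint identifier (TEID); the choice of hash is illustrative only, since hub-switch load balancers typically hash in hardware.

```python
import hashlib

def blade_for_tunnel(teid: int, blades: list) -> str:
    """Map a GTP-U TEID to one packet processor blade so that all
    packets of the same tunnel always land on the same blade."""
    # Hash the 32-bit TEID and reduce modulo the number of blades.
    digest = hashlib.sha256(teid.to_bytes(4, "big")).digest()
    return blades[int.from_bytes(digest[:4], "big") % len(blades)]
```

The same idea extends down a level: a second hash over the TEID can pick the processor and hardware thread within the chosen blade, preserving per-flow ordering end to end.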
Extracted user packets are passed to a DPI engine for further processing and action, such as policy control, intrusion detection and prevention (IPS/IDS), lawful intercept, charging (service-based as well as capacity-based), bandwidth shaping, etc.
All such functions may be implemented within or in conjunction with the SGW itself. DPI servers can run on compute-centric resources or on dedicated packet processor blades such as Continuous Computing's FlexPacket ATCA-PP50s, which deliver up to 10G per blade.
Fast path software increases performance dramatically, especially for small packet sizes, and an experienced professional services team can extract optimized performance for nearly any application type.
|Figure 8: Sample ATCA Load Balancing Architecture|
In addition to the classic “5-tuple” lookup, DPI firewalls have application awareness: they understand protocols such as HTTP, SMTP, POP3, IMAP, and FTP, and also recognize the actual applications that rely on those protocols (e.g., BitTorrent, Skype, webmail services, etc.). DPI IPS/IDS systems address threats ranging from connection-oriented intrusions and Denial of Service (DoS) attacks to dynamic, content-based threats such as viruses, worms, trojans, spyware, and phishing.
DPI systems are powerful and allow operators to classify traffic by multiple parameters including subscriber, service, application, origin/destination, IP address, and more. Traffic classification can also be based on usage and/or subscriber session traffic patterns.
Based on the network traffic classification and policies, DPI platforms allow the service provider to control traffic flows, including blocking, rate shaping, and redirection of packets to different destinations. DPI platforms can also limit the data rate of users, block access, and account for and prioritize traffic.
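The classify-then-control loop described above reduces to a policy lookup keyed by the classified application. The table below is a hypothetical example; the application names, actions, and rate values are assumptions for illustration, not a real DPI engine's configuration.

```python
# Hypothetical policy table: map a DPI-classified application to an
# (action, parameter) pair.  All entries are illustrative assumptions.
POLICY = {
    "bittorrent": ("shape", 256),   # rate-limit P2P to 256 kbit/s
    "malware":    ("block", None),  # drop known threat traffic
    "voip":       ("forward", None),
}

def enforce(app: str):
    """Return the control action for a classified flow, defaulting
    to plain forwarding for unclassified applications."""
    return POLICY.get(app, ("forward", None))
```

Defaulting to forward rather than block reflects the common operator choice of failing open for unrecognized traffic; a stricter deployment could invert that default.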
For example, with lawful intercept it is necessary to identify a Voice over IP (VoIP) session in a high-speed packet stream, copy the VoIP packets, and then send them to a lawful intercept point for monitoring.
In VoIP networks this process is further complicated by the fact that SIP messages can follow a different path through the network than the associated RTP content.
With new communication media such as social networking and instant messaging, there is an increasing need for DPI to recognize specific applications and extract session information (e.g., calling party, called party, duration, etc.) to meet lawful intercept requirements.
Policy management functions supported by DPI systems include charging (whether per-access billing or flat rate), authentication of access (e.g., service access entitlement), QoS prioritization of latency-critical traffic (e.g., prioritizing VoIP over P2P or general web-browsing traffic), and congestion management based on parameters such as service level or other subscriber-specific flags, time of day, etc.
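The QoS prioritization just mentioned can be sketched as a simple ordering rule: latency-critical traffic is served before bulk traffic. The class names and priority values below are illustrative assumptions (lower value means served sooner).

```python
def prioritize(flows: list) -> list:
    """Order classified flows so latency-critical traffic (e.g. VoIP)
    is served before web browsing, which is served before P2P.
    Unknown applications sort last."""
    PRIORITY = {"voip": 0, "web": 1, "p2p": 2}  # illustrative values
    return sorted(flows, key=lambda app: PRIORITY.get(app, 3))
```

A real scheduler would of course work on queues and token buckets rather than sorting a list, but the policy input, a per-application priority, is the same.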
ATCA is well-suited to implementing the Serving Gateway node for LTE networks. Through an outsourced platform acquisition strategy, NEPs can attain the key SGW platform functionality required while minimizing time-to-market and development expense.
And, by incorporating the latest system technologies such as load balancing and DPI, the SGW represents a fresh opportunity to generate new revenue streams while managing subscriber traffic according to operator-driven priorities.
An experienced wireless and DPI solution provider such as Continuous Computing can help NEPs with such ATCA platform integration and, in the process, free up in-house resources to focus on new application development and unique market differentiation.
Sridharan Natarajan is Lead Engineer and Karl Wale is Director of PLM at Continuous Computing.