Designing a more flexible core for the multi-gigabit campus network
A confluence of factors is placing the campus core, the foundation of network architecture, under an increasing amount of stress. These include the introduction of new Wi-Fi 6 access points (APs) with high throughput rates, the proliferation of IoT devices, rapid migration to the Cloud and an evolving data center that is shifting away from chassis-based switches. Let’s take a closer look at these trends below.
Wi-Fi 6 (802.11ax)
First introduced in 2009, Wi-Fi 4 (802.11n) APs offered throughput rates of up to 600 megabits per second. As such, the one-gigabit Ethernet ports standard on most enterprise switches were sufficient to prevent a bottleneck on the switch side. Wi-Fi 5 (802.11ac), ratified in 2013, raised the bar: its Wave 2 APs achieved throughput rates in excess of one gigabit per second. These speeds created a potential performance bottleneck between the AP and one-gigabit switch ports. In turn, this prompted interest in multi-gigabit switching technology and drove the adoption of the 802.3bz standard for 2.5 and 5 Gigabit Ethernet (GbE) ports.
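As an illustration of the port-speed matching logic described above (not taken from any vendor tool), a short sketch that picks the slowest standard Ethernet tier that still avoids a bottleneck for a given AP's peak throughput, using the 1 GbE tier plus the 2.5/5 GbE rates introduced by 802.3bz and 10 GbE:

```python
# Illustrative sketch: choose the slowest standard Ethernet port speed
# that still avoids a bottleneck for a given AP's peak throughput.
# Tiers (in Gbps): classic 1 GbE, the 802.3bz 2.5/5 GbE rates, and 10 GbE.
PORT_SPEEDS_GBPS = [1, 2.5, 5, 10]

def min_port_speed(ap_peak_gbps: float) -> float:
    """Return the smallest port tier >= the AP's peak throughput."""
    for speed in PORT_SPEEDS_GBPS:
        if speed >= ap_peak_gbps:
            return speed
    raise ValueError("AP exceeds all listed port tiers")

# Wi-Fi 4 tops out at 0.6 Gbps, so a 1 GbE port suffices:
assert min_port_speed(0.6) == 1
# Wi-Fi 5 Wave 2 exceeds 1 Gbps, pushing APs onto 2.5 GbE:
assert min_port_speed(1.7) == 2.5
```

The same check explains the Wi-Fi 6 pressure on switch ports: as AP peak rates climb past 2.5 Gbps, the required tier moves up again.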
Next-generation Wi-Fi 6 (802.11ax) APs have already begun shipping, with IDC forecasting that Wi-Fi 6 deployments will ramp significantly in 2019 and that it will become the dominant enterprise Wi-Fi standard by 2021. The new standard offers up to a four-fold capacity increase over its Wi-Fi 5 (802.11ac) predecessor, making the need for multi-gigabit ports on Ethernet switches even more pressing. Many organizations are proactively eliminating potential bottlenecks by purchasing multi-gigabit switches – even before deploying Wi-Fi 6 APs.
Perhaps not surprisingly, increased port speeds are driving the need for faster networks at the aggregation and core. Campus network customers are recognizing the need to upgrade to 40 GbE and 100 GbE for the backbone infrastructure required to handle the increased throughput at the edge of the network.
IoT & LTE
In addition to a new generation of faster wireless APs, the proliferation of IoT devices – and the data they generate – is placing unprecedented demands on campus networks, driving up congestion and latency. These devices, combined with applications such as 4K video streaming and surveillance feeds used to train machine-learning models, are projected to drive internet traffic to 278,000 petabytes per month by 2021. While many IoT devices connect wirelessly, some are designed to plug directly into Ethernet, further increasing the data demands on a campus network.
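To put that projection in perspective, a back-of-the-envelope conversion of the monthly figure into a sustained rate (an illustrative calculation, assuming a 30-day month and decimal SI units):

```python
# Convert the projected 278,000 PB/month of internet traffic into a
# sustained rate. Assumes a 30-day month and decimal (SI) units.
PETABYTES_PER_MONTH = 278_000
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000 s

bytes_per_month = PETABYTES_PER_MONTH * 10**15
bits_per_second = bytes_per_month * 8 / SECONDS_PER_MONTH
terabits_per_second = bits_per_second / 10**12
print(f"~{terabits_per_second:.0f} Tbps sustained")  # ~858 Tbps
```

Roughly 858 terabits per second of sustained global traffic – a scale that inevitably trickles down into higher per-campus demand.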
It should also be noted that campus networks will likely come under further strain as CBRS (Citizens Broadband Radio Service, which enables private LTE and, eventually, 5G) arrives in 2019 and begins routing backhaul traffic through local switches. Put simply, CBRS lets organizations leverage 3.5 GHz spectrum to establish their own LTE networks. This makes it ideal for in-building and public-space applications where cellular signals are weak or spectrum is limited, but data demand is not.
The Cloud & The Evolving Data Center
Campus networks are also being affected by the continued migration of mission-critical applications to the Cloud. Although this shift has significantly downsized large on-premises data centers, local data centers continue to operate, albeit at reduced capacity. Moreover, the effective use of Cloud applications requires always-on, reliable, high-speed and low-latency access to offsite servers.
While the growth of the Cloud means on-premises data centers are becoming leaner, industry trends suggest that more organizations will have comparatively smaller IT teams to manage servers. This will require simpler, more flexible networking options for connecting servers and storage systems via 10 GbE and 25 GbE. Fortunately, the growth of hyperscale data centers and their mass deployment of 25 GbE and 100 GbE are driving down the cost of the associated transceivers, helping to lower the cost of 100 GbE for campus networks.
The Chassis Is Out, Stackable Switches Are In
As data centers become leaner, large chassis-based switches are increasingly too expensive to purchase and maintain, and overly complex to configure and manage. Traditional enterprise networks were architected around chassis-based switches at the core and aggregation layers (as well as in the data center) to deliver reliable, high-speed routing. However, this paradigm forces enterprises to pay massive amounts of money – up front – for capacity that is often never fully utilized, and it results in forced forklift upgrades when maximum capacity is reached.
Fortunately, recent advances in commercially available network processors make it possible to package these capabilities into a more flexible, stackable fixed form factor. Such switches enable enterprises to adopt a pay-as-you-grow model that simplifies the rollout of next-generation switches and supports a more flexible network topology. Moreover, certain switches on the market today provide linear scaling for up to 12 switches per stack. Those that stack over standard Ethernet cables and optics let customers stack across long distances – between multiple wiring closets, floors and even buildings – simplifying management.
Stackable switches can also be designed to ensure high availability with in-service software upgrades across a stack, enabling software upgrades – one switch at a time – without any downtime. Put simply, stackable switches deliver the capabilities of a chassis in a more flexible, scalable design that requires less upfront investment, along with lower power and cooling requirements.
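The one-switch-at-a-time upgrade flow can be sketched as a simple rolling loop. This is a hypothetical illustration, not a real switch API – the `upgrade` and `healthy` callables stand in for whatever a given platform's management interface provides:

```python
# Hypothetical sketch of a rolling, in-service software upgrade across
# a switch stack: members are upgraded one at a time so the rest of the
# stack keeps forwarding traffic. Function names are illustrative only.
from typing import Callable, List

def rolling_upgrade(stack: List[str],
                    upgrade: Callable[[str], None],
                    healthy: Callable[[str], bool]) -> List[str]:
    """Upgrade each stack member in turn; stop if a member fails its
    post-upgrade health check so the remaining members stay on the
    known-good image."""
    upgraded = []
    for member in stack:
        upgrade(member)          # reload just this member
        if not healthy(member):  # verify before touching the next one
            break
        upgraded.append(member)
    return upgraded
```

The health check between members is what makes the upgrade "in-service": at any point, all but one switch are forwarding traffic, and a bad image stops propagating after its first failure.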
The campus core is under an increasing amount of stress as networks evolve and adapt to new user demands and device requirements. These include the introduction of Wi-Fi 6 access points (APs) offering up to a four-fold capacity increase over Wi-Fi 5 (802.11ac), as well as the proliferation of IoT devices and the petabytes of data they generate. In addition, campus networks must provide always-on, reliable, high-speed and low-latency access to offsite servers as mission-critical applications continue their migration to the Cloud. And as data centers become leaner, most large chassis-based switches are now too expensive to purchase and maintain, and overly complex to configure and manage. These factors demand a high-performance campus core that is flexible, scalable and easily managed.
Siva Valliappan is the Vice President of Wired Products at Ruckus. Prior to Brocade/Ruckus, Siva was with Cisco as Director of Product Management responsible for Software, Cloud Management and Network Services of Cisco’s family of enterprise Ethernet fixed switches. He was also Cisco's first product manager of IOS Security and key architect behind Cisco's IOS Security solutions. Siva holds a bachelor’s degree in computer engineering from Santa Clara University and is a Cisco Certified Internetwork Expert (#2929) in Routing and Switching.