The practical realities of 802.11n in home networking applications
Many articles have been written about the inner workings of 802.11n; however, few offer practical advice on how to get the most from an 802.11n network. The technology is very sophisticated and uses many tricks to deliver blazingly fast performance.
What most of us don't realize, though, is that we can unwittingly cripple the performance of our own network. At the same time, it is also possible to squeeze out an even better experience than the base standard provides. In this short article I offer a guide to some simple do's and don'ts.
802.11n isn't a finished IEEE standard yet, so the correct thing to do is to add the suffix "Draft." Most products shipping today are certified to Draft 2.0 using a Wi-Fi interoperability specification. The final IEEE spec should be completed in early 2009.
Given the growing installed base of products already commercially available, most people expect that existing products will need just a firmware upgrade to be brought in line with the final specification.
802.11g versus 802.11n
First let's look at the major ways in which the two standards differ (Figure 1 below). There are two big benefits of 802.11n over 802.11g:
* Improved range
* Higher throughput
So even though 802.11n is still shy of final, it is fully backward compatible with 802.11g. It makes a lot of sense to use 802.11n today, especially given the nominal difference in product cost.
Figure 1. A comparison between the two technologies in a home environment. (Data source: www.octoscope.com)
There are some key techniques used to achieve this performance:
Multiple Input, Multiple Output (MIMO). This is the largest fundamental innovation used in 802.11n. Using multiple radio transmitters and multiple radio receivers, it is possible to take advantage of both the direct line-of-sight signal and reflections of that same signal.
In 802.11g, reflections created interference and degraded the signal; simple antenna diversity was used to pick which antenna had the cleanest signal. In a MIMO system the reflections can be received by multiple antennas simultaneously, improving the signal-to-noise ratio of the received signal and, in doing so, boosting the effective range.
A technique called spatial multiplexing is used to increase the bandwidth of the signal by transmitting different streams of data in the same band at the same time, but from different antennas. 802.11n allows for up to four spatial streams, and most products shipping today use two.
20MHz & 40MHz channels. Standard 802.11g used 20MHz channels, and some proprietary implementations bonded two channels together to double the effective bandwidth. In 802.11n both 20MHz and 40MHz channels are standard.
The combination of two spatial streams, a 40MHz channel and some other overhead optimizations within the MAC layer increase the maximum raw data rate on the wireless link by nearly six-fold, from 54Mbps to an impressive 300Mbps.
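The near six-fold jump can be sketched as back-of-the-envelope arithmetic. The intermediate per-stream rates below (65, 135, and 150Mbps) are illustrative approximations of the 802.11n rate steps, not exact PHY parameters:

```python
# Rough scaling from 54 Mbps (802.11g) to 300 Mbps (802.11n with two
# streams, a 40 MHz channel, and a short guard interval). Each factor
# is an approximation of one of the improvements described above.

g_rate = 54.0                  # Mbps, 802.11g peak in a 20 MHz channel

n_rate = g_rate
n_rate *= 65.0 / 54.0          # coding/overhead tweaks: ~65 Mbps per stream
n_rate *= 135.0 / 65.0         # 40 MHz channel: a bit better than double
n_rate *= 150.0 / 135.0        # short guard interval (400 ns vs 800 ns)
n_rate *= 2                    # two spatial streams

print(f"802.11n peak: {n_rate:.0f} Mbps ({n_rate / g_rate:.1f}x over 802.11g)")
```

Multiplying the factors out recovers the 300Mbps headline rate, roughly 5.6 times the 802.11g peak.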
As with 802.11g, however, it is not possible to transfer data at the nominal maximum data rate. Overhead in the MAC layer and the characteristics of any given installation result in the actual throughput being much less than 300Mbps.
To maintain backwards compatibility with 802.11 a/b/g networks, the MAC layer must surround 802.11n transmissions with information that can be understood by legacy clients to avoid collisions and interference.
The order-of-magnitude difference between 802.11b/g and 802.11n data rates would make the collision avoidance mechanism a huge overhead if it had to be performed prior to each and every 802.11n packet transmission.
In Frame Aggregation, per-packet overhead is reduced by aggregating multiple packets into something called an Aggregate MAC Protocol Data Unit (A-MPDU). Each A-MPDU is treated like a single packet that can be as large as 64Kbytes.
The use of A-MPDUs creates some interesting system design challenges since it trades increased throughput for higher latency. The radio must hang on to packets at the transmitter until it creates an A-MPDU of the desired size or until a timeout value is reached.
Since optimizing this area for particular traffic types is unlikely to be left to the user I won't go into it here, however products that are designed for transporting delay-sensitive traffic need to pay close attention in this area.
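The size-or-timeout trade-off described above can be sketched as a small aggregator. The class, its names, and the thresholds here are hypothetical illustrations of the mechanism, not code from any 802.11n implementation:

```python
# Sketch of the frame-aggregation trade-off: packets are buffered until
# the aggregate reaches a target size, or a timeout flushes whatever is
# queued so that delay-sensitive traffic is not held indefinitely.

class AMPDUAggregator:
    def __init__(self, max_bytes=65535, timeout_ms=2.0):
        self.max_bytes = max_bytes      # A-MPDU size limit (~64 Kbytes)
        self.timeout_ms = timeout_ms    # latency bound for queued packets
        self.buffer = []
        self.buffered_bytes = 0
        self.oldest_ms = None           # arrival time of oldest queued packet

    def enqueue(self, packet: bytes, now_ms: float):
        """Buffer a packet; return a full aggregate if the size is reached."""
        if self.oldest_ms is None:
            self.oldest_ms = now_ms
        self.buffer.append(packet)
        self.buffered_bytes += len(packet)
        if self.buffered_bytes >= self.max_bytes:
            return self.flush()
        return None

    def poll(self, now_ms: float):
        """Flush on timeout: this is where latency is traded for throughput."""
        if self.oldest_ms is not None and now_ms - self.oldest_ms >= self.timeout_ms:
            return self.flush()
        return None

    def flush(self):
        ampdu, self.buffer = self.buffer, []
        self.buffered_bytes = 0
        self.oldest_ms = None
        return ampdu
```

Tuning `max_bytes` up favors bulk throughput; tuning `timeout_ms` down favors voice and gaming, which is exactly the balancing act delay-sensitive products must get right.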
2.4GHz versus 5GHz
There is a big push, especially from Microsoft, to use the 5GHz and 2.4GHz bands - sometimes referred to as the 'a' and 'g' bands - separately for video and data, respectively. The motivation for this approach is that there are fewer channels available in the 'g' band, while at the same time, the majority of Wi-Fi is deployed in that band, so you are more likely to run into interference.
802.11n allows for operation in both bands. What you will find is that the least expensive product offerings use the 2.4GHz band only; some products will operate in either band; and others will be equipped with two radios and can run in both bands concurrently.
Practically speaking, because the 'g' band is only about 80MHz wide - room for just three non-overlapping 20MHz channels - it is usually fairly difficult for an 802.11n AP to actually find and use a 40MHz channel. In the 'a' band it is usually easier to operate 40MHz channels.
For most applications I have found the 'g' band to be sufficient. I have seen a significant improvement in range, and while I don't see the theoretical peak performance, a D-Link DIR-655 is more than adequate when streaming video between PCs or to an Xbox 360 or Apple TV.
If you really want to optimize for the maximum possible performance of the network to support multiple simultaneous video streams, you should consider 5GHz. If you already have an 802.11g router you could repurpose that as an access point and run 802.11n in 5GHz or you could opt for a dual-band router.
My approach to getting the most out of 802.11n has not been to simply upgrade everything in the network to 5GHz, but rather to make the most of the bandwidth I can install cost-effectively, and by avoiding a range of things that can cripple performance.
Legacy clients in an 11n network
This is one area that is really misunderstood and perhaps the easiest way to inadvertently bring down the total available bandwidth considerably. When you connect an 802.11g or even an 802.11b client to an 802.11n network it can steal many times its own bandwidth.
Consider an 802.11n network which can achieve at least 80Mbps of throughput to any 802.11n client in that home. Connect an 802.11g client and stream a movie at 10Mbps. How much throughput is left on the network? Most people would say 70Mbps. Nope.
The actual answer is more like 40Mbps. Since the peak performance of the 802.11g client is 20Mbps, it transmits at one quarter the speed of 802.11n. So 10Mbps of throughput to an 11g client is essentially equivalent to 40Mbps to an 11n client. An 802.11b client is even worse. An 11b client carrying just 2Mbps reduces the remaining bandwidth by half!
When wired Ethernet is better
Wireless is a shared medium, so if you're moving data from one wireless client to another, you'll use twice as much wireless bandwidth: every frame crosses the air once to the access point and once again to the receiver. This will sound like heresy coming from someone in the wireless business, but when convenient to do so, use an Ethernet cable to connect to the router. Use wireless for mobility and for those hard-to-reach places.
Sorting out traffic
Once the wireless infrastructure is optimized for an installation, the next thing to consider is how to optimize the use of the available bandwidth. This is where quality of service comes in. In Wi-Fi, the core certification to look for is WMM (Wi-Fi Multimedia).
This requires that certified devices be able to manage four different traffic types: Voice, Video, Best Effort, and Background. The biggest drawback with WMM is that it takes QoS techniques developed for the enterprise and uses them in a home environment.
To work properly, traffic has to be accurately tagged as the appropriate traffic class and must appear in the right proportions. In an enterprise, an IT department ensures this by designing and configuring the network and the applications that run on it. But at home, every network differs dramatically in its equipment, applications, and traffic patterns. Oh, and there's no IT staff.
So in a home environment many things can go wrong with the WMM scheme. The traffic may simply not get a tag in the first place. Or it may be tagged by an application, but then lose that tag due to a poorly implemented network driver, or by passing through some older network gear.
Add to that the fact that for traffic arriving into the home, the service provider treats it all simply as best effort, and you'll begin to see why voice, video and gaming have such a hard time in home networks.
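The tagging scheme itself is easy to picture. Here is a hypothetical sketch of how DSCP values in packet headers might map onto WMM's four access categories; the specific DSCP choices follow common practice (e.g. EF for voice) but are assumptions, since home gear varies:

```python
# Illustrative DSCP-to-WMM mapping. Untagged or unrecognized traffic
# falls back to Best Effort, which is exactly the failure mode the
# article describes for home networks.

WMM_ACCESS_CATEGORY = {
    46: "Voice",        # EF (Expedited Forwarding): VoIP
    34: "Video",        # AF41: streaming video
    0:  "Best Effort",  # default: web browsing, email
    8:  "Background",   # CS1: bulk transfers, backups
}

def classify(dscp: int) -> str:
    """Map a packet's DSCP value to a WMM access category."""
    return WMM_ACCESS_CATEGORY.get(dscp, "Best Effort")

print(classify(46))   # Voice
print(classify(99))   # Best Effort: an unknown or stripped tag loses priority
```

Note how a tag lost in transit (or never applied) silently demotes a voice call to the same queue as a web download.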
Fortunately, some products on the market do have the ability to do intelligent analysis of the traffic itself and then treat it appropriately even when it doesn't have a tag. I strongly recommend equipment that can do this automatically, since life is too short to spend tinkering to make it work reliably for each and every one of your use cases.
At this point I think it is worth mentioning a couple of certifications: IEEE 802.11n Draft 2.0 and Wi-Fi Protected Setup. The former ensures that you will see good interoperability among all of your network devices, and the latter provides push-button configuration of the wireless network. As more and more devices ship with Wi-Fi, this capability really simplifies adding gadgets to the network.
When Microsoft brought out Vista, it spent a lot of effort on the networking infrastructure and on certifications for network devices. It is in Microsoft's interest that media work well on the network, so the Works with Vista and Certified for Windows Vista logos have some real technology behind them.
The logo programs require that everything handle QoS tags properly, and they even allow applications to draw a map of your network and calculate the available bandwidth between two points. Network vendors had to do a lot of work to improve their products and meet these logo requirements, so it isn't just a marketing exercise.
Keith Morris, vice president of marketing at Ubicom, has 20 years of experience in communications and networking and has held key roles at Plessey Crypto, Fujitsu Microelectronics, and MMC Networks.