
Transporting video over wireless networks

The wireless home network offers lots of promise for equipment providers, but they must be able to guarantee QoS.

Over the last five years, wireless data networks and the applications that access them have made our lives more convenient and given us valuable flexibility. Wireless technologies have become so entrenched in everyday life that they have now permeated the home entertainment market. While consumers used to watch DVDs or video clips in front of a stationary TV at a single location in the house, they now want to share and access their growing pool of DVRs, mobile devices, laptops, and other entertainment resources anywhere in the home, at any time.

This presents today's developers with a challenge: providing seamless networking for multimedia and entertainment convergence. Consumers want their wireless network and applications to be just as reliable as if they were wired. Unlike general data, however, video can't tolerate bandwidth fluctuations, so the challenge is significantly more rigorous. There are many considerations to evaluate when pushing video content through a wireless network. Earlier wireless LAN technologies simply weren't up to the task, and the industry responded with the most recent IEEE 802.11n specification. Even this high-performance WLAN standard, however, isn't enough for transporting video on its own; it requires supplementary technologies to provide the kind of high-quality entertainment experience that consumers demand.

A number of critical factors contribute to satisfactory performance as perceived by the consumer. These include bandwidth, latency, breadth of coverage, and quality of service (QoS).

Bandwidth is particularly important and is optimized through the use of MIMO (multiple-input multiple-output) and channel-bonding techniques. These techniques also contribute to QoS, because higher throughput improves immunity to interference and makes it easier to handle degraded link conditions. In addition, any excess bandwidth can be traded for longer reach and better power efficiency–the more bandwidth, the better.

Not enough throughput
At the same time, though, raw bandwidth from higher PHY throughput isn't enough. What's needed is higher effective bandwidth at satisfactory levels for the given application, and this requires the additional step of substantially increasing media access controller (MAC) efficiency. This can be achieved using an aggregation mechanism that eliminates the overhead linked to each packet and replaces it with a common overhead. Aggregate exchange sequences are enabled with a protocol that acknowledges aggregated MAC protocol data units (A-MPDUs). As a result, there's a single block acknowledgement (Block ACK) instead of multiple ACK signals, and there's no need to initiate a new transfer for every MPDU. The result is a MAC efficiency of 70% as compared with typical 50% MAC efficiency ratings for IEEE 802.11a/b/g, as shown in Table 1.
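The efficiency gain from aggregation can be sketched with a back-of-the-envelope airtime model. The timing values below are illustrative assumptions, not figures from the 802.11n specification; the point is that a shared preamble and a single Block ACK amortize the overhead across the whole burst.

```python
def mac_efficiency(n_mpdus, payload_us, per_frame_overhead_us, shared_overhead_us):
    """Return (unaggregated, aggregated) fractions of airtime carrying payload.

    Without aggregation, every MPDU pays the full per-frame overhead
    (preamble, interframe spaces, its own ACK). With A-MPDU aggregation,
    the MPDUs share one preamble and a single Block ACK.
    """
    payload = n_mpdus * payload_us
    unaggregated = payload / (payload + n_mpdus * per_frame_overhead_us)
    aggregated = payload / (payload + shared_overhead_us)
    return unaggregated, aggregated

# Illustrative numbers: 16 MPDUs of 40 us payload airtime each,
# 40 us of overhead per frame when sent individually, and 275 us
# of shared overhead for the whole aggregate exchange sequence.
solo, aggr = mac_efficiency(16, 40, 40, 275)   # ~0.50 vs. ~0.70
```

With these assumed timings the model reproduces the article's 50%-versus-70% comparison: per-frame overhead scales with the number of MPDUs, while the aggregated overhead is paid once.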

Another key consideration is how far the network can reach; the gold standard is whole-house coverage. Although users can tolerate “dead spots” and limited coverage on home data networks, neither is acceptable for wireless entertainment. Today's centralized multimedia storage devices are expected to serve as the source for all multimedia content, no matter where in the home it's being viewed or heard. This means that, unlike data networks, the bit rate can't drop with increased distance from the access point.

In addition, the use of forward error correction (FEC) schemes extends the reach that's possible at any given data rate. For instance, 3 dB of coding gain derived from using the low-density parity check (LDPC) code translates into about a 20% improvement in range. Alternatively, the additional gain can be used to increase throughput (using a higher constellation) or to increase robustness and immunity to interference, as shown in Figure 1. Whole-home coverage at optimal video performance is so critical to wireless entertainment that a key new mandatory test for all wireless entertainment networks should be dropped-packet performance across the full reach of a typical home environment.
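The 3-dB-to-20% relationship follows from a log-distance path-loss model. A quick sketch, where the path-loss exponent of 4 is an assumed typical indoor value rather than a measurement:

```python
def range_gain(extra_margin_db, path_loss_exponent):
    """Range multiplier bought by extra link budget.

    Log-distance path loss grows as 10 * n * log10(d), so G dB of added
    margin (e.g. LDPC coding gain) stretches reach by 10**(G / (10 * n)).
    """
    return 10 ** (extra_margin_db / (10 * path_loss_exponent))

factor = range_gain(3.0, 4.0)   # ~1.19, i.e. roughly 20% more reach
```

A lower exponent (more open space) yields a slightly larger multiplier, which is why the improvement is quoted as an approximate figure.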

Consider QoS
The final consideration for an optimal wireless entertainment experience is QoS. Several enhanced QoS mechanisms must be used on top of the basic QoS foundation to address some key issues. There are multiple QoS strategies to consider. The first step is to operate in the low-interference 5-GHz band, with its high channel availability and reduced exposure to interference from other types of equipment operating in the same frequency range. Next, a number of IEEE QoS standards must be employed. These standards mitigate the problems associated with enabling several applications to simultaneously access the same bandwidth without hindering applications that are intolerant of time delays and bandwidth fluctuations.

The existing 802.11n protocols use the distributed coordination function (DCF) access method to address some of these issues, but this isn't sufficient. The DCF protocol implements a “listen-before-talk” scheme based on Carrier Sense Multiple Access (CSMA). Using this scheme, a station first listens to see if the wireless medium is idle. If it's not, the station starts a timer with a random back-off interval that's based on a predetermined range defined by the network parameters. Each station determines individually when to access the medium. Each device has equal opportunity to access the wireless medium, which works well in traditional data applications. But in video, gaming, and other bandwidth-sensitive applications, this “fairness-access” mechanism risks problems with latency and jitter.
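The fairness problem can be seen in a toy model of one contention round. This is a deliberate simplification of DCF, assuming a fixed contention window and ignoring retransmissions and window doubling:

```python
import random

def dcf_contention_round(n_stations, cw=15):
    """One simplified DCF contention round.

    Every station draws a random backoff slot from [0, cw]; the lowest
    counter transmits first, and a tie models a collision. Because all
    stations draw from the same range, a video sender is no more likely
    to win the medium than a bulk-data sender -- the root of the latency
    and jitter problem for bandwidth-sensitive traffic.
    """
    slots = [random.randint(0, cw) for _ in range(n_stations)]
    winner = min(range(n_stations), key=lambda i: slots[i])
    collided = slots.count(slots[winner]) > 1
    return winner, collided
```

Run over many rounds, each station wins the medium about equally often, regardless of what kind of traffic it carries.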

Employ the PCF
A better–but still not adequate–QoS approach for wireless entertainment is the point coordination function (PCF), which provides a mechanism for the prioritization of access to the wireless medium. Access is coordinated by one central Point Coordinator (PC) entity, usually the access point (AP). Access to the wireless medium using PCF is given higher priority than medium access based on DCF. Additionally, PCF defines a Contention-Free Period (CFP) and a Contention Period (CP) that alternate periodically over time. The PCF scheme is used for accessing the medium during the CFP, and the DCF mechanism is used during the less-critical CP. During the CFP, there's no contention among stations because stations are polled by the central point coordinator for transmission, and they don't try to access the medium independently. Although this approach better coordinates access, it's a complex implementation, and many technical issues remain unresolved. PCF never found its way into real products, which prompted further development of the QoS standards.

As a result of the shortcomings of the DCF and PCF methods, the industry has developed the IEEE 802.11e standard. This standard introduces the hybrid coordination function (HCF) for QoS support. The HCF defines two medium-access mechanisms. The first is contention-based medium access, also known as Enhanced Distributed Channel Access (EDCA). The second is controlled medium access (including polling), also known as HCF Controlled Channel Access (HCCA). Like PCF, 802.11e supports the option of two phases of operation (in other words, CP and CFP) for EDCA and HCCA. EDCA is used in the CP only, while HCCA is used in both phases.
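EDCA biases contention per access category through a shorter arbitration wait (AIFS) and a smaller contention window for higher-priority traffic. The table below uses the commonly cited default parameter set (relative to aCWmin = 15 and aCWmax = 1023); treat the exact values as illustrative rather than a restatement of the standard:

```python
# (AIFSN, CWmin, CWmax) per access category -- commonly cited defaults
EDCA_DEFAULTS = {
    "AC_VO (voice)":       (2, 3, 7),
    "AC_VI (video)":       (2, 7, 15),
    "AC_BE (best effort)": (3, 15, 1023),
    "AC_BK (background)":  (7, 15, 1023),
}

def mean_access_delay_slots(aifsn, cwmin):
    """Rough mean contention delay in slot times: AIFS plus average backoff."""
    return aifsn + cwmin / 2

voice = mean_access_delay_slots(*EDCA_DEFAULTS["AC_VO (voice)"][:2])              # 3.5 slots
best_effort = mean_access_delay_slots(*EDCA_DEFAULTS["AC_BE (best effort)"][:2])  # 10.5 slots
```

The smaller the window and AIFSN, the sooner that category's backoff tends to expire, so voice and video statistically win the medium ahead of best-effort data.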

EDCA is fairly simple to implement, but it can't guarantee tolerable latency, jitter, or bandwidth levels, and it has no means to handle several applications with the same priority level. HCCA offers significant improvements over EDCA but it, too, is inadequate on its own. HCCA relies on a centralized control in the access point (functioning as the HC, or Hybrid Coordinator) that can guarantee the time and duration of the transmission for each of the connected stations. Every station requests access permission from the central AP, accompanied by a traffic specification that details the required QoS. The access point then determines if it can support the requested QoS specs and admits or denies the station. Because this process is managed from a central location and predetermined upon registration, access is guaranteed to be contention-free, and bandwidth, jitter, and latency are all controlled.

One problem with HCCA is that it can't coexist with a neighboring legacy network. Instead, the best approach is a combined solution based on EDCA with the addition of admission control. EDCA already ensures that higher-priority packets gain access to the medium sooner, so low-priority services don't hurt the performance of high-priority services. Adding HCCA-style admission control ensures that a new high-priority service is admitted only when system resources are sufficient, so it never hurts the performance of an existing service at the same priority. For example, admission control will evaluate the system's resources for simultaneous video and data services and only allow a second video stream when there are sufficient resources.
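The admission-control idea can be sketched in a few lines. Everything here (stream names, the capacity figure, a single rate number per stream) is hypothetical; a real implementation would evaluate full 802.11e traffic specifications rather than one bandwidth value:

```python
class AdmissionController:
    """Toy admission control layered on EDCA priorities.

    A new stream is admitted only if its declared rate fits within the
    remaining medium capacity, so it cannot degrade streams that were
    admitted earlier at the same priority.
    """
    def __init__(self, capacity_mbps):
        self.capacity_mbps = capacity_mbps
        self.admitted = []          # (name, rate_mbps) of admitted streams

    def request(self, name, rate_mbps):
        used = sum(rate for _, rate in self.admitted)
        if used + rate_mbps <= self.capacity_mbps:
            self.admitted.append((name, rate_mbps))
            return True
        return False                # denied: would starve existing streams

ac = AdmissionController(capacity_mbps=90)
ac.request("video-1", 40)           # admitted
ac.request("video-2", 40)           # admitted
ac.request("video-3", 40)           # denied: only 10 Mb/s remain
```

The third video stream is refused up front, rather than being admitted and then degrading the two streams already running.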

Fast link adaptation

Beyond 802.11e QoS support, system designers can optimize QoS by using fast link adaptation, which, like legacy rate adaptation, is designed to match the transmitted data (PHY) rate to the channel's momentary conditions. Legacy rate adaptation used a proprietary open-loop algorithm, in which the transmitting station optimized its rate according to MAC counters and sophisticated PHY metrics. In contrast, fast link adaptation is a closed-loop mechanism–the transmitter deduces the optimal rate from indications sent by the receiver. The IEEE 802.11n draft standard defines the mechanism for exchanging this information between the two stations and leaves its implementation vendor-dependent. By combining fast link adaptation with rate adaptation, it's possible to achieve a dynamic QoS mechanism that adapts the bit rate based on the actual packet-error rate and link conditions. The fast indication can also be used by upper layers to take action and ensure that the application copes with the available bandwidth. This feature is particularly important in an ever-changing home environment.
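Since the standard leaves the algorithm vendor-dependent, a minimal closed-loop policy might look like the sketch below. The rate table is the 802.11n single-stream rate set (20-MHz channel, long guard interval); the thresholds and the one-step up/down rule are assumptions for illustration:

```python
def adapt_rate(rates, current, reported_per, down_thresh=0.10, up_thresh=0.01):
    """Pick the next PHY rate index from the receiver-reported packet-error rate.

    Step down one rate when the receiver reports heavy loss; probe one
    rate up when the link is clean; otherwise hold the current rate.
    """
    if reported_per > down_thresh and current > 0:
        return current - 1
    if reported_per < up_thresh and current < len(rates) - 1:
        return current + 1
    return current

# 802.11n single-stream PHY rates in Mb/s (20-MHz channel, 800-ns guard interval)
rates = [6.5, 13, 19.5, 26, 39, 52, 58.5, 65]
nxt = adapt_rate(rates, current=5, reported_per=0.25)   # heavy loss: step down to index 4 (39 Mb/s)
```

Because the receiver's feedback drives the decision directly, the loop reacts within a frame exchange rather than waiting for transmit-side statistics to accumulate.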

There's one last QoS approach to consider, in the area of client-to-client communication. This is managed by the AP using Direct Link Setup (DLS), which saves airtime and increases network efficiency. In home environments, every device should be able to communicate with any other device within the home. DLS lets stations exchange traffic directly instead of relaying every frame through the AP; eliminating that extra “hop” frees airtime for more services and improves the performance of delay-sensitive applications. DLS reduces latency because it supports any-to-any device communication while providing a choice between different connection paths when, for instance, the user changes the channel, rewinds, fast-forwards, or issues gaming commands.

Gil Epshtei is the director of product marketing at Metalink. He holds a B.Sc. in electronic engineering from the Israel Institute of Technology (Technion) and has 15 years of experience in the telecom industry. Gil can be reached at .
