
For months now, we've been reading predictions that video traffic is about to flood corporate networks. Meanwhile, wireless LANs (WLANs) are quickly becoming employees' default access network. Video traffic consumes significant bandwidth and is sensitive to delay, packet loss and jitter. These metrics are particularly challenging to control in Wi-Fi's interference-prone and shared-access RF environment.
As WLANs and video applications become de rigueur in the enterprise, then, how can network administrators ensure high-quality, reliable performance of multimedia applications? Let's explore this question with Manju Mahishi, Director, Wireless Products Strategy at Motorola Solutions.
What capabilities has the IEEE built into 802.11 standards to help video operate well in RF environments?
First and foremost, the advent of 802.11n has significantly improved the handling of video over WLAN. 802.11n adds enhancements at both the PHY and MAC layers that raise throughput and the overall reliability of wireless transmissions, making it well suited to video applications.
In addition, 802.11e defines the QoS enhancements for multimedia applications. One of the QoS schemes introduced by 802.11e, Enhanced Distributed Channel Access (EDCA), which the Wi-Fi Alliance certifies as WMM (Wi-Fi Multimedia), defines four "access categories" of traffic to which prioritization levels can be assigned: Background, Best Effort, Video and Voice. This enables latency-sensitive voice and video traffic to be prioritized over other traffic on the network. EDCA/WMM essentially allows the minimum and maximum backoff (contention-window) values to be tuned per access category, giving packets marked with higher priority a statistical advantage in gaining the medium.
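To make that concrete, here is a rough sketch of how the per-category contention parameters bias channel access. The AIFSN/CWmin/CWmax values are the default 802.11e EDCA parameter set; the contention model is deliberately simplified (one backoff draw per round, ties resolved in favor of the higher-priority category, mirroring how internal collisions are handled):

```python
import random

# Default EDCA parameter set for an 802.11a/g/n station:
# (AIFSN, CWmin, CWmax) per access category, from 802.11e.
EDCA = {
    "AC_VO": (2, 3, 7),
    "AC_VI": (2, 7, 15),
    "AC_BE": (3, 15, 1023),
    "AC_BK": (7, 15, 1023),
}

def contend(rng):
    """One contention round: each AC waits AIFSN slots plus a random
    backoff drawn from [0, CWmin]; the smallest total wins the medium.
    Ties go to the higher-priority category (listed first above)."""
    draws = {ac: aifsn + rng.randint(0, cwmin)
             for ac, (aifsn, cwmin, _cwmax) in EDCA.items()}
    return min(draws, key=draws.get)

rng = random.Random(1)
wins = {ac: 0 for ac in EDCA}
for _ in range(10_000):
    wins[contend(rng)] += 1
print(wins)
```

Run over many rounds, voice wins the large majority of contention opportunities, video takes most of the rest, and background traffic rarely, if ever, preempts them.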
Mapping wired side Layer 2 (802.1p) or Layer 3 (Differentiated Services Code Point, or DSCP) traffic priorities to the video access category enables an end-to-end prioritization.
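As a sketch of that mapping: the table below is the standard 802.11e user-priority-to-access-category assignment, with the 802.1p user priority derived, as is common practice, from the top three bits of the DSCP. The function names are illustrative:

```python
# Common DSCP -> 802.1p user priority -> WMM access category mapping.

def dscp_to_up(dscp: int) -> int:
    """Derive an 802.1p user priority from the top 3 bits of the DSCP."""
    return (dscp >> 3) & 0x7

# Standard 802.11e/WMM user-priority-to-access-category table.
UP_TO_AC = {
    1: "AC_BK", 2: "AC_BK",   # Background
    0: "AC_BE", 3: "AC_BE",   # Best Effort
    4: "AC_VI", 5: "AC_VI",   # Video
    6: "AC_VO", 7: "AC_VO",   # Voice
}

def classify(dscp: int) -> str:
    return UP_TO_AC[dscp_to_up(dscp)]

print(classify(48))  # CS6  (DSCP 48) -> AC_VO
print(classify(34))  # AF41 (DSCP 34) -> AC_VI
```

With this in place, a frame marked AF41 on the wired side lands in the AP's video queue automatically, which is exactly the end-to-end prioritization described above.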
You mention that there are four "access categories" - Background, Best Effort, Video and Voice.
Can you please explain a bit more here? In particular, how do "voice" and "video" differ if both are real-time? Also, does "video" assume that it is real-time or non-real-time (streaming)?
As per the WMM spec, each access category (a queue in the radio) is characterized by parameters that determine when the packets in that category are transmitted over the air, essentially controlling the priority. By default, WMM gives voice higher priority than video.
The WMM parameters can be tuned to the application's requirements, whether the video is real-time or non-real-time. Motorola, for example, distinguishes between interactive and streaming video and gives higher priority to the former. The retry mechanisms are also less aggressive for interactive/real-time video, since a late retransmission is of little use to a real-time stream. The ability to distinguish between real-time and non-real-time video helps optimize the handling of these applications over the wireless network.
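A hypothetical sketch of that policy choice, assuming the two flow types can be told apart. The queue names and retry limits here are invented for illustration and are not Motorola's actual values:

```python
from dataclasses import dataclass

@dataclass
class VideoPolicy:
    queue: str
    retry_limit: int

def policy_for(flow_is_interactive: bool) -> VideoPolicy:
    """Pick per-flow handling based on whether the video is interactive."""
    if flow_is_interactive:
        # A late retransmission is useless for real-time video, so give
        # up early and let the codec conceal the loss.
        return VideoPolicy(queue="AC_VI_interactive", retry_limit=2)
    # Streaming video is buffered on the client, so retries are cheap
    # and worth attempting.
    return VideoPolicy(queue="AC_VI", retry_limit=7)
```

The design point is the asymmetry: interactive video trades reliability for latency, while streaming video can afford the extra retries.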
Playing devil's advocate to a certain extent: when you say that mapping wired-side Layer 2 or Layer 3 traffic priorities to the video access category enables end-to-end prioritization, doesn't this work only if the endpoints are wired?
I can see how this would be helpful, for example, when streaming video from YouTube, because the video would be prioritized "half duplex" on the way from YouTube to the device.
But that traffic is streamed, in which case (imho) performance is not such a big issue, and I'm not sure what this gets me in a full-duplex video chat.
The mapping of Layer 2 and Layer 3 priorities works with wireless endpoints.
In the wired-to-wireless direction, all packets from the wired side are mapped to the appropriate access category (AC) by the wireless access point (AP) and transmitted at the corresponding priority level. For example, when a wireless client is viewing a YouTube channel, the AP receives video traffic from the wired side marked with the appropriate priority; those packets go to the video queue and get priority over best-effort and background transmissions.
In the wireless-to-wired direction, the application on the client device should mark its packets with the right priority. Based on that marking, the client places the video packets in the video queue and schedules them for transmission. For example, if the application is video chat and its packets are marked appropriately, they will be queued in the video queue and, per the WMM spec, will get priority over best-effort and background traffic.
Every 802.11n client should have WMM enabled by default, per the spec. This ensures end-to-end prioritization for the interactive video-chat use case.
Something to note here about mapping L2/L3 priorities: tagging uplink traffic at L2 (802.1p) requires that each AP support VLAN tagging. Not doing so breaks end-to-end L2 QoS.
http://www.aerohive.com/resources/multimedia/QoS_part1.html
http://www.aerohive.com/resources/multimedia/QoS_part2.html
http://www.aerohive.com/resources/multimedia/QoS_part3.html
How do options added by 802.11n, like channel bonding, block acks, frame aggregation, and transmit beamforming, benefit video?
Channel bonding increases the operating channel width to 40 MHz, providing more capacity for HD streaming. It is more useful in the 5 GHz band, where more channels are available. (Channel bonding in the 2.4 GHz band, which has just three non-overlapping channels, is not recommended.)
With frame aggregation, the idle time between frames is reduced. Also, if every frame is individually acknowledged, bandwidth is wasted, because the ACKs are sent at the highest basic rate (24 Mbps in a/g networks), which is much lower than the data rate.
Block ACK enables an entire aggregated burst to be acknowledged with a single frame, saving precious airtime and leaving more bandwidth available for HD video streaming.
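A back-of-envelope airtime comparison shows why this matters. The timing constants below are rough, round-number assumptions rather than exact 802.11 values, but the shape of the result holds:

```python
# Airtime comparison: per-frame ACKs vs. one block ACK for an A-MPDU
# of 32 frames. Timings are illustrative round numbers, not exact
# 802.11 values.

DATA_RATE = 300e6    # 802.11n rate, 40 MHz / 2 streams (bits/s)
ACK_RATE = 24e6      # basic rate used for control frames (bits/s)
FRAME_BITS = 1500 * 8
ACK_BITS = 14 * 8
PREAMBLE_US = 20.0   # PHY preamble per transmission (assumed)
SIFS_US = 16.0
N = 32               # frames per aggregate

def us(bits, rate):
    """Transmission time in microseconds."""
    return bits / rate * 1e6

frame_us = PREAMBLE_US + us(FRAME_BITS, DATA_RATE)
ack_us = PREAMBLE_US + us(ACK_BITS, ACK_RATE)

# Every frame individually acknowledged: preamble + data + SIFS + ACK + SIFS.
per_frame = N * (frame_us + SIFS_US + ack_us + SIFS_US)

# One A-MPDU (single preamble) answered by one ~32-byte block ACK.
block_ack_us = PREAMBLE_US + us(32 * 8, ACK_RATE)
aggregated = PREAMBLE_US + N * us(FRAME_BITS, DATA_RATE) + SIFS_US + block_ack_us

print(f"per-frame ACKs:     {per_frame:.0f} us")
print(f"A-MPDU + block ACK: {aggregated:.0f} us")
```

Under these assumptions the aggregated burst with a single block ACK uses well under half the airtime of 32 individually acknowledged frames, and that reclaimed airtime is what makes room for HD video.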
The net effect of beamforming is to improve the overall signal level at the client. Transmit (Tx) beamforming techniques fall into two categories: explicit and implicit. Explicit beamforming requires client-side support, and today few deployed clients support it. Implicit beamforming does not require client support but is suboptimal, since there is no feedback from the client.
With real-world multipath and mobile clients, the Tx beamforming system gain is really only about 2 to 3 dB, if it works properly. An AP with high Tx power or a good antenna would suffice in most cases.
Overall, while Tx beamforming certainly has some merits, we do not see it benefiting video much in a well-designed network that already provides good signal quality where the clients operate.
Manju is right about TxBF. It's not used at short range because clients are already at max data rates, and not used at long range because there are too many reflective paths for the DSP to decode. That means it's only good at mid range, and only really useful if it's supported on both sides (client and AP).
Even if it is supported, the net effect is only about 2-3 dB, which may buy you one data-rate step if you're lucky. I see Cisco advertising their proprietary version of TxBF as adding "65%", but that's just marketing spew, since that theoretical 65% is a bump of one data rate (e.g., 12 Mbps to 18 Mbps) in the best case. Until the Wi-Fi Alliance gives us a certification that includes TxBF, it's not worth much.
Additionally, using TxBF means you can't use multiple spatial streams in the current three-spatial-stream silicon (the first generation to support TxBF), so TxBF is only good at slow data rates. Which do you prefer: fast data rates using multiple spatial streams, or a tiny bit better signal with a single spatial stream? :)
Devin,
Doesn't MIMO/MRC on the receiver recoup the impact of multipath whether or not the signal was beamformed, and wouldn't that also extend the higher data rates farther out to the edge?