We often take wireless communications for granted, without considering the underlying complexity and technological barriers. It's worth taking a closer look at those aspects.
I was on a plane recently, sitting across the aisle from a young lady, who was demonstrably proud of her Palm VII, which she was showing off to the flight attendant. As the plane was getting ready to take off, I noticed that they didn't make her put it away. I asked her if that was normal. “I probably should,” she said, “but I am not on the Internet. I am just reading my e-mail.”
This perplexed me, since she was actively downloading her e-mail at the time. I asked her if she realized that the device was very similar to a cell phone. “Oh, you are mistaken,” she replied, somewhat condescendingly. “Cell phones use WAP. This uses HTML.”
I realized several things simultaneously. First of all, she had no concept of how the device worked. Secondly, her ignorance did not stop her from being able to use it. Finally, there was a chance that I was not going to survive the flight as a result.
I have seen other less extreme examples of misunderstanding the underlying architecture of wireless communications. In the interest of self-preservation and in the hope that programmers will write better applications for wireless devices, I provide this simplified explanation of the complexity beneath the beauty.
Signals in space
The most basic level of communications is the capability to send a signal from one place to another. When a wire runs between two devices, they simply have to agree on voltage levels. When there is no wire, things get a little more complicated.

Journey back with me to the days of Physics 101. Specifically, remember the section on electricity and magnetism: a changing current radiates electromagnetic waves, while a constant one does not. This means data can be transmitted wirelessly, but it has to be done differently than in wired communications. A constant voltage on a wire can be read at any time; achieving the same effect without the wire requires a constantly changing signal. See Figure 1.
Figure 1: Wireless transmission requires a constantly changing signal
This is a basic concept, but it is important to the understanding of how wireless data communications works. Wired communications are based on mappings of voltage levels to symbols. For example, 5V may indicate the binary symbol “1,” while 0V indicates the symbol “0.” The signaling on wireless communications involves manipulating the characteristics of a sinewave in such a way that messages can be sent on it.
Three relevant characteristics of sinusoidal signals can be manipulated to communicate. They are amplitude (how loud the signal is), frequency (how fast it is wiggling), and phase (where it is in the sine curve). Each of these can be used, but each has its advantages and disadvantages.
Amplitude modulation (AM) is relatively simple to produce. You simply change the broadcast power to indicate different symbols. This is conceptually similar to the voltage level changes in wired communications, and is shown in Figure 1. It worked nicely for analog communications such as commercial radio in its early days.
The problem with amplitude modulation is that amplitude will also change due to a number of factors. Imagine trying to send voltage levels through a wire that intermittently and arbitrarily changed its resistance in different sections of the wire. That is what wireless is all about. Analog communications can get away with this; we are all familiar with the bursts of static that are typical on car radios. Unfortunately, digital communications are much less forgiving of such disruptions.
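For the curious, the AM idea can be sketched in a few lines of Python. This is an illustrative toy, not a real modem: each bit selects one of two carrier amplitudes, and the receiver recovers the bit by measuring the envelope power of each symbol period. The specific amplitudes, frequencies, and threshold are arbitrary choices for the demonstration.

```python
import numpy as np

def am_modulate(bits, carrier_freq=10.0, samples_per_bit=100, sample_rate=1000.0):
    # Each bit selects a carrier amplitude: 1.0 for "1", 0.3 for "0"
    t = np.arange(len(bits) * samples_per_bit) / sample_rate
    amplitude = np.repeat([1.0 if b else 0.3 for b in bits], samples_per_bit)
    return amplitude * np.sin(2 * np.pi * carrier_freq * t)

def am_demodulate(signal, samples_per_bit=100):
    # Recover each bit by measuring the RMS envelope power of its symbol
    # period and comparing it to a threshold between the two amplitudes
    chunks = signal.reshape(-1, samples_per_bit)
    power = np.sqrt(np.mean(chunks ** 2, axis=1))
    return [1 if p > 0.65 * power.max() else 0 for p in power]

bits = [1, 0, 1, 1, 0]
recovered = am_demodulate(am_modulate(bits))
```

Note that the demodulator cannot tell a deliberately weakened symbol from one attenuated by the channel, which is exactly the AM weakness described above.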
Frequency modulation (FM) represented a major improvement in commercial radio. Information encoded in frequency changes is less susceptible to noise, especially if the range of frequencies used is relatively narrow. Signals that are close in frequency tend to propagate similarly, so it is much easier to pick up such a signal on the receiving side and extract the information relatively accurately. As a result, FM radio quickly became the preferred way of sending music.
FM works very well as long as there is abundant bandwidth; unfortunately, it is relatively inefficient in terms of information per unit of spectrum. Each signal must occupy a much wider chunk of the spectrum than it would if it were confined to a single frequency.

Phase modulation (PM) is a bit more esoteric than the other two alternatives. The idea is to skip around in the phase of the output signal to indicate different symbols. The signal still transmits, because changes in voltage are still occurring, yet it occupies a much narrower slot in the precious frequency spectrum.
Another interesting thing about PM is that it is inherently much more digital than the AM or FM alternatives. Each of those can be modified easily in a continuous analog spectrum. PM changes are much more abrupt, making it less capable of carrying analog signals.
PM communication has been heavily utilized for digital communications. It is used by the 802.11 standards, for example. A sequence of “keying” protocols is used to indicate the transmission of data using PM. Communications gurus speak knowingly of BPSK, QPSK, and M-PSK. Each of these is simply a convention for encoding digital data into a signal using phase-modulated communications. BPSK is binary phase shift keying, where there are two possible states; QPSK is quadrature phase shift keying where there are four; and M-PSK is multilevel PSK, where there might be a higher number, depending on how well the transmission is being received.
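To make the keying idea concrete, here is a toy BPSK modem in Python. The carrier parameters and noise level are arbitrary, and a real receiver also has to recover carrier timing and phase, which this sketch sidesteps by sharing the reference carrier between both ends.

```python
import numpy as np

def bpsk_modulate(bits, samples_per_symbol=100):
    # BPSK: bit 0 is the carrier at phase 0; bit 1 is the carrier shifted
    # by pi, which is simply a sign flip
    t = np.arange(samples_per_symbol) / samples_per_symbol
    carrier = np.sin(2 * np.pi * t)  # one carrier cycle per symbol
    return np.concatenate([-carrier if b else carrier for b in bits])

def bpsk_demodulate(signal, samples_per_symbol=100):
    # Correlate each symbol against the reference carrier; the sign of
    # the correlation reveals the transmitted phase, and thus the bit
    t = np.arange(samples_per_symbol) / samples_per_symbol
    carrier = np.sin(2 * np.pi * t)
    symbols = signal.reshape(-1, samples_per_symbol)
    return [1 if np.dot(s, carrier) < 0 else 0 for s in symbols]

bits = [0, 1, 1, 0, 1]
rng = np.random.default_rng(1)
noisy = bpsk_modulate(bits) + rng.normal(0, 0.3, len(bits) * 100)
recovered = bpsk_demodulate(noisy)
```

Even with substantial added noise the correlation comes out strongly positive or negative, which is why PSK holds up so well for digital data. QPSK extends the same trick to four phases, carrying two bits per symbol.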
AM and FM communications also have such coding schemes for digital data. QAM refers to quadrature amplitude modulation, and BFSK means binary frequency shift keying. This rat's nest of acronyms is one of the things that gets in the way of understanding wireless communications. I'm probably breaking union rules, but hopefully this demystifies what's really going on.
Why it doesn't always work
It seems pretty straightforward so far. We can send signals wirelessly, and we can send information on those signals. Life is good.
Now we get to the messy part. One of the nice things about software is that if we can get it to work once, it will probably always work. Software exists in this digital wonderland where everything always happens predictably. One plus one always equals two. Of course, software is complex, but when I hear software people complaining about the complexity of maintaining dozens of tasks on a system, I think about the problems of wireless communications.
Wireless communication did not spring into existence when we discovered how to exploit it. We swim in a veritable sea of electromagnetic waves generated by the sun, fluorescent lights, and even our own bodies. Most of these are negligible in and of themselves, but they sum to create background noise in the wireless spectrum. This is referred to as additive white Gaussian noise.
This background noise conspires with path loss (see the next section) to limit the range of wireless communications. When the level of the broadcast signal falls to near the level of this noise, it becomes difficult to extract the signal. This effectively puts a floor on the power of a signal, below which it is very difficult to do anything useful.
In general, the further apart the transmitter and the receiver are, the harder it is to send a signal between them. This concept is pretty intuitive, but the reasons are not as simple as you might think. Signals do not get tired traveling the distance from the transmitter, but they do tend to spread over a larger area. The apparent power of a signal from a point source decreases with the square of the distance simply because the same energy is spread over an ever-expanding spherical surface centered on the transmitter. This is referred to as large-scale path loss and is similar to the dimming effect of distance on a point source of light (think of a street light).
Other factors can introduce loss as well. Anything in the line of sight between the transmitter and the receiver can block all or part of the signal. The effect is made even more interesting by the fact that different frequencies are blocked to varying levels by particular materials. A path that is relatively transparent to a microwave signal may be completely blocked to an infrared signal.
Consequently, particular frequency ranges are better suited to some types of communications than others. For example, very low frequencies are used to communicate with submarines, because long wavelengths do a much better job of penetrating water than microwaves. In fact, microwave ovens work at around 2.4GHz precisely because that frequency is readily absorbed by water, the better to heat up your food!
Objects can introduce other problems as well. Not only do they block signals, but they also reflect them. On one hand, this is a blessing, because it is possible to receive reflected signals in places you would not expect. For example, reflections can allow reception between office buildings or under overpasses that completely block a direct signal.

The problem that occurs when signals arrive via this kind of complex bank shot is that the length of the path can vary tremendously. The parts of a signal that reach a receiver may vary significantly in phase.
The effect is most familiar in analog television signals that have ghost images offset a few inches to the right. Ghosting is much less prevalent in these days of cable and satellite TV but is quite common with signals received over the air. The most common cause is a mountain behind the receiver (from the point of view of the transmitter) that causes a weaker copy of the signal to arrive later than the original.
Multipath effects are a major problem for digital communications. They are a major factor limiting the throughput of wireless communications. The real difficulty is when the delay causes previous symbols to interfere with the one currently being processed. These data ghosts are referred to as inter-symbol interference (ISI). If the data rate is increased, the symbols come closer together and ISI becomes even more of a problem.
Much of the signal processing in modern digital wireless systems involves compensating for ISI. The calculations are complex and arcane and eat up a lot of processor cycles. Consider the dynamics of a cell phone signal being received on a busy highway. Not only is the phone itself moving fast, but it is surrounded by cars and trucks that reflect signals every which way. It is amazing that it is possible to use a cell phone on the highway, setting completely aside the question of whether or not it is a good idea.
In some ways, interference is the simplest problem. It has been compared to the situation of people at a cocktail party carrying on a series of conversations at once. They all share the same communications spectrum, but somehow they still manage to exchange information. Most of the communications spectrum is tightly regulated. This allows critical functions like emergency communications and commercial operations like television and radio to operate relatively free from contention from other man-made sources. Anything that emits radiation in these heavily regulated portions of the spectrum must pass inspection, even if that radiation is unintentional.
Devices that intend to use this spectrum must also pass inspection and may require licensing, which involves not only validation of appropriate power levels on broadcasts, but also adherence to protocol standards. FCC validation of a device is a significant milestone in the development of any complex electronic device to be sold into the mass market, whether that device intends to communicate or not.
Portions of the spectrum are less regulated. These are the so-called industrial, scientific, and medical (ISM) bands. They allow specialized devices in these fields, as well as consumer devices like portable telephones and garage door openers, to be created without the heavy validation of the restricted frequency ranges.
Note that the ISM bands are not a complete free-for-all. Strict limits are placed on the maximum power and range of broadcasts on these frequencies. The same factors, such as fading, that restrict the range of communications are what make this shared use of the spectrum possible. For example, the 802.11 wireless Ethernet standard is based on the 2.4GHz ISM band, as are many cordless telephones. As you might imagine, it's tough to use both of these types of devices at the same time, but that usually is not a problem, since the former is mostly found in office buildings and the latter in homes. Of course, the popularity of wireless devices is beginning to cause these collisions to happen more frequently. My house has an 802.11b wireless network, a 2.4GHz cordless phone, and a microwave oven that all use this frequency range. Granted, I am a serious early adopter, but this problem is only going to get worse in the future.
Coding to increase reliability
After going over factors like those presented in the previous section, we start to understand why wireless communications can be so difficult. In fact, we start wondering how it ever works at all.
The error rate in wireless communications is much higher than in its wired equivalents. This is a key point that is often ignored by programmers who insist on treating wireless devices like wired ones. Successful applications in this arena must account for the rapidly changing nature of the medium.
Wired communications typically force a retransmit of information based on the corruption of as little as a single bit of the transmitted information. This approach applied to wireless communications would result in overwhelming numbers of retransmissions. Something different has to be done.
The philosophy of wireless communications has been to treat it as an unreliable transmission medium. The emphasis is on error prevention, rather than simply detection. One way to do this is to encode the information sent over a wireless link. An 8-bit number will typically be encoded into a 12- or 16-bit number. The trick is to spread the 256 values of the data byte as widely as possible over the larger range of values. The receiver then decodes each received value into the data byte whose encoding is closest to it.

An example of such an encoding scheme is shown in Table 1. Any single bit in an encoded value can change without causing it to decode into a wrong data value. The minimum number of bit positions in which any two encoded values differ is called the Hamming distance of the code. The trick is to balance factors like Hamming distance, required data throughput, and the complexity of the decoding algorithm against each other to achieve acceptable throughput and reliability of communications.
|Table 1: A symbol set with a Hamming distance of 3|
|Example encodings of the numbers 0 through 15|
|0: 0000000|4: 0100111|8: 1000101|12: 1100010|
|1: 0001011|5: 0101100|9: 1001110|13: 1101001|
|2: 0010110|6: 0110001|10: 1010011|14: 1110100|
|3: 0011101|7: 0111010|11: 1011000|15: 1111111|
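The codewords in Table 1 can be exercised directly. The following Python sketch implements nearest-codeword decoding over them; the choice of transmitted value and flipped bit is arbitrary:

```python
# The sixteen codewords of Table 1; any two differ in at least three
# bit positions, so a single flipped bit still decodes correctly
CODEWORDS = [0b0000000, 0b0001011, 0b0010110, 0b0011101,
             0b0100111, 0b0101100, 0b0110001, 0b0111010,
             0b1000101, 0b1001110, 0b1010011, 0b1011000,
             0b1100010, 0b1101001, 0b1110100, 0b1111111]

def hamming_distance(a, b):
    # Number of bit positions in which two words differ
    return bin(a ^ b).count("1")

def decode(received):
    # Nearest-codeword decoding: choose the data value whose codeword
    # is closest (in Hamming distance) to what actually arrived
    return min(range(16), key=lambda v: hamming_distance(CODEWORDS[v], received))

# Transmit the value 5 (0101100) but flip one bit in transit
corrupted = CODEWORDS[5] ^ 0b0000100
recovered = decode(corrupted)
```

Real decoders do not brute-force a table search like this toy does, but the principle of snapping a corrupted word to its nearest legal neighbor is the same.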
The decoding algorithm is not to be taken lightly. Wesel's book has a very good overview of Viterbi decoding and other techniques used to significantly improve the error rate of wireless digital communications.1 This algorithm is one of the underlying technologies that allow wireless communications to be a viable vehicle for applications, but the nicest part is that it is generally built into the communications protocols. It is interesting to read about if you are into such things, but it can also be ignored as a problem that others have already solved for you. The key point is that error-free communication can take place even if some of the data gets hosed along the way, thanks to the redundancy of the encoded data.
Encoding data helps reduce the error rate, but it still would be too high if not for a technique that was developed during World War II to control torpedoes. Believe it or not, the concept was thought up during a cocktail party by the actress Hedy Lamarr and a composer named George Antheil. They had the foresight to patent it, although the patent expired long before it could be used for digital communications.
The basic idea is to broadcast a signal over a range of frequencies. The technique was envisioned primarily as a security measure, since the signal would be difficult to jam or intercept, but it turns out to also be a very elegant way to send robust signals at much lower power levels than would otherwise be necessary. This technique builds nicely on the encoding described in the previous section, since the redundancy combines well with spreading the signal across multiple frequencies. The result is a signal that begins to approach the cleanliness required to build a digital data path.
There are two approaches to implementing this idea, each of which is in use in current RF standards for digital data. Let's take a look at them.
Frequency hopping is the simplest implementation of the idea of spreading the signal across the spectrum. The concept is illustrated in Figure 2, which shows two separate ways of doing frequency hopping spread spectrum (FHSS).
Figure 2: Frequency-hopping spread spectrum (FHSS)
The first of these sends three bits of the data packet at one frequency. The transmitter then switches to the next predetermined frequency and the receiver does the same. The next group is sent, and the frequency hops again. This is shown in Figure 2a. This technique is used for the FHSS flavor of the 802.11 networking standard.
The second way of implementing this technique hops faster than the data rate itself. The result is that one data bit will be sent sequentially across several frequencies. This is shown in Figure 2b. This technique is used by Bluetooth, which hops 1,600 times each second.
The reasoning behind the faster scheme is fairly simple. As noted earlier, signals sent on different frequencies will be received at different levels of quality at any point in time. Frequencies that pass a clear signal at one moment may become completely blocked in the next. FHSS allows the data to be broken up sequentially and sent via multiple paths. This greatly improves the possibility that most of it will get through.
The tricky part of implementing this technique is that the transmitter and receiver must switch simultaneously to a new frequency (preferably the same one). Within a shared network environment the various devices sharing the bandwidth should also make sure that any two or more transmitters don't switch to the same frequency at the same time. This real-time juggling act is generally the responsibility of a central network hub.
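One common way to keep both ends in lockstep is to derive the hop schedule from shared state, so it never has to be transmitted. Here is a minimal Python sketch of that idea; note that real systems such as Bluetooth derive the sequence from the master's device address and clock rather than from a simple seeded PRNG, and the 79-channel count below merely echoes Bluetooth's channelization:

```python
import random

def hop_sequence(shared_seed, n_channels=79, n_hops=10):
    # Both ends seed identical PRNGs, so they compute identical hop
    # schedules without ever sending the schedule over the air
    rng = random.Random(shared_seed)
    return [rng.randrange(n_channels) for _ in range(n_hops)]

tx_hops = hop_sequence(shared_seed=42)
rx_hops = hop_sequence(shared_seed=42)
```

As long as the two radios also agree on when each hop occurs, they meet on the same channel every time while appearing to wander randomly to everyone else.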
Direct sequence communication is a little more complex than frequency hopping. The idea is to transmit a signal across a wide band of frequencies at a much lower power per frequency. The twist is that before broadcasting it, the signal is XORed with a number called the chipping sequence. As a result, each bit will become at least 11 bits in commercial systems. (Military systems may multiply the signal by as much as 1,000 chip bits for increased security.) The resulting signal is then broadcast at low power across a range of frequencies.
This may sound like a tremendous waste of bandwidth, but you must realize that multiple broadcasters can simultaneously use those same frequencies. The key is the chipping sequences. Assuming the transmitter and receiver are using the same sequence, the receiver will be able to recover most of the transmission. Again, in RF digital communications the receiver matches the signal to the closest possibility (in the Hamming distance sense) rather than using it precisely. Assuming that multiple transmitters are not using the same chipping sequence (a big no-no), the result will be data extracted magically from what appears to be random background noise. This sequence of events is illustrated in Figure 3.
Figure 3: Direct sequence spread spectrum (DSSS)
This technique is used in the 802.11b variant of the 802.11 wireless networking standard, which has a data throughput of 11Mbps. It is also a key feature of the CDMA standard for cellular service.
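The chipping operation itself is easy to demonstrate. The sketch below uses the 11-chip Barker sequence employed by 802.11 DSSS, written here as 0/1 chips rather than the usual +1/-1 convention, and a majority-vote despreader, which is a simplification of the real correlation receiver:

```python
# The 11-chip Barker sequence used by 802.11 DSSS, as 0/1 chips
BARKER_11 = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0]

def spread(bits):
    # XOR each data bit with every chip of the sequence: one bit in,
    # eleven chips out
    return [b ^ c for b in bits for c in BARKER_11]

def despread(chips):
    # Compare each 11-chip group against the sequence; a strong match
    # means the bit was 0, a strong inverse match means it was 1
    out = []
    for i in range(0, len(chips), 11):
        group = chips[i:i + 11]
        matches = sum(g == c for g, c in zip(group, BARKER_11))
        out.append(0 if matches > 5 else 1)  # majority vote over 11 chips
    return out

bits = [1, 0, 1]
chips = spread(bits)
chips[4] ^= 1  # a single corrupted chip does not flip the recovered bit
```

The eleven-fold redundancy is what lets the receiver pull a clean bit out of a group of chips even when some of them arrive damaged.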
It's an analog world
We have covered a lot of ground in this article, though admittedly at a pretty high level. We now have the pieces in place to communicate in the nastily analog RF world with enough reliability to create a reasonably error-free digital data path out of it. A lot of complexity is inherent in the implementation of the ideas I have described here. Consult the list of references at the end for more information on specific techniques.
The best part of the pyramid of techniques I have described is that most of the time it just works. It is possible to do many useful things using wireless digital communications without having to understand anything I have described here. The young lady I mentioned at the beginning of this article was quite capable of getting her e-mail, although I am quite sure she has no concept of the complexity of the technology she is wielding.

Larry Mittag is the chief technologist for Stellcom. Larry is also a columnist for Communications System Design and a contributing editor of Embedded Systems Programming.
1. Wesel, Ellen Kayata. Wireless Multimedia Communications. Reading, MA: Addison-Wesley, 1998.
2. Schiller, Jochen. Mobile Communications. Reading, MA: Addison-Wesley, 2000.
3. Goodman, David J. Wireless Personal Communications Systems. Reading, MA: Addison-Wesley, 1997.
4. Viterbi, Andrew J. CDMA: Principles of Spread-Spectrum Communication. Reading, MA: Addison-Wesley, 1995.