
Break Points

Radio Days

Jack G. Ganssle

Of late, the government has been doing some good things for technologists. But maybe we're doing ourselves a disservice by overspecializing.

“Hi, I'm from the government and I'm here to help you!” We cringe when bureaucrats extend their helping hands–hands that usually take more than they give. Yet in the past few months Big Brother has delighted a lot of its citizens.

In May 2000 the Department of Defense turned off selective availability, the “feature” in the GPS system that deliberately introduced position errors into what is otherwise an amazingly accurate system. The military once worried that giving the world high-accuracy GPS increased the threat of accurate bombing of American cities. With our traditional enemies gone, and with the knowledge that the DOD can indeed turn off GPS to any particular area at any time, selective availability was clearly an anachronism.

I, for one, have always been infuriated that my tax dollars were used both for reducing the accuracy of GPS and for cheating the system to recover the lost accuracy. For the Coast Guard has long offered differential GPS, a system that restores the lost accuracy by transmitting correction data from sites whose positions have been determined with great care.

So now a little $100 navigation set can show your location to better than 15 meters–sometimes much better. This week my unit was showing an estimated position error of just one meter.

Perhaps this new precision will change many of the embedded systems we're building now. Car navigation units, for instance, often watch for abrupt course changes–like turning a corner–to determine that the vehicle is right at a particular intersection. That technique may no longer be needed.

In December 1999, the FCC also gave some of the country's citizens a gift when they amended the license structure for ham radio operators. The ruling affected many aspects of the hobby, but most profoundly, it eliminated the insanely high Morse code speed requirements. Previously, any ham operator wishing to transmit voice in most of the HF bands (under 30MHz) had to pass a code test at thirteen words per minute. The “Extra” license, ham radio's peak of achievement, required twenty wpm.

No doubt lots of you have struggled with the code. Most folks quickly reach a plateau around 10wpm. Getting to thirteen wpm requires an awful lot of effort. With the stroke of a pen the FCC capped the requirement at five wpm for all license grades. Five wpm is so slow–about two seconds per character–that anyone can quickly pass the test just by memorizing the dits and dahs.

Ham radio's ranks have been thinning for decades, partly due to the difficulty of passing the code test, and partly due to young people's fascination with computers. In the olden days, ham radio was the Internet of the age; technically oriented people played with radios because computers were unobtainable. Now computer obsession and cheap worldwide communication supplant most folks' willingness to struggle with making contacts on noisy, unreliable HF frequencies.

Though I've had a license for decades, my fascination for working with radios died long ago. It's pretty cool to make contact with someone a continent away, but it's so much easier to pick up the phone or pop out an e-mail. Being busy, I've little time or desire to contact a more or less random person just to chat. Too much like a blind date. I'd rather spend time talking with friends and neighbors.

But when sailing I do find the ham bands useful since it's about the only way to talk to pals on other, distant boats, or to get messages to friends and family ashore. At sea, 1,000 miles from land, that radio suddenly becomes awfully compelling.

Today we're surrounded by radio transmissions, from the diminishing ranks of ham signals, to the dozens of FM-stereo stations in every market, to high-powered AM talk shows, TV stations, Bluetooth-enabled systems chatting with each other, GPS signals beamed from space, and, of course, the ubiquitous cell phone. Wireless is the future. We're wrapped in a dense fog of electromagnetic radiation.

Magazines abound with stories of these wireless marvels, yet I suspect that the majority of embedded developers have little to do with radio-based devices. That's a shame, since the technology underlying radio has much to offer even non-wireless developers.

The guts of radio
Think of that pea soup fog of electromagnetic waves that surrounds the planet. An antenna funnels all of it into your radio, an incredible mush of frequencies and modulation methods that amounts to an ineffable blather of noise. Where is that one rock 'n' roll station you're looking for in all of the mess? How can a $30 portable FM receiver extract ultra-high fidelity renditions of Weird Al's scatological riffs from the noise?

Today's radios invariably use a design called the superheterodyne, or superhet for short. Heterodyning is the process of mixing two AC signals to create new signals at their sum and difference frequencies; the lower difference frequency is easier to amplify and use.

The radio amplifies the antenna's output just a bit and then dumps it into a mixer, where it's multiplied by a simple sine wave (produced by what's called a local oscillator) near the frequency of the station you'd like to hear. The mixer's output contains signals at the sum and the difference of the incoming frequency and the local oscillator's frequency. Turning the dial changes the frequency of the local oscillator.

Suppose you'd like to hear Weird Al on 102MHz. You set the dial to 102, but the local oscillator might actually produce a sine wave about 10MHz lower–92MHz–resulting in a 10MHz difference frequency and a 194MHz sum (which gets rejected). Thus, the mixer outputs a copy of the station's signal at 10MHz, no matter where you've tuned the dial.
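
If you want to see the arithmetic in action, here's a minimal sketch of my own (not part of any real receiver) with everything scaled down to audio-range frequencies so an ordinary sample rate suffices: a 102Hz tone stands in for the 102MHz station and a 92Hz tone for the local oscillator. All the constants are illustrative.

/* A scaled-down mixer: a 102Hz "station" stands in for 102MHz,
 * a 92Hz tone for the 92MHz local oscillator.  Multiplying them
 * puts energy at the 10Hz difference and the 194Hz sum only. */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define FS 1000.0               /* sample rate, Hz        */
#define N  1000                 /* one second of samples  */

/* Crude single-bin DFT: how much of frequency f is in x? */
static double tone_level(const double *x, double f)
{
    double re = 0.0, im = 0.0;
    for (int n = 0; n < N; n++) {
        double ph = 2.0 * M_PI * f * n / FS;
        re += x[n] * cos(ph);
        im += x[n] * sin(ph);
    }
    return sqrt(re * re + im * im) / N;
}

int main(void)
{
    static double mixed[N];

    for (int n = 0; n < N; n++) {
        double t = n / FS;
        double station = cos(2.0 * M_PI * 102.0 * t);   /* the "RF" input   */
        double lo      = cos(2.0 * M_PI *  92.0 * t);   /* local oscillator */
        mixed[n] = station * lo;                        /* the mixer        */
    }

    printf("level at  10 Hz (difference): %.3f\n", tone_level(mixed,  10.0));
    printf("level at 194 Hz (sum):        %.3f\n", tone_level(mixed, 194.0));
    printf("level at  50 Hz (elsewhere):  %.3f\n", tone_level(mixed,  50.0));
    return 0;
}

Run it and the difference and sum bins show equal energy while the 50Hz bin shows essentially none; the filter described next simply keeps the first and discards the rest.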

A filter then rejects everything other than that 10MHz signal. The air traffic controller on 121MHz, the trucker's CB at 27MHz, and adjacent FM stations all disappear due to the filter's selectivity. All that other stuff–all of that noise–is gone. More amplifiers boost the signal, another mixer drops the frequency even more, and a detector strips away the RF, putting nothing more than unadulterated Weird Al out of the speakers.

But how does the technology behind radio affect embedded systems? I've found it to be one of the most useful ways to eliminate noise coming from analog sensors, particularly from sensors operating near DC frequencies.

Consider a scale, the kind that weighs packages or people. A strain gauge typically translates the load into a change in resistance. Feed a bit of current through the gauge and you can calculate weight pretty easily. The problem comes in trying to measure the sample's weight with many digits of resolution–at some point, system noise overwhelms the signal.
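
The DC approach really is that simple. Here's a minimal sketch of the arithmetic, with made-up excitation current, gauge resistance, and calibration constants, just to show the Ohm's-law step that the rest of this column is trying to protect from noise.

/* A minimal sketch of the DC measurement, with invented constants:
 * 1mA of excitation, a 350 ohm gauge, and a hypothetical
 * calibration slope relating resistance change to weight. */
#include <stdio.h>

#define EXCITATION_A 0.001      /* excitation current, amps (assumed)  */
#define R_UNLOADED   350.0      /* nominal gauge resistance, ohms      */
#define OHMS_PER_KG  0.007      /* calibration slope (hypothetical)    */

/* Convert one voltage reading across the gauge into kilograms. */
static double weight_kg(double volts)
{
    double r_now = volts / EXCITATION_A;    /* Ohm's law: R = V / I */
    double delta = r_now - R_UNLOADED;      /* change due to strain */
    return delta / OHMS_PER_KG;
}

int main(void)
{
    /* 0.35035V implies 350.35 ohms: a 0.35 ohm shift, or 50kg with
     * the calibration constant above. */
    printf("weight = %.1f kg\n", weight_kg(0.35035));
    return 0;
}

Notice that the whole measurement rides on a few hundred microvolts of change across the gauge, which is exactly where the trouble starts.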

Noise has all sorts of sources. That sea of radio signals gets coupled into the strain gauge's wiring. So do distant lightning strikes. The analog sensing electronics itself inherently adds noise to the signal. The challenge is to reduce these erroneous signals to extract as much meaningful data as possible.

In their anti-noise quest, analog designers first round up all of the usual suspects. They shield all sensor wires, twist them together to cancel common-mode signals, and wrap mu-metal (a magnetic shielding alloy) around critical parts of the circuit.

When the analog folks can't quite get the desired signal-to-noise ratio, they ask the firmware folks to write code that averages and averages and averages to get quieter readings. Averaging yields diminishing returns (noise falls only as the square root of the number of samples, so you must quadruple the sample count just to halve the noise) and eats into system response time. When the system finally gets too slow, we move to much more complex algorithms like convolutions, and give up some of the noise reduction in the process.
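
The square-root law is easy to demonstrate. The following sketch is a simulation, with a fabricated sensor and uniform noise rather than anyone's production code; it averages batches of noisy readings and prints the residual error, and each quadrupling of the batch size only halves it.

/* A simulation (fabricated sensor, uniform noise) showing the
 * square-root law of averaging: each quadrupling of the batch
 * size roughly halves the residual error. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* One fake reading: a true value of 100.0 plus noise in -0.5..+0.5. */
static double read_sensor(void)
{
    return 100.0 + ((double)rand() / RAND_MAX - 0.5);
}

/* RMS error of an n_avg-sample average, measured over many trials. */
static double rms_error(int n_avg)
{
    const int trials = 20000;
    double sum_sq = 0.0;

    for (int t = 0; t < trials; t++) {
        double acc = 0.0;
        for (int i = 0; i < n_avg; i++)
            acc += read_sensor();
        double err = acc / n_avg - 100.0;
        sum_sq += err * err;
    }
    return sqrt(sum_sq / trials);
}

int main(void)
{
    for (int n = 1; n <= 256; n *= 4)
        printf("average of %3d samples: RMS error %.4f\n", n, rms_error(n));
    return 0;
}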

None of these approaches are bad. In some cases, though, we can take a lesson from RF engineers and quiet the system by just not looking at the noise. Analog noise is quite broadband; it's scattered all over the frequency domain. We can hang giant capacitors on the strain gauge to create a filter that eliminates all non-DC sources, but at the expense of greatly slowing system response time (change the weight on the scale and the capacitor will take seconds or longer to charge). This sort of DC filter is exactly analogous to averaging.

It's better to excite the gauge with RF, say at 100MHz, instead of the usual DC current source. Then build what is essentially a radio front-end to mix perhaps a 90MHz sine wave with the signals, amplify it as much as is needed, and then mix the result down to a low frequency suitable for processing by the system's A/D converter.
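
In firmware terms the trick looks like synchronous detection. The sketch below is a scaled-down simulation, not the analog front end the column describes: the excitation is 1kHz rather than 100MHz, the mixing happens in software, and every constant is invented. But the principle is the same: multiply the incoming samples by a reference at the excitation frequency, average over whole cycles, and only noise sitting right at that frequency survives.

/* A scaled-down simulation of synchronous detection: the excitation
 * is 1kHz instead of 100MHz and the "mixer" runs in software, but
 * the principle is the same.  All values here are invented. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define FS        100000.0   /* sample rate, Hz                  */
#define F_EXC       1000.0   /* excitation frequency (assumed)   */
#define N_SAMPLES 100000     /* one second of data               */

int main(void)
{
    const double weight = 2.5;   /* the slow quantity we care about */
    double acc = 0.0;

    for (int n = 0; n < N_SAMPLES; n++) {
        double t   = n / FS;
        double ref = sin(2.0 * M_PI * F_EXC * t);

        /* What the A/D sees: the weight riding on the excitation,
         * buried in broadband noise ten times its amplitude. */
        double noise  = 50.0 * ((double)rand() / RAND_MAX - 0.5);
        double sample = weight * ref + noise;

        /* The mixer: multiply by the reference and accumulate. */
        acc += sample * ref;
    }

    /* Averaging sin^2 gives 1/2, so scale by two to recover the weight. */
    printf("recovered weight = %.3f (true value 2.5)\n",
           2.0 * acc / N_SAMPLES);
    return 0;
}

Even with noise ten times the size of the signal, the recovered value lands within a few percent of the true weight; everything not sitting right at the excitation frequency averages itself away, which is the whole point of not looking at the noise.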

Off-the-shelf chips implement most of the guts of a radio. These offer high “Q” factors; Q is a measure of a filter's narrowness, roughly the center frequency divided by the bandwidth it passes. One that passes just 1,000Hz of the spectrum (meanwhile rejecting all other frequencies and thus most of the noise) has a higher Q than one that passes 10,000Hz.

Morse code hung in as a viable communications method for 150 years because it is incredibly bandwidth-efficient and noise-immune. You can filter out all of the RF spectrum except for just a tiny 100Hz slice, and still copy the information intact. Virtually all noise disappears. Voice communication, by comparison, requires at least 3kHz of bandwidth, thus a much lower-Q filter, and makes the system that much more susceptible to noise.

The scale is Morse-like, since the data changes slowly. The high-Q filter yields all of the information with almost none of the noise.

A radio design also greatly simplifies building very high-gain amplifiers–your FM set converts mere microvolts out of the antenna into volts of speaker drive. It further removes the strain gauge's large DC offset from its comparatively small signal excursions.

Another example application is a color-measuring instrument. Many of these operate at near-DC frequencies since the input sample rests on the sensor for seconds or minutes. High resolution requires massive noise reduction. The radio design is simple, cheap (due to the many chip solutions now available), and quiet.

Many eons ago I worked as a technician on a colorimeter designed in the '60s. The instrument was quite fascinating: the (then) high cost of electronics resulted in a design that mixed mechanical and electronic elements. The beam of light was interrupted by a rotating bow-tie-shaped piece of plastic painted perfectly white. The effect was to change the DC sensor output into roughly a 1,000Hz AC signal. A narrow filter rejected all but this one frequency.

The known white color of the bow-tie also acted as a standard, so the instrument could constantly calibrate itself.

The same company later built devices that measured the protein content of wheat by sensing infrared light reflected from the sample. Signal levels were buried deep in the noise. Somehow we all forgot the lessons of the colorimeter–perhaps we really didn't understand them at the time–and slaved over every single instrument to reduce noise using all of the standard shielding techniques, coupled with healthy doses of blood, sweat, and tears. No other technical problem at this company ever approached the level of trouble created by millivolts of noise. Our analog amplifiers were expensive, quirky, and sensitive to just about everything other than the signal we were trying to measure.

Years of struggling with the noise in these beasts killed my love of analog. Now I consider non-digital circuits a nuisance we have to tolerate to deal with this very analog world.

The murky future
It scares me that we could have learned so little from the rotating bow-tie. I'm worried as well that increasing specialization reduces cross-pollination of ideas even from within the same industry. The principle behind radio eludes most embedded folks despite its clear benefits.

Knowledge is growing at staggering rates, with some pundits predicting that the total sum of information will soon double every decade. At this year's Embedded Executive Conference, Regis McKenna played down this issue, arguing that machines will manage the data.

I'm not so sanguine. Skyrocketing knowledge means increasing specialization. So we see doctors specializing in hand surgery, process engineers whose whole career is focused on designing new Clorox bottles, and embedded developers who are experts at C but all too often have little understanding of what's going on under the hood of their creations.

A generalist, or at least an expert in a field who has a broad knowledge of related areas, can bring some well-known techniques–well known in one field–to solve problems in many other areas. Perhaps we need an embedded Renaissance person, who wields C, digital design, and analog op amps with aplomb. The few who exist are on the endangered species list as each area requires so much knowledge. Each can consume a lifetime.

The answer to this dilemma is unclear. Perhaps when machines do become truly intelligent they'll be able to marshal great hoards of data across application domains. Our role then seems redundant, which will create newer and much more difficult challenges for Homo sapiens.

I hope to live to see some of this future, but doubt that we'll learn to deal with the implications of our inventions. Historically, we innovate much faster than we adapt.

Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. He founded two companies specializing in embedded systems. Contact him at .
