# The basics of DSP for use in intelligent sensor applications: Part 1

In earlier articles on intelligent sensor design, we saw how valuable intelligent sensors can be both to end users and to those who manufacture and sell them. It’s now time to delve more deeply into what it takes to make intelligent sensors work.

The first step in that journey is to develop a solid, intuitive understanding of the principles of digital signal processing (DSP). Unlike many introductory DSP articles and texts, the focus here will be on presenting and using the important concepts rather than deriving them, for the simple reason that addressing the subject in depth is a book-sized, not a chapter-sized, project.

Other authors have already done an excellent job of addressing the topic in a more rigorous manner [1], and our goal here is not to try to condense their work to meaningless bullet points but rather to understand how to use certain key concepts to turn raw sensor data into meaningful sensor information.

By the end of this series, the reader should be comfortable identifying the key signal processing requirements for typical applications and be able to determine the appropriate process for extracting the desired measurements.

Although this discussion of DSP isn’t as rigorous as most academic treatments of the subject, it’s essential that we establish a clear understanding of several key concepts that form its foundation.

Beginning with precise definitions of what we mean when we refer to “signals” and “noise,” the discussion moves into the analysis of signals in both the time and frequency domains and concludes with an introduction to filtering, a technique that is commonly used to extract the desired information from noisy data.

**What We Mean by Signals and Noise**

The dictionary defines the term “signal” as “an impulse or fluctuating electric quantity, as voltage or current, whose variations represent coded information” [2], and this definition serves well as a starting point.

One interesting characteristic of electronic signals is that they operate under the principle of superposition. This principle states that the value of two or more signals passing through the same point in the same medium at a particular point in time is simply the sum of the values of the individual signals at that point in time.

For example, if we had N different signals V_{1}(t), V_{2}(t), …, V_{N}(t), the resulting signal V(t) is the superposition of the N signals and would be represented mathematically as:

V(t) = V_{1}(t) + V_{2}(t) + … + V_{N}(t)

It turns out that the principle of superposition is a very powerful tool; using it, we can often deconstruct complex sensor signals into separate, more basic components, which may simplify the analysis of the problem and the design of the resulting system.
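The summation above is easy to sketch numerically. In the snippet below, the sample rate, component frequencies, and the `superpose` helper are all illustrative choices, not values from the article:

```python
import numpy as np

def superpose(*signals):
    """Return the superposition (sample-by-sample sum) of N signals."""
    return np.sum(signals, axis=0)

# Two example component signals, sampled at 1 kHz for one second.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
v1 = np.sin(2 * np.pi * 5 * t)           # 5 Hz component
v2 = 0.5 * np.sin(2 * np.pi * 12 * t)    # 12 Hz component

# At every instant, the combined signal is just the sum of its parts.
v = superpose(v1, v2)
```

Because superposition is purely pointwise, the combined waveform at any sample index equals the sum of the component values at that same index.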

Many real-world sensor examples make extensive use of this principle in the creation of the appropriate signal-processing techniques for each specific application, but first let’s examine one way in which superposition leads to a better understanding of all sensor signals.

Consider the circuit shown in Figure 2.1a below, which contains a thermocouple connected to a voltmeter in an idealized environment. As discussed in earlier articles, the thermocouple produces an analog output voltage V_{T} (t) that varies over time t with the temperature of the thermocouple junction.

In this case, the measured signal V_{M} (t) is simply the “true” signal V_{T} (t) and the information coded in it is the temperature of the thermocouple junction.

Figure 2.1a. Basic Idealized Thermocouple Circuit

Unfortunately, such an idealized environment exists only in our imaginations, much like a perfectly silent library exists only in a librarian’s fantasy. Just as even the quietest library has some audible noise, real-world circuitry contains electronic noise that comes from both the surrounding environment and the components used to create the circuit.

Thus, a more accurate representation of the basic thermocouple circuit would include an electrical noise generator that produces a noise voltage component V_{N}(t) superimposed on the “true” thermocouple signal V_{T}(t), as is shown in Figure 2.2a below.

Figure 2.2a. More Realistic Thermocouple Circuit Model with Noise

To an outside observer, this distinction is not actually discernible; the observer simply sees the measured voltage V_{M}(t), which contains both components and is equal to:

V_{M}(t) = V_{T}(t) + V_{N}(t)

Of course, the end user generally doesn’t want the noise component to be a part of the measured signal; after all, he’s interested in the “true” thermocouple signal that has the information of interest, not a corrupted signal that distorts that information.

Depending upon the characteristics of the signal of interest and of the noise, it can be possible to accurately extract the signal of interest even in the presence of significant levels of noise using the techniques to which we will now turn.
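A minimal sketch of this additive noise model follows; the drift rate, noise level, and random seed are invented for illustration and do not come from the thermocouple figures:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# One second of samples at 1 kHz.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)

# A slowly drifting "true" thermocouple voltage, in volts.
v_true = 1.0e-3 + 0.2e-3 * t

# Additive electrical noise, modeled here as zero-mean Gaussian.
v_noise = rng.normal(0.0, 0.05e-3, t.size)

# The voltmeter sees only the superposition of the two.
v_meas = v_true + v_noise
```

Because the modeled noise is zero-mean, even simple averaging of v_meas recovers a good estimate of the mean of v_true, which previews the filtering ideas introduced in the next section.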

**Viewing Signals in the Frequency Domain**

Any real analog signal can be represented in the frequency domain via a mathematical operation known as the Fourier transform, and the proper choice of domain (either the time domain, which is what we measure using an oscilloscope or voltmeter, or the frequency domain) can greatly simplify the analysis of a particular signal-processing situation.

The basic premise of the Fourier transform is that continuous, linear [4] time-domain signals (like the voltages we’re measuring in the examples above) can be accurately represented by the superposition of orthogonal sinusoidal signals of varying frequencies.

That’s a tremendous amount of technical mathematical jargon, but the value of the operation is that it allows us to easily determine the frequencies at which most of the signal’s energy occurs, which essentially tells us what the most important parts of the signal are.

Figure 2.3a. Time-Domain Sinusoidal Signal

An example may help clarify the point. Consider the purely sinusoidal signal in Figure 2.3a above and its frequency-domain counterpart in Figure 2.3b below. Note the very interesting relationship between the two domains: a single sinusoid in the time domain maps to two spikes in the frequency domain, distributed symmetrically about zero on the frequency axis!

Figure 2.3b. Frequency-Domain Representation of the Same Signal

Although the graph in Figure 2.3b shows two peaks, the spread of the spectrum around them is an artifact of the discrete mathematics used by the program that generated the image.
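This two-spike mapping is easy to verify numerically with a discrete Fourier transform. In the sketch below, the 440 Hz tone and 8 kHz sample rate are arbitrary illustrative choices, not values taken from the figures:

```python
import numpy as np

fs = 8000                        # sample rate (Hz), chosen for illustration
f0 = 440                         # tone frequency (Hz)
t = np.arange(fs) / fs           # exactly one second of samples
x = np.sin(2 * np.pi * f0 * t)

spectrum = np.fft.fft(x)
freqs = np.fft.fftfreq(x.size, d=1 / fs)

# Essentially all of the energy lands in just two bins: +f0 and -f0.
peak_freqs = sorted(freqs[np.abs(spectrum) > x.size / 4])
# peak_freqs == [-440.0, 440.0]
```

Because the tone completes an integer number of cycles in the sampled window, the energy falls exactly into two bins with no spectral spreading, mirroring the ideal two-spike picture described above.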

Figure 2.4a. DTMF Time-domain Signal

Mathematically, the frequency domain contains two spikes located exactly at plus and minus the sinusoid’s frequency. If we construct a more complex signal by adding a second sinusoid to the first, the principle of superposition tells us that we get a new signal like the one shown in Figures 2.4a above and 2.4b below.

Figure 2.4b. DTMF Frequency-domain Representation

Here we see a standard DTMF (dual tone multifrequency) signal, just as you might get if you punched a digit on your touchtone phone. The addition of a single extra frequency has caused the signal to lose much of its sinusoidal appearance in the time domain, but the same signal in the frequency domain is simply four spikes, with the two additional spikes corresponding to the new frequency.

Figures 2.4a and 2.4b demonstrate a very powerful aspect of the Fourier transform: the superposition principle holds in both the time and the frequency domains. Signals that are added together in the time domain have a frequency spectrum that is the sum of the spectra [5] of the individual signal components.
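Both facts, the four spikes and the additivity of spectra, can be checked with the same tools. The sketch below uses the 770 Hz / 1477 Hz DTMF pair from this example; the 8 kHz sample rate is an illustrative assumption:

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs                    # one second of samples at 8 kHz

low = np.sin(2 * np.pi * 770 * t)         # low DTMF tone
high = np.sin(2 * np.pi * 1477 * t)       # high DTMF tone
dtmf = low + high                         # superposition in the time domain

spectrum = np.fft.fft(dtmf)
freqs = np.fft.fftfreq(dtmf.size, d=1 / fs)

# Four spikes: plus and minus each of the two tone frequencies.
peak_freqs = sorted(freqs[np.abs(spectrum) > dtmf.size / 4])

# Superposition holds in the frequency domain too: the spectrum of the
# sum equals the sum of the individual spectra.
linear = np.allclose(spectrum, np.fft.fft(low) + np.fft.fft(high))
```

The `linear` check is just the linearity of the Fourier transform, which is exactly the property Figures 2.4a and 2.4b illustrate graphically.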

Simply by viewing this signal in the frequency domain, the designer can rapidly identify its constituent parts, which will be of great use in analyzing and designing the processing required to extract the information of interest. This concept of spectral analysis, the analysis of the frequency domain representation of a signal, is a powerful one that will find application in many real-world examples.

Two terms that often arise when performing spectral analysis are frequency band, which simply means a continuous range of frequencies, and bandwidth, which generally refers to the highest frequency component in a signal.

For example, in Figure 2.4b, the designer might be interested in the frequency band from 770 Hz to 1477 Hz, which contains the two frequencies that make up that particular DTMF signal. Since 1477 Hz is the highest frequency signal component, the theoretical bandwidth for the DTMF signal is 1477 Hz.

There is one additional aspect of the relationship between the time-domain and frequency-domain representations that is important to understand: rapidly changing signals in the time domain produce a broader spectrum in the frequency domain, while slowly changing signals produce a narrower spectrum confined to lower frequencies.

As we’ll see in the next section, sensor designers can use this fact to determine the optimal approaches to removing noise from the signals of interest.
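One way to see this is to compare the spectrum of a smooth tone against that of a sharp-edged square wave at the same repetition rate. The helper function and its 1% significance threshold below are illustrative choices, not a standard definition:

```python
import numpy as np

fs = 1000
t = np.arange(fs) / fs

slow = np.sin(2 * np.pi * 2 * t)     # smooth, slowly changing 2 Hz sine
fast = np.sign(slow)                 # 2 Hz square wave: abrupt edges

def highest_significant_freq(x, fs, rel_thresh=0.01):
    """Highest frequency whose magnitude exceeds 1% of the spectral peak."""
    mag = np.abs(np.fft.rfft(x))
    significant = np.nonzero(mag > rel_thresh * mag.max())[0]
    return significant.max() * fs / x.size   # rfft bin k maps to k*fs/N Hz

bw_slow = highest_significant_freq(slow, fs)
bw_fast = highest_significant_freq(fast, fs)
```

Even though both signals repeat at 2 Hz, the square wave’s abrupt edges require many higher-frequency harmonics to represent, so its significant spectral content extends far beyond that of the sine.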

Figure 2.5a. Low-frequency Content Signal in the Frequency Domain

Figure 2.5b. High-frequency Content Signal in the Frequency Domain

Figures 2.5a and 2.5b above and Figure 2.5c below show frequency-domain representations of low-frequency (slowly changing), high-frequency (rapidly changing), and broadband (combined low- and high-frequency) signals.

Figure 2.5c. Combination of Low- and High-frequency Content Signal in the Frequency Domain

As a practical note on the DTMF example above, signal distortion will spread the actual DTMF bandwidth somewhat beyond the 1477 Hz theoretical value.

Next in Part 2: **Cleaning up the signal – introducing filters**

**Creed Huddleston** is President of Real-Time by Design, LLC, specializing in the design of intelligent sensors, located in Raleigh-Durham, North Carolina.

This series of articles is based on material from “Intelligent Sensor Design” by Creed Huddleston, with permission from Newnes, a division of Elsevier. Copyright 2007. For more information about this title and other similar books, please visit www.elsevierdirect.com.