# The basics of DSP for use in intelligent sensor applications: Part 2

We’re all familiar with the general idea of a filter: it removes something that we don’t want from something we do want. Coffee filters that pass the liquid coffee but retain the grounds or air filters that pass clean air but trap the dust and other pollutants are two common examples of mechanical filters in everyday life.

That same concept can be applied to noisy electrical signals to pass through the “true” signal of interest while blocking the undesirable noise signal.

Looking at Figure 2.5c below, imagine for a moment that the signal of interest is in the lower-frequency region and that the noise signal is in the higher-frequency region. Ideally, we’d like to be able to get rid of that high-frequency noise, leaving just the signal component that we want.

Figure 2.5c. Combination of Low- and High-frequency Content Signal in the Frequency Domain

We can picture the process that we’d like to perform as one in which we apply a mask in the frequency domain that passes all of the low-frequency signal components without affecting them at all but that zeros out all of the high-frequency noise components.

Graphically, such a mask might look like the frequency spectrum shown in Figure 2.6a below.

Figure 2.6a. Example Frequency Mask

If we multiply each point in the graph of Figure 2.5c by the corresponding point in the graph of the mask in Figure 2.6a, we get the resulting frequency spectrum shown in Figure 2.6b, which is precisely what we want.
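This point-by-point multiplication in the frequency domain can be sketched directly in Python using numpy's FFT routines. The specific frequencies (a 50 Hz "signal," a 400 Hz "noise" tone, a 100 Hz cutoff) are illustrative choices, not values taken from the figures:

```python
import numpy as np

# Build a test signal: a 50 Hz "signal" plus a 400 Hz "noise" tone,
# sampled at 1 kHz for one second.
fs = 1000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 50 * t)
noise = 0.5 * np.sin(2 * np.pi * 400 * t)
x = signal + noise

# Transform to the frequency domain, apply a mask that passes
# everything at or below a 100 Hz cutoff and zeros everything
# above it, then transform back to the time domain.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
mask = (freqs <= 100).astype(float)
x_filtered = np.fft.irfft(X * mask, n=len(x))

# The 400 Hz component is removed; the 50 Hz component is untouched.
residual = np.max(np.abs(x_filtered - signal))
```

Because both tones fall exactly on FFT bins in this example, the masked result matches the clean signal to within floating-point error; real signals spread energy across bins, which is one source of the real-world qualifications discussed below.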

Figure 2.6b. Result of Multiplying Mask in 2.6a with Spectrum in 2.5c

Thought experiments like these are helpful, but is it possible to implement this in the real world? The answer is “yes,” albeit with some important qualifications that arise from deviations between real-world and idealized system behavior.

Before we get into those qualifications, though, let’s take a look at an important foundational concept: sampling.

**Sampling the Analog Signal**


Sensor signals are inherently analog signals, which is to say that they are continuous in time and continuous in their value. Unfortunately, processing analog signals as analog signals requires special electronic circuitry that is often difficult to design, expensive, and prone to operational drift over time as the components age and their properties change.

A far better approach is to convert the input analog signals to a digital value that then can be manipulated by a microprocessor. This technique is known as analog-to-digital conversion, or sampling.

Figure 2.7a. Example of a Continuous-time Voltage Signal

Figure 2.7a above shows an example of a continuous-time voltage signal, and Figure 2.7b below shows the sampled version of that signal. One key concept that can sometimes be confusing to those who are new to sampled signals is that the sampled signal is simply a sequence of numeric values, with each numeric value corresponding to the level of the continuous signal at a specific time.

Figure 2.7b. Corresponding Sampled Version of the Signal in Figure 2.7a

For a sampled signal such as that shown in Figure 2.7b, the signal is only valid at the sample time. It is not zero-valued between samples, but the convention for presenting sampled data graphically is to display the sample values on a line (or grid), with the X-axis denoting the parameter used to determine when the data is sampled (typically time or a spatial distance).

Another convention is to associate sampled signal values in a sequence using an index notation. In this scheme, the first sample of the signal x(t) would be x_{0}, the second sample would be x_{1}, and so on.

If we add two signals x and y, then the resulting signal z is simply the sample-by-sample addition of the two signals:

z_{i} = x_{i} + y_{i}
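Sample-by-sample addition is trivial to express in code. A minimal illustration (the sample values here are made up):

```python
# Two sampled signals, represented simply as sequences of values.
x = [1.0, 2.0, 3.0, 4.0]
y = [0.5, 0.5, 0.5, 0.5]

# The sum signal z is formed sample by sample: z_i = x_i + y_i.
z = [xi + yi for xi, yi in zip(x, y)]
```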

Sampling has two important effects on the signal. The first of these effects is what’s known as spectral replication, which simply means that a sampled signal’s frequency spectrum is repeated in the frequency domain on a periodic basis, with the period being equal to the sampling frequency.

Figure 2.8a. Example Analog Signal Frequency Spectrum

Figures 2.8a above and 2.8b below show an example of the frequency spectrum of an example signal and the resulting frequency spectrum of the sampled version of the signal.

Figure 2.8b. Corresponding Frequency Spectrum of the Sampled Signal

As one can easily see, a problem arises when the highest frequency component in the original signal is greater than half the sampling frequency. To avoid this, the signal must be sampled at more than twice its highest frequency component, a sample rate known as the Nyquist rate.

In this case, frequency components from the replicated spectra overlap, a condition known as aliasing since some of the higher frequency components in one spectrum are indistinguishable from some of the lower frequency components in the next higher replicated spectrum.

Aliasing is generally a bad condition to have in a system and, although the real world precludes eliminating it entirely, it is certainly possible to reduce its effects to a negligible level.

Let’s look at a simple example to illustrate how aliasing can fool us into thinking that a signal behaves in one way when in reality it behaves totally differently. Imagine that we are sampling the position of the sun at various times during the day over an extended period of time.

Figure 2.9a. Sun’s Position Sampled Every 1.5 Hours

Being good scientists, we want to verify that our sampling rate really does make a difference, so we decide to take two sets of measurements using two different sampling rates.

The results from the first set of measurements, which employ a sampling rate of once every 1.5 hours, are shown in Figure 2.9a above. As we would expect, the measurements show that the sun proceeded from east to west during the course of the experiment.

Now take a look at the results from the second set of measurements, which have a sampling rate of once every 22.5 hours in Figure 2.9b below.

From the data, we can see that the sun appears to move from west to east, just the opposite of what we know to be true! This is exactly the type of error one would expect with aliasing, namely that the signal characteristics appear to be something other than what they really are (hence the term aliasing).
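The same effect is easy to reproduce numerically. In this sketch (frequencies chosen for illustration), a 9 Hz cosine sampled at only 10 samples per second, well below its 18 Hz Nyquist rate, yields exactly the same sample values as a 1 Hz cosine; from the samples alone, the two are indistinguishable:

```python
import numpy as np

# Sample a 9 Hz cosine at a mere 10 samples/s. The Nyquist rate for
# this signal is 18 samples/s, so we are undersampling badly.
fs = 10
n = np.arange(20)
t = n / fs

tone_9hz = np.cos(2 * np.pi * 9 * t)
tone_1hz = np.cos(2 * np.pi * 1 * t)

# The 9 Hz tone aliases down to 1 Hz: the sampled values coincide
# to within floating-point error.
max_difference = np.max(np.abs(tone_9hz - tone_1hz))
```

This is the sine-wave analog of the sun example: the undersampled signal takes on the identity (the "alias") of a much lower frequency.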

Figure 2.9b. Sun’s Position Sampled Every 22.5 Hours

**Low-pass Filters**

We’ve seen an example of the first type of filter, the low-pass filter, which passes low-frequency components and blocks high-frequency signal components. An idealized example of a low-pass filter is shown in Figure 2.10 below, in which the passband, the frequency range of the signal components that are passed, is 1500 Hz wide. Note that the bandwidth in this case is also 1500 Hz, since that’s the highest frequency component of the filter.

Figure 2.10. Idealized Low-pass Filter with a Bandwidth of 1500 Hz

Low-pass filters are probably the most widely used type of filter for the simple reason that, in the real world, we don’t deal with signals of infinite bandwidth.

At some point, the frequency content of a signal drops off to insignificance, so one of the most common approaches to noise reduction is to establish some limit for the frequency components that are considered to be valid and to cut off any frequencies above that limit.

For example, when we are using thermocouples to measure temperature, the thermocouple voltage can change only so quickly and no faster because the temperature of the physical body that is being monitored has a finite rate at which it can change (i.e., the temperature can’t change discontinuously).

In practice, this means that the frequency components of the temperature signal have an upper bound, beyond which there is no significant energy in the signal.

If we design a low-pass filter that will cancel all frequencies higher than the upper bound, we know that it must be killing only noise since there are no valid temperature signal components above that cutoff frequency.

**High-pass Filters**

A complement to the low-pass filter is the high-pass filter, which passes only high-frequency signal components and blocks the low-frequency ones. In the idealized high-pass filter of Figure 2.11 below, the passband starts at 1500 Hz and continues to all higher frequencies.

Note that the bandwidth in this case is infinite since all frequencies starting with the passband are included in the filter.

Figure 2.11. Idealized High-pass Filter with Passband Starting at 1500 Hz

Since we just stated that no real-world signal has infinite bandwidth, why would we want to use a filter that seems to assume that condition? In some cases, the signal we’re measuring is an inherently AC signal; by the nature of the system anything below a certain frequency is obviously noise because no valid signal components exist below that frequency.

An example of this might be the auditory response of the human ear, which is sensitive only to frequencies in the range of 20 Hz to about 20 kHz. Anything below 20 Hz is of no practical value and can be treated as noise.

**Bandpass Filters**

A bandpass filter is essentially the combination of a high-pass filter and a low-pass filter in which the passband of the high-pass filter starts at a lower frequency than the bandwidth of the low-pass filter, as shown in Figure 2.12 below. Here we see that the filter will pass frequencies between 750 Hz and 1500 Hz while blocking all others.

Figure 2.12. Idealized Bandpass Filter

Bandpass filters are used whenever the designer wants to look at only a particular frequency range. A very common example of this is the tuner in a radio, in which the tuner uses a bandpass filter with a very narrow passband to isolate the signal from an individual radio station.

With the tuner, the goal is to pass the signal from the station of interest as clearly as possible while simultaneously attenuating the signals of all other stations (presumably at lower or higher frequencies) to the point where they are inaudible.

Bandpass filters are also commonly used to look at the strength of the signal in certain passbands of interest. DTMF detectors use this principle to determine what key a person has pressed on their touchtone phone.

In a DTMF system, each key is represented by a combination of two and only two frequencies that have no common harmonics. These two frequencies always have one component from a group of four low-frequency values and a second component from a group of four high-frequency values, as is shown in Table 2.1 below.

Table 2.1. DTMF Tone Combinations

Basically, to be a valid DTMF tone, each of the two frequency components needs to be within about 1.5% of their nominal value, and the difference in signal strength between the two components (known as “twist”) must be less than 3 dB.

Using a bandpass filter for each of the eight frequency components plus one for the overall signal bandwidth, a detector can examine the outputs of each filter to determine that only two frequency components are active at any given time, that the two components are a valid combination (one from the low-frequency group and one from the high-frequency group), and that their relative strength is acceptable.
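The decision logic that sits behind such a filter bank can be sketched as follows. The eight tone frequencies and the key layout are the standard DTMF assignments; the magnitude scale and the detection threshold are hypothetical placeholders, since in a real detector they would depend on the filter implementation and signal levels:

```python
import math

LOW_GROUP = [697, 770, 852, 941]        # Hz, row tones
HIGH_GROUP = [1209, 1336, 1477, 1633]   # Hz, column tones

# Standard DTMF key layout: (low tone, high tone) -> key
KEYS = {
    (697, 1209): '1', (697, 1336): '2', (697, 1477): '3', (697, 1633): 'A',
    (770, 1209): '4', (770, 1336): '5', (770, 1477): '6', (770, 1633): 'B',
    (852, 1209): '7', (852, 1336): '8', (852, 1477): '9', (852, 1633): 'C',
    (941, 1209): '*', (941, 1336): '0', (941, 1477): '#', (941, 1633): 'D',
}

def detect_key(magnitudes, threshold=0.5):
    """magnitudes: dict mapping each of the eight tone frequencies to
    its bandpass-filter output level (threshold is a placeholder value).
    Returns the detected key, or None if the tone set is invalid."""
    low_active = [f for f in LOW_GROUP if magnitudes[f] > threshold]
    high_active = [f for f in HIGH_GROUP if magnitudes[f] > threshold]
    # Exactly one tone from each group must be active.
    if len(low_active) != 1 or len(high_active) != 1:
        return None
    low, high = low_active[0], high_active[0]
    # "Twist": the two components' strengths must differ by < 3 dB.
    twist_db = abs(20 * math.log10(magnitudes[low] / magnitudes[high]))
    if twist_db > 3.0:
        return None
    return KEYS[(low, high)]
```

For example, strong outputs at 770 Hz and 1336 Hz with all other bands quiet would decode as the "5" key, while an excessive strength imbalance between the two tones would be rejected.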

**Bandstop Filters**

The bandstop, or notch, filter can be viewed as the complement to the bandpass filter in much the same way that the high-pass filter is the complement of the low-pass filter.

Whereas bandpass filters allow only a relatively narrow band of frequencies to pass, bandstop filters sharply attenuate a narrow band of frequencies and leave the rest relatively untouched. Figure 2.13 below shows an example of a bandstop filter.

Figure 2.13. Idealized Bandstop Filter

By far the greatest application of bandstop filters is in the reduction of powerline noise centered around 50 Hz or 60 Hz (depending on location). In many applications, the 50-Hz or 60-Hz power signal will couple into the sensing circuitry and, unfortunately, the power signal’s frequency often is in the midst of the frequency spectrum for the signal of interest.

A simple low-pass or high-pass filter that would exclude all frequencies above or below the power frequency would attenuate the desired signal too much in such cases, so designers try to remove only the frequency components right around that of the power.

**Digital Filter Implementations**

To this point, our exploration of filters has been strictly along conceptual lines; we turn now to the actual mathematical implementation of these filters.

In general, digital filters are created by applying weighting factors to one or more values of the sampled data and then summing the weighted values. For instance, if we have a sampled input signal x_{i} for i = 0, 1, …, N – 1, we can generate a filtered output y:

y = a_{0}x_{0} + a_{1}x_{1} + … + a_{N-1}x_{N-1}

where the a_{i} terms are constant weighting factors applied to the corresponding sampled input signal values. An example of a low-pass filter is an averaging filter, whose output is simply the average of a given number of samples.

This smooths out the signal because noise is averaged over the entire group of samples. If we choose to average four samples to get our filtered output, the corresponding equation would be:

y = (x_{0} + x_{1} + x_{2} + x_{3})/4
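A minimal sketch of this four-sample averaging filter, applied along a sampled signal (the sample values are made up for illustration):

```python
def moving_average_fir(x, n_taps=4):
    """Averaging FIR filter: each output sample is the mean of the
    current input sample and the previous n_taps - 1 samples. Outputs
    begin once a full window of samples is available."""
    return [sum(x[i - n_taps + 1:i + 1]) / n_taps
            for i in range(n_taps - 1, len(x))]

# A signal hovering around 10 with some noise gets smoothed
# toward that underlying value.
samples = [10.0, 12.0, 8.0, 10.0, 11.0, 9.0]
filtered = moving_average_fir(samples)
```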

By adjusting the weights of the individual taps of the filter (the sampled data values), we can adjust the filter’s response. To make things easier for designers, a number of companies make digital filter design and analysis software, and free versions are available on the Internet as well.

The preceding example illustrates what is known as a finite impulse response or FIR filter structure. Filters constructed using this approach always have a fixed number of taps, and thus their output response depends only upon a limited number of input samples.

If we pass an impulse signal (a single nonzero sample followed by zeros) through a filter of length N taps, we know that the filter’s output to the input will die out after N samples, since all subsequent input values will be zero.

Another filter structure is the infinite impulse response or IIR filter. IIR filters use both weighted input signal samples and weighted output signal samples to create the final output signal:

y_{n} = a_{0}x_{n} + a_{1}x_{n-1} + … + a_{N-1}x_{n-N+1} + b_{1}y_{n-1} + b_{2}y_{n-2} + … + b_{M}y_{n-M}

where the b_{i} terms are constant weighting terms applied to the corresponding previous output values y_{n-i}.

At first glance, it would appear that we’ve made the filter much more complex, but that’s not necessarily the case. Looking at the four-tap averaging filter that we examined for the FIR filter, we could implement the same function as:

y_{n} = y_{n-1} + (x_{n} – x_{n-4})/4
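A sketch of this recursive running average in Python shows the idea: rather than re-summing four samples at every step, we update the previous result by adding the newest sample and subtracting the one that fell out of the window:

```python
def moving_average_iir(x, n_taps=4):
    """Recursive (IIR-style) running average over n_taps samples.
    Maintains a window sum: each step adds the newest sample and
    drops the oldest -- one add and one subtract per output."""
    acc = sum(x[:n_taps])          # sum of the first full window
    outputs = [acc / n_taps]
    for n in range(n_taps, len(x)):
        acc += x[n] - x[n - n_taps]
        outputs.append(acc / n_taps)
    return outputs

# Same input as the FIR example; the outputs are identical.
samples = [10.0, 12.0, 8.0, 10.0, 11.0, 9.0]
recursive = moving_average_iir(samples)
```

Here the two structures are mathematically equivalent; the accumulation of rounding error in the running sum is exactly the kind of error-growth concern raised below for IIR implementations.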

While reducing the computational requirements by a single tap may not seem particularly important, more complicated filters can see a significant reduction in computational and memory requirements using an IIR implementation.

This reduction comes at a cost, however; unlike FIR filters, IIR filters can theoretically respond to inputs forever (hence the name of the structure), which may not be at all desirable. Designers also have to be careful to ensure that errors don’t accumulate or else performance can degrade to the point where the filter is unusable.

**Median Filters**

All of the filters that we’ve discussed so far are based on simple mathematical equations, so their behavior is easily analyzed using well-known and well-understood techniques.

These filters tend to work best with noise that is contained to specific spectra, which is often an appropriate design model. Sometimes, however, systems are susceptible to what is called shot or burst noise, in which the measured signal has bursts of noise rather than a continuous noise signal.

To counteract this, systems may employ another form of filtering known as median filtering that is somewhat more heuristic but does an excellent job of reducing shot noise.

In a median filter, the signal is sampled as in the other forms of filtering, but rather than performing a simple mathematical operation on the samples, the samples are ordered from highest to lowest (or vice versa; it doesn’t really matter which), and then the middle, or median, sample is selected.

Figure 2.15a. Sample “True” Signal

If the length of the median filter is greater than the length of the noise burst, the noise should be completely eliminated. An example of a length-7 median filter and its effect upon a signal corrupted with shot noise whose bursts are at most three samples long is shown in Figure 2.15a above, as well as in Figure 2.15b, Figure 2.15c and Figure 2.15d.

To read Part 1 in this series, go to **“Foundational DSP Concepts for Sensors”**

Next in Part 3: **The effect of digitization on the sampled signal.**

**Creed Huddleston** is President of Real-Time by Design, LLC, specializing in the design of intelligent sensors, located in Raleigh-Durham, North Carolina.

*This series of articles is based on material from “Intelligent Sensor Design” by Creed Huddleston, used with permission from Newnes, a division of Elsevier. Copyright 2007. For more information about this title and other similar books, please visit www.elsevierdirect.com.*