# Shadow Dancing

**Filters aren't just for removing noise and other unwanted parts of a signal. They can, in fact, be used to analyze unknown systems in real time.**

When most people think about filters, they think about using them to remove some unwanted effect in a signal. But there are more ways to use a filter than to remove noise; one of those ways is the focus of this column. The most commonly used filters, IIR and FIR, rely on a mathematical process called the *dot product*. The dot product of two vectors produces a single value directly proportional to the correspondence between the two vectors; in other words, if no relationship exists between them, the output will be zero. Anything other than zero indicates that they are related in some way.
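As a quick illustration of that claim (a Python sketch; the helper `dot` and the test signals are mine, not from this column): a vector dotted with itself produces a large value, while two orthogonal vectors, such as a sine and a cosine at the same frequency, produce a dot product of essentially zero.

```python
import math

def dot(a, b):
    # Plain dot product: sum of element-by-element products.
    return sum(x * y for x, y in zip(a, b))

N = 256
sine   = [math.sin(2 * math.pi * 4 * n / N) for n in range(N)]  # 4 full cycles
cosine = [math.cos(2 * math.pi * 4 * n / N) for n in range(N)]  # same, shifted 90 degrees

# A signal correlates strongly with itself (about N/2 here)...
self_corr = dot(sine, sine)
# ...but an orthogonal signal yields essentially zero.
cross_corr = dot(sine, cosine)
```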

Clearly then, if we have a vector that characterizes a frequency response of, say, 500Hz and below, and we apply this vector to an input data stream, we will output only those components at 500Hz and below; any higher frequencies will come out as zero. Naturally, this mechanism is useful for removing unwanted elements, and that is an important aspect of the function, but it is not the only way to look at what it does. This month, we continue our discussion of another way of looking at IIR and FIR filters: as observers.

**Black boxes**

Most of us, from time to time, will have to work with or analyze a system that is little more than a black box. For some reason we have no data on it, either because the data does not exist or because we cannot access the internals of the black box to determine how it works.

We might be trying to control a motor without feedback, or perhaps we want to remove noise from a signal but we don't know the frequency of the noise or the frequency of the signal we want to clean up. Either way, we need to know the values of some critical variables in the signal or process at run time, values that are not otherwise available. In previous columns, we discussed the Kalman filter and the Luenberger observer and how they can give us access to the inner workings of an unknown system by imitating (tracking) it. As these devices track the unknown system, we can watch their behavior and use the information that they develop. They are called *observers* because they work alongside the unknown system, tracking its activity to produce the same responses as the unknown system.

In my last column, I introduced some pseudo-code for an elementary form of observer: a simple adaptive filter. In this column, I will supply you with some Matlab code so that you can actually play with the device and see how it works yourself. The Matlab code is meant to be clear and instructive; I will leave the optimization to you.

**The for-loop**

First, I will introduce the part of the algorithm that actually does the work: a simple for-loop. This loop accepts an input vector or data stream called **xn()** and produces an output vector or data stream called **y()** . An outline of the work of the loop might look like:

- Decision point in program control: determine whether the for-loop has processed all the samples it was supposed to. If it has, fall through; if not, execute the body of the loop.
- Get a new sample from the input stream or vector, **xn()**, and put it in **x(1)**, the head of the history buffer we will convolve with the coefficients (impulse response) of the FIR filter. (Notice that at the end of the for-loop the data in the history buffer is moved one slot down to make room for this new datum when control returns to the top of the loop.)
- Convolve the history buffer, **x()**, with the filter coefficients, **weights()**, and produce the result in **y()**.
- Find the difference between the signal we are tracking, **data()**, and the output of the filter, **y()**. We call this the error, and we use it to correct our adaptive filter.
- Using a second for-loop, calculate a new set of weights based on the error, our beta factor, and the history buffer.
- Move all the data samples in the history buffer down one slot to make room for the new sample placed at the head at the beginning of the main for-loop. The oldest sample is lost.
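The steps above can be sketched end to end in Python (an illustrative translation of the loop being described, with hypothetical names that mirror the column's Matlab identifiers):

```python
def adapt(xn, data, order, beta):
    """Adaptive FIR (LMS) loop: track data() by filtering xn().

    Returns the filter output y, the error signal, and the final weights.
    """
    x = [0.0] * order        # history buffer, newest sample at index 0
    weights = [0.0] * order  # filter coefficients (impulse response)
    y, error = [], []
    for k in range(len(xn)):
        x[0] = xn[k]                                 # new sample at the head
        yk = sum(w * s for w, s in zip(weights, x))  # dot product (convolution step)
        y.append(yk)
        ek = data[k] - yk                            # error against the reference
        error.append(ek)
        for p in range(order - 1, -1, -1):           # second for-loop:
            weights[p] += 2 * beta * ek * x[p]       # correct each weight,
            if p > 0:
                x[p] = x[p - 1]                      # then shift the history down
    return y, error, weights
```

Fed a clean cosinusoid as both input and reference, the error shrinks toward zero within a few dozen samples for a reasonable beta.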

In this algorithm, we are implementing an FIR filter as the adaptive filter. An FIR filter is a simple dot product of two vectors: a time-reversed input vector, **x()**, and a set of coefficients, **weights()**, representing the impulse response of the filter to apply. The dot product will reinforce those elements of the input stream that are present in the impulse response and remove those that are not. Using Matlab:

```matlab
interim=0;
for l=1:order
    interim=(weights(l)*x(l))+interim;
end
y(k)=interim;
```

**interim** is a temporary variable we use to accumulate the dot product; the **weights()** are the filter coefficients; **x()** is the input vector; and **y()** is the output of the filter.

This filter can be run in real time on a continuous stream of data samples, but because I don't have such a stream handy, I am going to create a vector of 1,024 data samples and feed it to the filter one sample at a time, just as though it were a real-time system processing data.

But where do the filter coefficients (**weights()**) come from? Since the nature of the observer is to track a target system or signal, we take the difference between the output of our filter, **y()**, and that of the target. Then we multiply the error (the difference between the output and the target) by the input vector and our gain factor β. Finally, we add the result of this multiplication to the weights themselves. We introduced β in the last column, along with the usual rule of thumb for choosing it:

**0 < β < 1/(10 · N · P_x)**

As you will recall, β controls the convergence of the algorithm, *N* is the length of the filter, and *P_x* is the average power of the input signal; the larger the value of β, the faster the convergence. In most cases, this formula should serve only as a guide; begin there and develop your actual value heuristically. In Matlab code, the loop I am describing looks like this:
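As a sketch of the arithmetic (Python; the function name is mine, and the 1/(10·N·Px) bound is the common LMS rule of thumb from the adaptive-filter literature):

```python
def starting_beta(samples, order):
    # Average power of the input: Px = mean of the squared samples.
    px = sum(s * s for s in samples) / len(samples)
    # Common LMS guideline: keep beta below 1 / (10 * N * Px),
    # where N is the filter length (order).
    return 1.0 / (10 * order * px)

xn = [0.5, -0.25, 0.75, -0.5]       # hypothetical input samples
beta = starting_beta(xn, order=32)  # a conservative starting value
```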

```matlab
for k=1:N
    x(1)=xn(k);              % new sample at the head of the history buffer
    y(k)=0;

    interim=0;
    for l=1:order            % dot product of history and weights
        interim=(weights(l)*x(l))+interim;
    end
    y(k)=interim;

    error(k)=data(k)-y(k);   % data() is the reference we are tracking

    for l=1:order            % update the weights, then shift the history down
        p=order+1-l;
        weights(p)=weights(p)+(2*beta*error(k)*x(p));
        t=ne(p,1);
        if t
            x(p)=x(p-1);
        end
    end
end
```

In the filter just described, knowledge of the reference signal (the signal we are searching for in the noise; here, **dn()**, which plays the role of **data()** in the loop above) is an important factor. This might be a sample of the actual signal we are looking for, or looking to get rid of, depending on our goals. If we don't know what the signal is but have a pretty good idea of the character of the noise it is embedded in, we can track the noise and subtract it from the signal.

To make this code fragment work, it is simple enough to create a cosinusoid:

```matlab
N=1024;
for l=1:N
    dn(l)=cos(2*3.14*10*(l/N));
end
```

This will yield a vector of cosinusoidal samples at 10Hz. And to show that the filter is doing what we want, we need to add some noise:

```matlab
for l=1:N
    noise(l)=rand-0.5;   % shift the uniform [0,1] samples to zero mean
end

for l=1:N
    xn(l)=dn(l)+noise(l);
end
```

We add this to the signal vector to get **xn()** . But before we process the data, we have to normalize it:

```matlab
% Find the extreme values (note that max and min shadow Matlab built-ins here)
max=0;
min=0;
for l=1:N
    if max < noise(l)
        max=noise(l);
    elseif min > noise(l)
        min=noise(l);
    end
end

for l=1:N
    if max < xn(l)
        max=xn(l);
    elseif min > xn(l)
        min=xn(l);
    end
end

for l=1:N
    noise(l)=noise(l)/max;
end
for l=1:N
    xn(l)=xn(l)/max;
end
```
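For readers following along outside Matlab, the same test-signal setup can be written compactly in Python (a sketch; the noise is shifted to zero mean and the normalization simply divides by the peak magnitude):

```python
import math, random

N = 1024
dn = [math.cos(2 * math.pi * 10 * l / N) for l in range(N)]  # 10Hz cosinusoid
noise = [random.random() - 0.5 for _ in range(N)]            # zero mean noise
xn = [d + n for d, n in zip(dn, noise)]                      # noisy input

# Normalize so the largest sample magnitude becomes exactly 1.
peak = max(abs(v) for v in xn)
xn = [v / peak for v in xn]
```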

**Results**

Figure 1 is a plot of some of the inputs and outputs associated with this filter. The original signal, **dn**, is shown in the top or first plot; it is a simple cosinusoid at 10Hz. The second plot, **xn**, is **dn** with zero mean random noise added. The third plot is the output of the filter itself, **y**. After a short capture period (the length of which depends upon the value of the β coefficient), it begins tracking the input quite well.

The fourth plot is the error. At start up, the error is large, but it quickly approaches zero as the filter captures the signal. This is interesting in itself, but there is more.

Now that we are tracking the signal with our filter, we can look at the filter mechanism and derive more data about it. Because we can successfully filter the noise from the signal (as you see in Figure 1), we can be confident that the impulse response of the filter is similar to that of the signal being tracked. Since we know that the impulse response of a system completely characterizes that system, this is significant information, especially if we are trying to determine what is inside the black box or predict its next wiggle. With the impulse response, we can derive the transfer function and frequency response.

The fifth plot in Figure 1 is the impulse response of the filter and, therefore, of the system we are tracking. The sixth plot is an FFT of that impulse response-you see that it depicts a frequency at the low end of the spectrum. In fact, it is the frequency of the signal we are tracking.
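The step from the fifth plot to the sixth, taking the spectrum of the converged weights to read off the tracked frequency, can be sketched with a naive DFT (Python; the `weights` vector here is a hypothetical stand-in for a converged impulse response):

```python
import math

def dft_magnitudes(h):
    # Magnitude spectrum of an impulse response h via a naive DFT.
    n = len(h)
    mags = []
    for k in range(n // 2):
        re = sum(h[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(h[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

# Hypothetical converged weights: two cycles of a cosine across 32 taps.
weights = [math.cos(2 * math.pi * 2 * t / 32) for t in range(32)]
mags = dft_magnitudes(weights)
peak_bin = max(range(len(mags)), key=mags.__getitem__)  # dominant frequency bin
```

Here `peak_bin` lands on bin 2, the frequency buried in the hypothetical impulse response.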

If we are more concerned with what I have called “noise” in this discussion, we can also develop that information either by subtracting what we now know from the input or by:

```matlab
error(k)=noise(k)-y(k);
```

This makes the filter track the noise instead of the data. Although this will work, it is probably more efficient to track the more stable signal and subtract it from the input than to try to track the noise.

Clearly, the feedback signal, or error, is the determining factor in what we lock onto with this filter: the target signal, the noise, or anything else you like. Phase does not prove to be a problem either; the algorithm works to maintain a stable relationship between the reference and its output, and it then drives the difference between them as small as possible.

The biggest problem with this form of the filter, though, is that you need a reference.

**Signal power**

There will be plenty of times when you don't know what the original signal looks like. What if you want to find the power in a noisy signal and track that? We can do that, too.

First, let's review a couple of things. Remember that correlation is the process of determining how much of one signal is present in a second (sometimes the same, but delayed) signal. The beauty in this is that a zero mean random signal will disappear under this kind of analysis, since its energy is uniform throughout the spectrum and, therefore, has no central peak. Say you have a "production quality" sinusoid at 100kHz mixed with some 60Hz line noise and white noise. After properly correlating this composite signal with itself, you should find a strong peak at 100kHz, as this is the sole content of the original signal, and another peak at 60Hz whose magnitude depends entirely upon the degree of interference. This is because 60Hz hum is not zero mean random noise; it is an annoying but focused signal. The white noise will disappear.
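A small numerical check of that review (Python, with the frequencies scaled down from the 100kHz example; the signals and seed are mine): autocorrelating a noisy tone at a lag of one tone period keeps the tone's contribution near its average power, while the zero mean noise all but vanishes.

```python
import math, random

random.seed(1)
N = 4096
tone = [math.sin(2 * math.pi * 8 * n / N) for n in range(N)]   # 8 full cycles
noise = [random.uniform(-1, 1) for _ in range(N)]              # zero mean noise
sig = [t + n for t, n in zip(tone, noise)]

def autocorr(x, lag):
    # Average product of a signal with a delayed copy of itself.
    return sum(x[n] * x[n - lag] for n in range(lag, len(x))) / (len(x) - lag)

lag = N // 8                    # one full tone period
r_sig = autocorr(sig, lag)      # near 0.5, the tone's average power
r_noise = autocorr(noise, lag)  # near zero: the noise "disappears"
```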

Correlation is a dot product, just as an FIR filter is. The difference is that an FIR filter performs a time reversal before the multiply and accumulate. But with a dot product, it doesn't matter which of the vectors you reverse; you will get the same result. So it is possible simply to reverse the weights in the filter. Better yet, a symmetrical FIR filter has an impulse response that is a mirror image on each side of time zero; it reads the same way forward and backward. As a result, filtering with it and correlating with it are the same operation.

This means that the same program I introduced here may act as both a filter and a correlator if applied to a signal and that same signal delayed. To do this requires only a small change to the algorithm presented; this change lies in how we develop the error. Instead of subtracting the output from a reference signal, subtract a delayed version of the input. This should find the real energy in the incoming data stream and produce it as an output. The Matlab code is modified as follows:

```matlab
for k=1:N
    x(1)=xn(k);
    y(k)=0;

    interim=0;
    for l=1:order
        interim=(weights(l)*x(l))+interim;
    end
    y(k)=interim;

    err(k)=y(k)-xn(k+2);   % error against the input delayed two samples
                           % (xn() must hold two samples beyond N)
    error(k)=dn(k)-y(k);   % the first example's error, retained for comparison

    for l=1:order
        p=order+1-l;
        weights(p)=weights(p)+(2*beta*err(k)*x(p));
        t=ne(p,1);
        if t
            x(p)=x(p-1);
        end
    end
end
```


As you can see, we are now using the difference between the output of the filter and the input delayed by two samples as the error that adapts the filter.
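The same delayed-input idea can be tested in Python using the classic adaptive line enhancer formulation (a standard variant I am substituting here, not a line-for-line translation of the listing above): the filter sees the input delayed by two samples and is asked to predict the current sample; only the periodic part is predictable, so the error settles toward the noise floor.

```python
import math, random

random.seed(2)
N, order, beta, delay = 4096, 32, 0.005, 2
# A tone buried in zero mean noise; no separate reference is available.
xn = [math.cos(2 * math.pi * 64 * n / N) + random.uniform(-0.25, 0.25)
      for n in range(N)]

x = [0.0] * order        # history of the *delayed* input, newest first
w = [0.0] * order
err = []
for k in range(N):
    x = [xn[k - delay] if k >= delay else 0.0] + x[:-1]
    yk = sum(wi * xi for wi, xi in zip(w, x))
    ek = xn[k] - yk                       # predict the current sample
    err.append(ek)
    w = [wi + 2 * beta * ek * xi for wi, xi in zip(w, x)]

mse_early = sum(e * e for e in err[:200]) / 200
mse_late = sum(e * e for e in err[-1024:]) / 1024
```

With these (hypothetical) settings the late-run error power ends up well below the early, unconverged error power.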

If this works correctly, we can use it to find and track an unknown signal and, with the same tools we used in the first example, discover the impulse response of the unknown data set as well as its frequency response. As you can see in Figure 2, it works. The topmost plot is a cosinusoid (now at 100Hz) with noise. The second plot is the cosinusoid I buried in the noise. The third plot is the output of the adaptive filter, and the fourth is the error signal, now called **err**.

The fifth plot, again, is the impulse response of the filter once it has locked, and the sixth is the spectrum of that impulse response, this time identifying the energy in that signal as 100Hz.

This simple adaptive filter and others like it are useful in the analysis and prediction of unknown systems.

**Don Morgan** is a senior engineer at Ultra Stereo Labs and a consultant with 25 years of experience in signal processing, embedded systems, hardware, and software.

Return to July 2001 Table of Contents