How to find signals in noise using estimation

One of the most challenging aspects of digital signal processing is finding signals in noise when familiar tools such as averaging and low-pass filtering don't work. Maximum-likelihood estimation is another technique to extract information from a sea of noise.

Not only is unwanted noise the bane of modern living, it can also be a stumbling block for the embedded systems engineer working with digital signal processing. Figure 1 shows the graph of a signal that's almost lost in a sea of noise. This article will show you how to write code to estimate the peak amplitude of a signal despite the noise. This technique is especially useful in analyzing rare events, in which averaging isn't an option but you still have to determine whether a signal's amplitude is above or below an alarm threshold. Although you could use a low-pass filter, you'd then have two computational steps (filtering followed by a search for the peak) to prepare the signal before you can measure it.


Figure 1: Example signal in Gaussian noise

The technique we'll use to estimate the parameters of a signal embedded in noise belongs to a class of statistical methods called maximum-likelihood estimators (MLEs for short, and also called optimal estimators). MLEs are computationally cheap, intuitive to understand, and straightforward to program.

By the way, in case you're wondering, the noise in Figure 1 hides a unit-amplitude, half-cycle sine wave; the signal-to-noise ratio (SNR) is −2.45 dB. Figure 1's peak amplitude, estimated using the techniques presented in this article, is 0.986, an error of 1.4%.

Mathematics
The mathematical formulation of the MLE for the received signal in Figure 1 is:

$$\hat{a} = \frac{1}{u}\int_0^T r(t)\,\sin\!\left(\frac{\pi t}{T}\right)dt \qquad (1)$$

where â is the estimate of the amplitude of the transmitted noise-free signal, constant u is the time integral of the squared unit-amplitude signal in the absence of noise, and r describes the received noisy signal as a function of time. In this example, u is determined by computing:

$$u = \int_0^T \sin^2\!\left(\frac{\pi t}{T}\right)dt \qquad (2)$$

The integrals in Equations 1 and 2 are evaluated over the time interval [0, T], which is the duration of the signal. In this case, T is the duration of the half-period of the sine wave. The sine function takes a single argument, an angle in radians, so time maps to angle through the scale factor π/T (in radians per second); at time t the argument is πt/T.
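For the half-cycle sine, Equation 2 evaluates in closed form:

$$u = \int_0^T \sin^2\!\left(\frac{\pi t}{T}\right)dt = \frac{T}{2}$$

so Equation 1 reduces to $\hat{a} = (2/T)\int_0^T r(t)\,\sin(\pi t/T)\,dt$.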

Of course, not every noisy signal you encounter will be a half-cycle sine wave. For these other situations, the general forms for Equations 1 and 2 are:

$$\hat{a} = \frac{1}{u}\int_0^T r(t)\,s(t)\,dt \qquad (3)$$

and:

$$u = \int_0^T s^2(t)\,dt \qquad (4)$$

where function s is the unit-amplitude transmitted signal, and r is the received signal as defined for Equation 1. In practice, u can be determined during a calibration run with a signal of known amplitude. Applying u to signals of unknown received amplitude in subsequent runs gives the estimated amplitude of the received signal as a fraction of the calibration standard. The derivation of Equations 3 and 4 is well beyond the scope of this article; you can explore it further by consulting the references at the end of the article.
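To see why a calibration run determines u, note that if the noise is zero-mean and the calibration signal has known amplitude $a_c$, the received signal during calibration is, on average, $a_c\,s(t)$, so:

$$\int_0^T r(t)\,s(t)\,dt \approx a_c \int_0^T s^2(t)\,dt = a_c\,u$$

Dividing the measured integral by the known $a_c$ recovers u.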

If you already have a collection of algorithms for approximating definite integrals, you should now have enough information to start coding an MLE. But if you read on, you'll learn some tricks for implementing MLEs efficiently and you'll better understand the trade-offs to consider when deciding if this technique is appropriate for a particular project.

Implementation
Because this article doesn't focus on numerical-integration methods, the examples that follow use the extended form of Simpson's rule to approximate integrals, a technique that's widely used, accurate enough for many applications, easy to understand, and straightforward to program. The numerical approximation of the integration in Equation 3 by extended Simpson's rule looks like this:

$$\int_0^T r(t)\,s(t)\,dt \approx \frac{h}{3}\left[f_0 + 4f_1 + 2f_2 + 4f_3 + \cdots + 2f_{n-3} + 4f_{n-2} + f_{n-1}\right]$$

where h is the equally-spaced sampling interval computed for n samples (n must be odd so that the number of intervals is even) as:

$$h = \frac{T}{n-1}$$

and:

$$f_i = r(t_i)\,s(t_i), \qquad t_i = ih$$

Figure 2: Using a spreadsheet to calculate MLE coefficients

In many implementations, it's possible to use the distributive property of multiplication and your knowledge of s(t) to precompute the coefficients and store them in a table rather than compute each term at each signal acquisition. You can do this part in a spreadsheet, as shown in Figure 2 for a half-period sinusoidal signal taken as nine samples equally spaced in time. Using the precomputed coefficients from Figure 2, an entire MLE can be expressed as a fragment of C code as shown in Listing 1.

Listing 1: A maximum-likelihood estimator in C

double coeff[] = {0.000, 0.128, 0.118, 0.308, 0.167, 0.308, 0.118, 0.128, 0.000};
double estimate = 0.0;
int samples = sizeof(coeff) / sizeof(double);

/* one multiplication and one addition per acquired sample */
for (int i = 0; i < samples; i++)
{
  estimate += GetSample() * coeff[i];
}
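If you'd rather generate the coefficient table in code than in a spreadsheet, here's a minimal sketch that reproduces Figure 2's values for the nine-sample, half-period sine case; it sets T = 1, since the duration cancels out of the coefficients:

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 9  /* equally spaced samples; must be odd for Simpson's rule */

int main(void)
{
    double h = 1.0 / (N - 1);  /* sampling interval with T = 1 */
    double u = 0.5;            /* integral of sin^2(pi*t) over [0,1] */

    for (int i = 0; i < N; i++)
    {
        /* extended Simpson's rule weights: 1, 4, 2, 4, ..., 2, 4, 1 */
        double w = (i == 0 || i == N - 1) ? 1.0 : ((i % 2) ? 4.0 : 2.0);
        printf("coeff[%d] = %.3f\n", i, (h / (3.0 * u)) * w * sin(M_PI * i * h));
    }
    return 0;
}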

The code in Listing 1 shows that the MLE method is efficient. But you can do much more than just tabulate the coefficients needed to approximate the integral. Any multiplicative constant can be folded in. For example, if a calibration is associated with each sample, as would be the case if the received signal were extracted from a linear charge-coupled device (CCD) array rather than from a single detector, the gain of each CCD element can be applied to make the final line of Listing 1 become:

estimate += GetSample() * coeff[i] * gain[i];
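If the gains are fixed between calibrations, you can instead fold them into the coefficient table once, keeping the acquisition loop at one multiply and one add per sample. A minimal sketch, reusing Listing 1's coeff[] and the gain[] array above:

/* one-time setup after calibration: fold each element's gain into the
   corresponding integration coefficient */
for (int i = 0; i < samples; i++)
{
  coeff[i] *= gain[i];
}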

Note that if extreme accuracy is important or you're doing scaled-integer math with a limited number of bits, you need to take time to evaluate how error propagates through the entire chain of multiplications and the final summation.
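For instance, here's a sketch of what the accumulation might look like in Q15 scaled-integer form; GetSampleQ15() is a hypothetical sampler returning Q15 values, and the coefficients are Figure 2's values scaled by 2^15 and rounded:

#include <stdint.h>

#define N_SAMPLES 9

/* Figure 2's coefficients scaled to Q15 (value * 32768, rounded) */
static const int16_t coeff_q15[N_SAMPLES] = {
    0, 4194, 3867, 10093, 5472, 10093, 3867, 4194, 0
};

int16_t GetSampleQ15(void);  /* hypothetical Q15 analog of GetSample() */

int16_t EstimateQ15(void)
{
    int32_t acc = 0;  /* Q30 accumulator; with these coefficients the
                         running sum stays below 2^31 */

    for (int i = 0; i < N_SAMPLES; i++)
    {
        acc += (int32_t)GetSampleQ15() * coeff_q15[i];  /* Q15 x Q15 = Q30 */
    }
    return (int16_t)(acc >> 15);  /* back to Q15; this truncation is one of
                                     the error sources to budget */
}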

To determine whether an MLE is the appropriate solution for your problem, you have to remember that you're dealing with a signal in random noise. Therefore, you can't determine the absolute worst-case performance analytically. Instead, you have to be content to work with the mean and variance of the estimate. With enough math, you can show that the MLE is a good estimator—the variance of the estimate approaches zero as the number of samples approaches infinity.

If you need to understand the behavior of an MLE to this level, analytical methods for determining average performance as a function of the amount of noise contaminating the signal and the number of samples are available in the references at the end of this article.


Figure 3: Average error estimating the peak of the received signal as a function of SNR and number of data points

For most applications, though, you can approach questions of accuracy with rules of thumb:

  • The more noise you have, the more samples you need to take to get rid of it—no surprise there. The average error of the estimate decreases as a power of the number of samples and also decreases with increasing signal-to-noise ratio. Figure 3 illustrates these relationships with data from a simulation experiment for our half-cycle sine wave example (a sketch of such a simulation appears after this list). The variance of the estimate exhibits similar properties.
  • The algorithm you choose to approximate the integral also influences accuracy. For extended Simpson's rule, error decreases approximately with the fourth power of the number of points.
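Here's a minimal sketch of such a simulation, assuming a unit-amplitude signal, T = 1, nine samples, and Gaussian noise generated with the Box-Muller transform; sweeping sigma (and the sample count) would reproduce curves like those in Figure 3:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N      9      /* samples per half cycle; odd, for Simpson's rule */
#define TRIALS 10000  /* Monte Carlo trials */

/* one zero-mean, unit-variance Gaussian deviate (Box-Muller transform) */
static double gauss(void)
{
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * M_PI * u2);
}

int main(void)
{
    double coeff[N];
    double h = 1.0 / (N - 1);  /* sampling interval with T = 1 */
    double sigma = 0.5;        /* noise standard deviation */
    double sum_abs_err = 0.0;

    /* build the MLE coefficient table: Simpson weight * s(t_i) * h/(3u), u = 1/2 */
    for (int i = 0; i < N; i++)
    {
        double w = (i == 0 || i == N - 1) ? 1.0 : ((i % 2) ? 4.0 : 2.0);
        coeff[i] = (h / 1.5) * w * sin(M_PI * i * h);
    }

    for (int t = 0; t < TRIALS; t++)
    {
        double estimate = 0.0;
        for (int i = 0; i < N; i++)
        {
            double r = sin(M_PI * i * h) + sigma * gauss();  /* received sample */
            estimate += r * coeff[i];
        }
        sum_abs_err += fabs(estimate - 1.0);  /* true amplitude is 1.0 */
    }

    printf("mean |error| = %.4f at sigma = %.2f\n", sum_abs_err / TRIALS, sigma);
    return 0;
}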

Design
Regardless of how much accuracy you need for your application, the accuracy you get from any practical estimation system will be limited by the number of samples you process. Since system resources limit the number of samples the system can process, tradeoffs are necessary. The trade space shown graphically in Figure 3 is summarized as the relationships among design goals for SNR, sampling rate, and accuracy in Table 1.

Table 1: Tradeoffs for maximum-likelihood estimators

Design goal                      Effects on design        Figures of merit
Increase estimation accuracy     Increase sampling rate   Average error, variance of error
Handle additional noise          Increase sampling rate   Signal-to-noise ratio
Decrease processor utilization   Decrease sampling rate   Processor speed, memory

By now, you've seen that maximum-likelihood estimators have a number of characteristics that make them desirable for use in embedded systems:

  • MLEs use all the information available in the received signal.
  • Each data point acquired needs only one multiplication and one addition.
  • You have your result as soon as you process the last sample.
  • Unlike time averaging and finite-impulse-response filters, MLEs don't require storage for blocks of samples or for individual samples.

Even with these benefits, though, MLEs are not a one-size-fits-all solution; any estimation technique has its limitations. For example, the MLE described in this article requires advance knowledge of the shape and the phase of the transmitted signal. The signal also has to be a single-valued function that is differentiable over the domain 0 ≤ t ≤ T. You also have to be able to assume the noise is additive, zero-mean, and Gaussian with frequency content higher than that of the signal you're trying to estimate. Finally, you'll need memory to store the coefficients if you can't or don't want to compute them on the fly. Fortunately, many real-world applications fit these limitations.

Here's a summary of the design process for an MLE:

  1. Determine the accuracy needed for the quantity you're estimating. This isn't the required accuracy of the entire system, only that part allocated to the estimation method to be implemented in software. Other subsystems such as optics and electronics take their share; as usual, software has to work with what is left over.
  2. Determine the amount and type of noise in the input signal.
  3. Determine the amount of processing time and memory available for use by the MLE. Unless you're using a processor dedicated to performing the estimation, you'll have to work with some fraction of the total processor bandwidth.
  4. Pick the number of samples needed to give the required accuracy, considering the amount of noise expected.
  5. Choose an algorithm to approximate the integral. This decision also has an influence on accuracy, so you may need to do Step 4 again.
  6. Decide if you'll use precomputed tables of coefficients or will compute coefficients in real time with each sample. If you choose the table form, determine the number of significant digits you need to store for each coefficient to meet your accuracy requirement.
  7. Estimate processor time and memory consumed from your decision in Step 6.
  8. If you exceed the resource budget determined in Step 3, you'll have to go back to Step 1.

If you get to Step 8 without an implementable or practical design, you'll have to renegotiate with other system stakeholders, asking for a faster processor, more memory, or a relaxation of the accuracy requirement.


Figure 4: Example of air-pollution monitor

Applications
Figure 4 shows a real-world example of how a maximum-likelihood estimator was used to estimate the concentration of an atmospheric pollutant to the level of a few parts per billion.

The optical elements of the system included a light source producing a collimated beam at the wavelength of an absorption peak of the pollutant, a diffraction grating driven by a software-controlled stepper that produced a transmitted signal modulating wavelength in time, a cell with transparent end windows to hold a sample of gas, a calibration cell containing a known concentration of pollutant that could be positioned in the optical path, and a photodetector. The light beam passing through the sample cell was attenuated in proportion to the concentration of the pollutant in the sample cell.

The signal-processing elements of the system included software that controlled the stepper motor that moved the diffraction grating, an equal-interval sampler, an averager, and an MLE. Constant u (Equation 2) was determined by purging the sample cell with gas free of the pollutant, inserting the calibration cell, and performing a measurement. Calibration was checked by inserting the calibration cell and repeating the measurement.

Software controlled both the position of the grating and the timing of sampling, enabling the wavelength of each sample to be known. Averaging over long intervals (tens of seconds) removed much of the noise. The MLE estimated the amplitude of the signal without having to explicitly search for the peak in residual noise. This combination of averaging and an MLE was dictated by limited processing resources, which in turn limited the number of samples that could be taken in each half-cycle of the signal. The integration was performed by an eight-term Newton-Cotes approximation using coefficients derived from Lagrange interpolation polynomials developed specifically for the input waveform.
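The waveform-specific coefficients from that project aren't reproduced here, but as one illustration of the approach, the standard eight-point closed Newton-Cotes weights can be combined with samples of the known transmitted waveform in the same way the Simpson coefficients were built earlier. BuildNewtonCotesCoeffs() is a hypothetical helper; s[] holds samples of the unit-amplitude waveform and u is the calibration constant from Equation 4:

/* standard eight-point closed Newton-Cotes weights (they sum to 17280) */
static const double nc8[8] = { 751, 3577, 1323, 2989, 2989, 1323, 3577, 751 };

/* build Equation 3 coefficients: (7h/17280) * w_i * s(t_i) / u */
void BuildNewtonCotesCoeffs(const double s[8], double h, double u,
                            double coeff[8])
{
    for (int i = 0; i < 8; i++)
    {
        coeff[i] = (7.0 * h / 17280.0) * nc8[i] * s[i] / u;
    }
}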

Multiple choice
You can develop maximum-likelihood estimators to estimate other signal parameters, including phase, arrival time, and frequency. Simultaneous estimation of multiple unknown parameters is possible. Other techniques can help you estimate parameters in noise other than zero-mean, white Gaussian noise, and with samples that aren't equally spaced in time. You'll find these techniques in McDonough and Whalen's book Detection of Signals in Noise.

Trace Baker has developed embedded software for measurement and control since the days when a 4KB memory card required three power supplies and cost about $2,000. He specializes in mission-critical systems for regulated markets and currently works in the aerospace industry.

Further reading
Abramowitz, M. and I. A. Stegun, Handbook of Mathematical Functions, Washington, DC: U.S. Government Printing Office, 1964.

McDonough, R. N. and A. D. Whalen, Detection of Signals in Noise (2nd edition), San Diego: Academic Press, 1995.

Scheaffer, R. L. and J. T. McClave, Probability and Statistics for Engineers, Belmont, CA: Duxbury, 1995.
