Quiet down out there!
A short review
In this series, I deliberately started slowly and moved forward in baby steps, so as not to leave anyone behind. I first chose the absolutely simplest example I could think of, which was to take the average of a set of numbers:

$$ y = \{y_1, y_2, \dots, y_N\} \qquad (1) $$
The mean, or average, is of course given by:

$$ \bar{y} = \frac{1}{N}\sum_{i=1}^{N} y_i \qquad (2) $$
The variance and standard deviation are measures of the level of noise in the data. To compute them, we defined the measurement error, or residual, to be:

$$ e_i = y_i - \bar{y} \qquad (3) $$

The variance, then, is the average of the squared residuals:

$$ \sigma^2 = \frac{1}{N}\sum_{i=1}^{N} e_i^2 \qquad (4) $$
The standard deviation is simply:

$$ \sigma = \sqrt{\sigma^2} \qquad (5) $$
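To make the batch computation concrete, here is a minimal Python sketch of Equations 2 through 5. The function name `batch_stats` is my own label, not something from the article:

```python
import math

def batch_stats(y):
    """Batch-mode statistics: process the whole data set at once."""
    n = len(y)
    mean = sum(y) / n                              # the mean (Eq. 2)
    residuals = [yi - mean for yi in y]            # residuals (Eq. 3)
    variance = sum(r * r for r in residuals) / n   # variance (Eq. 4)
    sigma = math.sqrt(variance)                    # standard deviation (Eq. 5)
    return mean, variance, sigma

print(batch_stats([2, 4, 4, 4, 5, 5, 7, 9]))   # → (5.0, 4.0, 2.0)
```

Note that this is the population variance (divide by N), matching Equation 4; some texts divide by N − 1 instead for the sample variance.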
To illustrate the method, I generated an example that produced a graph like the one in Figure 1. The two lines denoted "Upper" and "Lower" delimit an error band, given by $\bar{y} + \sigma$ and $\bar{y} - \sigma$. These limits are not hard and fast, of course. As you can see, many of the data points lie outside this "one-sigma" error band. Even so, it's a good measure of the average amplitude of the noise.
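You can see the "not hard and fast" nature of the band numerically. The sketch below builds a hypothetical data set (a constant signal of 10 plus unit Gaussian noise; the signal value and noise level are my choices, not the article's) and counts how many points land inside the one-sigma band:

```python
import math
import random

# Hypothetical data: a constant signal of 10 buried in unit Gaussian noise.
random.seed(1)
y = [10.0 + random.gauss(0.0, 1.0) for _ in range(1000)]

mean = sum(y) / len(y)
sigma = math.sqrt(sum((yi - mean) ** 2 for yi in y) / len(y))

# Count the points inside the one-sigma band [mean - sigma, mean + sigma].
inside = sum(1 for yi in y if mean - sigma <= yi <= mean + sigma)
fraction = inside / len(y)   # roughly 0.68 for Gaussian noise
```

For Gaussian noise, about 68% of the samples fall inside the band, so roughly a third of the points lie outside it, just as the figure shows.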

To get the result of Figure 1, we processed all the elements of y at the same time, just as they're shown in Equations 2 through 5. That's called batch mode processing. It's obviously a simple way to go, but it's not well suited to real-time systems because the time it takes to recalculate the mean is proportional to the number of elements in y. In real-time systems, we usually need the results quickly, and we're often limited in data storage space. For such cases, a better approach uses sequential processing, where we process only the one new data point each time it comes in.
To support sequential processing, we defined two running sums. For a given value of n ≤ N, they are:

$$ S_n = \sum_{i=1}^{n} y_i, \qquad T_n = \sum_{i=1}^{n} y_i^2 \qquad (6) $$
Each time a new measurement, $y_n$, comes in, we must update the sums according to:

$$ S_n = S_{n-1} + y_n, \qquad T_n = T_{n-1} + y_n^2 \qquad (7) $$
Then the updated mean and variance are given by:

$$ \bar{y}_n = \frac{S_n}{n} \qquad (8) $$

And:

$$ \sigma_n^2 = \frac{T_n}{n} - \bar{y}_n^2 \qquad (9) $$
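Here is one way the running-sum algorithm might look in Python (the class name and layout are my own). It keeps only the count and the two sums, so storage stays constant no matter how many samples arrive; the variance comes from the identity that the mean of the squared residuals equals the mean of the squares minus the square of the mean:

```python
import math

class RunningStats:
    """Sequential statistics via the two running sums S_n and T_n."""

    def __init__(self):
        self.n = 0
        self.s = 0.0   # S_n: running sum of y_i
        self.t = 0.0   # T_n: running sum of y_i squared

    def update(self, y):
        # Fold in one new measurement y_n as it arrives.
        self.n += 1
        self.s += y
        self.t += y * y

    @property
    def mean(self):
        return self.s / self.n

    @property
    def variance(self):
        m = self.mean
        return self.t / self.n - m * m

    @property
    def sigma(self):
        return math.sqrt(self.variance)
```

Each `update` costs a fixed handful of operations, which is exactly why this form suits real-time systems: the cost per sample doesn't grow with the number of elements already seen. (One caveat worth knowing: the sums-of-squares form can lose precision in floating point when the mean is large compared to the noise.)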

Using this algorithm, we got a figure like Figure 2.
