
Get More Yet

Jack's Rosetta Stone equation leads to some useful predictive techniques. Here's one based on difference tables.

We've been talking about digital data processing, how to reduce theory to practice, and how to really “get it” when it comes to advanced methods. I still have a lot to say on the subject—in fact, I'm just warming up—but first I'd like to take care of a few items of old business.

Unfortunately, we let a production error slip past us in the last column (February 2003, p. 13), which caused unnecessary confusion. In deriving the formula for the bilinear transformation, I had left out a factor of two in the power series for the natural log function. Equation 12 of the last column should have read:

(1/2) ln z = (z − 1)/(z + 1) + (1/3)[(z − 1)/(z + 1)]³ + (1/5)[(z − 1)/(z + 1)]⁵ + …   (1)

The factor of 1/2 was left off, which confused many of you, especially since its reciprocal showed up again in the bilinear transform itself:

s ≈ (2/h) (z − 1)/(z + 1)   (2)

Sorry about that.

To recap, if you begin with the “Rosetta Stone” equation:

z = e^(sh)   (3)

and revert it to give:

s = (1/h) ln z   (4)

Equation 2 follows immediately by truncating the power series of Equation 1 to the first term.

You may be asking, well, if the power series is the “real” translation, why do we limit it only to one term? Can't we do better if we use more terms? The answer, for practical purposes, is no.

Trust me, I've tried this; adding more terms does indeed help, but it complicates the formulas quite a bit, since z appears in both the numerator and denominator of all the terms. As things turn out, this is one instance, perhaps the only instance you'll see in these recent columns, where using a higher order formula (more terms in the power series) doesn't really help.

The complexity of the equations requires extra computing time, and, in this case, we're better off simply increasing the sampling rate, thereby cutting the step size.

Back to Rosetta
My goal last time was to show you how easily one can progress from a high-level and somewhat esoteric formula—Equation 3—to a practical implementation in real and usable embedded software. Admittedly, the example I chose was rather puny: a simple, first-order low-pass filter. Nevertheless, the method is appropriate for transfer functions of any order or level of complexity.

The thing I wanted to burn into your memory is not so much the results but the process. We began with the Rosetta Stone equation and used it to derive a power series, Equation 1. We then truncated the series and used that result to replace one operator, s, with another, z. Lastly, we rearranged the formula into something that can easily be turned into code. In the case of the s-to-z transformation, the key element is the bilinear transform, Equation 2.
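To make the process concrete, here's a minimal C sketch of where it all ends up. I'm assuming the filter in question is a first-order low-pass of the form H(s) = a/(s + a); pushing that through Equation 2 and rearranging gives the difference equation (2 + ah)yn = (2 − ah)yn−1 + ah(xn + xn−1). The names lp1_t and lp1_update are purely illustrative.

/* First-order low-pass H(s) = a/(s + a), discretized with the bilinear
 * transform s ~ (2/h)(z - 1)/(z + 1).  The resulting difference equation:
 *   (2 + a*h) y[n] = (2 - a*h) y[n-1] + a*h * (x[n] + x[n-1])
 */
#include <stdio.h>

typedef struct {
    double a;       /* cutoff frequency, rad/s */
    double h;       /* sample interval, s      */
    double x_prev;  /* x[n-1]                  */
    double y_prev;  /* y[n-1]                  */
} lp1_t;

static double lp1_update(lp1_t *f, double x)
{
    double ah = f->a * f->h;
    double y  = ((2.0 - ah) * f->y_prev + ah * (x + f->x_prev)) / (2.0 + ah);

    f->x_prev = x;
    f->y_prev = y;
    return y;
}

int main(void)
{
    lp1_t f = { 1.0, 0.1, 0.0, 0.0 };        /* a = 1 rad/s, h = 0.1 s */

    for (int n = 0; n < 50; n++)             /* step response climbs toward 1.0 */
        printf("%2d  %8.5f\n", n, lp1_update(&f, 1.0));
    return 0;
}

Cutting h, as suggested above, tightens the approximation without touching the structure of the code.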

Recall that a dynamic system, whether mechanical, electronic, or a control system, can often be described by a construct called the transfer function:

H(s) = P(s)/Q(s)   (5)

where P(s) and Q(s) are polynomials in s. Space doesn't permit me to go into all the properties of transfer functions in the s-domain, but one of the important properties is that transfer functions multiply. That is, if one dynamic system with transfer function H1(s) drives another one with transfer function H2(s), the composite transfer function is simply the product:

H(s) = H1(s)·H2(s)   (6)

In particular, the input and output signals are related by the definition of the transfer function:

Y(s) = H(s)·X(s)   (7)

This property is one of the main reasons controls engineers like to work in the s-domain. The property is not true in the time domain.
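Here's a hedged sketch of that property in code: two copies of an assumed one-pole low-pass, H1(s) = a/(s + a) and H2(s) = b/(s + b), cascaded simply by feeding the output of the first into the second. The function name stage and the cutoff values are illustrative only.

/* Cascading two discretized one-pole stages realizes H(s) = H1(s)*H2(s). */
#include <stdio.h>

#define H 0.1   /* sample interval, s (assumed) */

static double stage(double x, double a, double state[2])
{
    double ah = a * H;
    double y  = ((2.0 - ah) * state[1] + ah * (x + state[0])) / (2.0 + ah);

    state[0] = x;   /* previous input  */
    state[1] = y;   /* previous output */
    return y;
}

int main(void)
{
    double s1[2] = { 0.0, 0.0 };   /* state of H1, a = 1 rad/s */
    double s2[2] = { 0.0, 0.0 };   /* state of H2, b = 5 rad/s */

    for (int n = 0; n < 50; n++) {
        double y1 = stage(1.0, 1.0, s1);   /* unit step into H1   */
        double y2 = stage(y1,  5.0, s2);   /* H1's output into H2 */
        printf("%2d  %8.5f\n", n, y2);
    }
    return 0;
}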

Incidentally, some of you readers noticed that, in my example, I treated X(s) and Y(s) as the simple sequences of numbers, xi and yi. You asked me how I could get away with not taking the Laplace transform of the sequences, since the Laplace transform is, in the end, the mechanism by which the operator s is defined.

You'll notice that I weasel-worded the explanation quite a bit, basically saying, “The reason is not obvious and requires a lot of explanation.” I wasn't kidding, and we really don't have time or space to explain here. But I can perhaps help more than I did.

Here's the deal in a nutshell. The operator s is not really the same as the derivative D, though it might as well be. The Laplace transform is a very esoteric construct, similar and related to the Fourier transform. There are rules about how to generate the Laplace transform and its inverse transform, just as there are for the Fourier transform. The inverse transform of a parameter involves a term containing its initial condition. That shouldn't surprise you, since every time we integrate anything, there's always a constant of integration to be determined by one method or another.

However, you also know that initial conditions don't matter when we're seeking only steady-state solutions. The frequency response of a system is an obvious example. We don't really care about startup transients; we're only interested in what the signal looks like after startup transients have died down. Apply a steady, single-frequency sine-wave tone to a system and eventually it will settle down so that it produces the same frequency at the output. There may indeed be an initial transient as the wave is first applied, but all we care about is the amplitude and phase of the output sine wave. And that's the reason we don't do a full-up inverse Laplace transform.

I realize that this is not a very rigorous explanation. But trust me, the approach works in exactly the way I've described. If it makes you feel any better, Dr. Heaviside was greeted with similar skepticism when he took the same shortcut. It was decades later that mathematicians were finally able to put the operator-based Heaviside calculus on a rigorous footing. Meanwhile, Heaviside and his students were calmly going about their business, getting useful results, and building systems that worked.

More operators
Let's recap where we are. At this point, you've seen the Rosetta Stone, Equation 3, and its inverse, Equation 4. Last time, you learned that Equation 4 isn't just some esoteric mathematical curiosity, but a useful equation that leads directly to the bilinear transform (Equation 2). I also showed you how useful the bilinear transform can be in converting a transfer function from the Laplace-transform domain of s to the discrete world of z. The only thing I could add to that process would be to repeat the conversion for a more complex filter. We can do that, if you like, but first you need to know that converting D (or s) into z and back is only a small part of what the operator techniques we've been studying can do for you. For starters, D and z aren't the only operators we can use.

Suppose you have a sequence of values x = {x0, x1, x2, …}. We've already seen how the operator z advances from one element of the sequence to the next:

z xn = xn+1   (8)

Another useful operator is the forward difference operator, Δ:

Δxn = xn+1 − xn   (9)

Fans of calculus will immediately recognize this notation, since it's the mechanism by which we define a derivative:

dx/dt = lim (Δt → 0) Δx/Δt   (10)

The usage is similar, but not exact, since Equation 10 involves the derivative of a continuous function, while the Δ of Equation 9 is for discrete sequences. Even so, those of you who are comfortable with calculus should be comfortable with the definition. Δx is a (hopefully small) difference between two adjacent points, either on a functional curve or in a sequence.

One item you may have missed is that the operator involves a very specific choice of the adjacent points. Equation 9 requires that the two points be the current point, xn, and the next one in the sequence, xn+1. That's why it's called the forward difference.

In addition to a forward difference operator, there's also a backward difference operator. It's written as an upside-down delta:

∇xn = xn − xn−1   (11)

As their names imply, the forward difference operator represents the difference between the current point and the next one in the sequence. The backward difference operator involves the current value minus the previous one.
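Here's a tiny sketch, just to fix the definitions: it applies both operators to the first few sample values that will show up in Table 1 later on. The array contents are only data, and the names dx and bx are mine.

/* Forward and backward differences of a short sample sequence. */
#include <stdio.h>

int main(void)
{
    double x[] = { 0.0, 0.29946, 0.55199, 0.736252, 0.838748 };
    int    n   = sizeof x / sizeof x[0];

    for (int i = 0; i < n; i++) {
        printf("x[%d] = %8.6f", i, x[i]);
        if (i + 1 < n)
            printf("   forward  dx[%d] = %9.6f", i, x[i + 1] - x[i]);
        if (i > 0)
            printf("   backward bx[%d] = %9.6f", i, x[i] - x[i - 1]);
        printf("\n");
    }
    return 0;
}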

It should go without saying that the two difference operators are closely related. They're also related, less closely, to z. For example, look at Equation 9. Using Equation 8, we can write it as:

Δxn = z xn − xn = (z − 1) xn

Or, “dividing out” the xn:

Δ = z − 1   (12)

Likewise:

∇ = 1 − 1/z   (13)

Now, take Equation 12 and solve for z to get:

z = Δ + 1   (14)

Substituting this into Equation 13 gives:

∇ = 1 − 1/(Δ + 1)

which simplifies to:

∇ = Δ/(Δ + 1)   (15)

Conversely:

Δ = ∇/(1 − ∇)   (16)

Finally, reverting Equation 13 gives:

z = 1/(1 − ∇)   (17)

What good is it?
Do you see the value of all these operators yet? Well, watch this. Suppose you have a table of measurements of some input parameter; call it x. The last measurement we made was xn. We'd now like to extrapolate the data to estimate what the next measurement is going to be. That measurement, of course, should be xn+1, which we know is equal to z xn. But, according to Equation 17, that's equal to:

xn+1 = [1/(1 − ∇)] xn   (18)

But, you ask, what good does that do? How the heck do we take the reciprocal of a term involving an operator? Elementary, my dear Watson: create yet another power series. My handy-dandy math handbook tells me that:

1/(1 − ∇) = 1 + ∇ + ∇² + ∇³ + ∇⁴ + …   (19)

(You can also generate this series by synthetic division, or, better yet, prove it by multiplying the series by the denominator.) Put it all together, and you get a practical formula for estimating xn+1:

xn+1 = (1 + ∇ + ∇² + ∇³ + …) xn = xn + ∇xn + ∇²xn + ∇³xn + …   (20)
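Before we get to the real example, here's a minimal sketch of Equation 20 at work on the simplest possible data. For the polynomial sequence xn = n², the second differences are constant and everything beyond them is zero, so the extrapolation comes out exact; the variable names are illustrative.

/* Extrapolate x[6] from x[0..5] = n*n using x[n+1] = x[n] + del(x[n]) + del2(x[n]). */
#include <stdio.h>

int main(void)
{
    double x[6];

    for (int n = 0; n < 6; n++)
        x[n] = (double)(n * n);             /* 0, 1, 4, 9, 16, 25  */

    double d1 = x[5] - x[4];                /* backward difference */
    double d2 = d1 - (x[4] - x[3]);         /* second difference   */
    double estimate = x[5] + d1 + d2;       /* higher orders are 0 */

    printf("estimated x[6] = %.1f  (exact value is 36)\n", estimate);
    return 0;
}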

We haven't talked yet about how to build tables of differences, but it's simple enough. First, we define what higher powers of ∇ mean. They're defined just as the operator itself is defined. For example:

∇²xn = ∇(∇xn) = ∇(xn − xn−1) = xn − 2xn−1 + xn−2   (21)

You can see how the higher orders will go. We could expand each difference above even further, but that defeats our purpose here. What we must do is start with the sequence of numbers, xn , and build up tables of the differences. To make the process more obvious, I'm going to generate a function and its tabular values. Then we'll work together to build the differences and, finally, the extrapolated value.

A good function is:

x(t) = e^(−t/10) sin t   (22)

I chose this function because it's neither simple nor a polynomial. The latter is bad because the error will be zero (the estimation will be perfect). The function is shown in Figure 1, with markers indicating the sampling points. I think you will agree this is a pretty crude sampling frequency—only 20 points per cycle.


Figure 1: An example function

Here's how we build the tables. First, write down as many of the x values as you think might be useful. Then, in the next column, write down the difference of the current value and the previous one (note that you can't do this for the first entry, since there's no previous value to subtract from it). Do the same thing in the next column, taking the difference of those adjacent values. Table 1 shows the table partially filled out. See how it works? Each entry in the difference tables is the difference of the entry just to the left of it and the entry to the left and up one.

Table 1: The difference table

t     x          ∇x          ∇²x         ∇³x         ∇⁴x         ∇⁵x
0     0
1     0.29946    0.29946
2     0.55199    0.25253    -0.04693
3     0.736252   0.184262   -0.06827    -0.02134
4     0.838748   0.102496   -0.08177    -0.0135      0.007841
5     0.854636   0.015888   -0.08661    -0.00484     0.008656    0.000816
6     0.787669  -0.06697    -0.08286     0.003752    0.008593   -6.3 × 10⁻⁵
7     0.649309  -0.13836    -0.07139     0.011462    0.00771    -0.00088
8     0.45716   -0.19215    -0.05379     0.017605    0.006143   -0.001567
9     0.23291   -0.22425    -0.0321      0.021687    0.004082   -0.00206
10    0         -0.23291    -0.00866     0.023442    0.001755   -0.00233

Now let's assume that we have taken six measurements so far and want to estimate the seventh, x6. According to the formula:

x6 ≈ (1 + ∇ + ∇² + ∇³ + ∇⁴ + ∇⁵) x5 = 0.854636 + 0.015888 − 0.08661 − 0.00484 + 0.008656 + 0.000816   (23)

A little math shows that to be x6 = 0.788548098. The actual value, as we see from the table, is 0.787669, so we're off by an error of 0.00088, or roughly 0.1%. Considering the very sparse table of data (notice how much the x values change from sample to sample), that's not bad at all. To see that error go to zero in a big hurry, simply increase the sample rate. We could also use yet higher-order differences. Either way works, within limits.
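If you'd like to check that arithmetic in code, here's a short sketch that rebuilds the difference table and evaluates Equation 23. It assumes the function of Equation 22, x(t) = e^(−t/10) sin t, sampled every h = π/10 (20 points per cycle); the array layout and names are mine.

/* Build the difference table for x(t) = exp(-t/10)*sin(t), h = pi/10,
 * then extrapolate x6 from row 5 (Equation 23). */
#include <math.h>
#include <stdio.h>

#define N     7    /* samples x0..x6                 */
#define ORDER 5    /* keep differences up to order 5 */

int main(void)
{
    const double pi = 3.14159265358979323846;
    const double h  = pi / 10.0;
    double d[N][ORDER + 1];   /* d[n][0] = xn; d[n][k] = k-th backward difference */

    for (int n = 0; n < N; n++) {
        double t = n * h;
        d[n][0] = exp(-t / 10.0) * sin(t);
    }

    /* Each entry is the one to its left minus the one up and to the left. */
    for (int k = 1; k <= ORDER; k++)
        for (int n = k; n < N; n++)
            d[n][k] = d[n][k - 1] - d[n - 1][k - 1];

    double estimate = 0.0;                  /* Equation 23: sum row 5 */
    for (int k = 0; k <= ORDER; k++)
        estimate += d[5][k];

    printf("estimated x6 = %.9f\n", estimate);   /* about 0.788548 */
    printf("actual    x6 = %.9f\n", d[6][0]);    /* about 0.787669 */
    return 0;
}

(On most toolchains you'll need to link the math library to build this.)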

But wait …
I chose to implement the extrapolation using difference tables rather than, say, powers of z⁻¹, because I wanted you to see the difference operators and the way one builds a difference table. Using difference tables is traditional in the old, hand-calculated formulas—for good reason. As you can see from Table 1, the values of the differences get smaller each time we take another difference (if they don't, something's wrong). When doing hand calculations, it's easy to see when we've taken enough differences.

We'll talk about this more next time, and to prove that we could have used a formula in z⁻¹, I'll repeat the process using those operators. See you then.

Jack Crenshaw is a senior software engineer at Spectrum-Astro and the author of Math Toolkit for Real-Time Programming, from CMP Books. He holds a PhD in physics from Auburn University.
