
Forward Interpolation

Engineers are always looking to make their designs more efficient. Don has a DSP technique that should ease your computational burden.

Interpolation and approximation are thorny issues for software engineers. This is especially true when it comes to signal processing, where engineers must derive useful values with sufficient precision and accuracy.

Space is at a premium; every new board is smaller, with less room for parts and traces. Projects are increasingly cost sensitive; the parts and hardware design are often done without any input from the engineer who must program the board and make it work. And even when collaboration takes place among the various members of the team, the result is invariably a compromise between cost, size, and functionality.

Very often this means that the DSP engineer is cramped for memory and working with a processor that has less speed and word size than he would like. Still, overcoming these challenges can produce better products and bring the genius out of the individual engineer. Difficult circumstances often lead to solutions both brilliant and elegant.

In my last column (“Educated Guessing Games,” March 2002, p. 59), I took the most commonly used form of interpolation, linear interpolation, and compared it to Lagrange interpolation. I showed you how, by means of Lagrange interpolation, you can use a 10-point table to achieve the same precision and accuracy as a 1,000-point table in linear interpolation. This is because we used a polynomial fitted to the n data points in our table, while linear interpolation merely draws a straight line between any two data points and takes a proportion based on our x. Clearly, if you must generate or approximate functions (say, sinusoids for sound or curves for motion) and you can reduce your memory requirements, you'll have more room to do other things.

In this column, I introduce an equivalent to Lagrange interpolation that involves less computation. It allows for the use of previous values in the calculation of future values (instead of having to start over, as with the Lagrange form), and its theory can easily be used as a basis for finite expansions similar to the infinite expansions of the Taylor series. Stuff like this is always of use in digital signal processing, but it will find plenty of value in other forms of processing as well.

Binomial coefficients

A binomial coefficient takes the symbol C(r, k) and is read “r choose k” (k <= r). The binomial coefficient is the number of ways to choose a k-element set from an r-element set. This is known as having a combinatorial interpretation.

The upper number, r, is the size of the set from which all choices are made. This means there will be r choices for the first element of a sequence or set, leaving r - 1 choices for the second, and so on, until only r - k + 1 choices remain for the last. Thus:

r(r - 1)(r - 2)…(r - k + 1)   (1)

represents the total number of choices.

A factorial represents the number of permutations of n things. The factorial of 4 breaks down like this:

4! = 4(4 - 1)(4 - 2)(4 - 3) = 24   (2)

Any k elements we choose can themselves be arranged in exactly k! orders, so Equation 1 counts each k-element set k! times. Dividing Equation 1 by k! gives the number of distinct sets.

To take an example, let us say we have four letters: {A,B,C,D}. In this case:

C(4, 2) = (4)(3)/2! = 6

says that we can choose two of those letters in six different ways:

{A,B}, {A,C}, {A,D}, {B,C}, {B,D}, {C,D}

Although you may only be familiar with binomial coefficients from probability, the concept and its uses are ubiquitous throughout algebra and calculus. This will become apparent as we go along.

Table 1: Pascal's triangle

n   C(n, k) for k = 0 … n
0   1
1   1   1
2   1   2   1
3   1   3   3   1
4   1   4   6   4   1
5   1   5   10  10  5   1
6   1   6   15  20  15  6   1

A rather interesting exposition of binomial coefficients is Pascal's triangle, named for Blaise Pascal. Table 1 shows a small portion. Each interior entry is the sum of the two entries above it; in symbols, C(n, k) = C(n - 1, k - 1) + C(n - 1, k).

The patterns in Table 1 may be familiar: row n holds the coefficients you get when expanding (a + b)^n. That is not a surprise when you look at Equations 1 and 2, which are themselves products of successive linear factors, the building blocks of polynomials.
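Generating the triangle in code takes nothing more than that additive rule. Here is a minimal C sketch; the function name and the 32-entry row buffer are my own choices, and int arithmetic limits it to roughly the first 30 rows:

    #include <stdio.h>

    /* Print the first rows of Pascal's triangle (rows < 32) using the
       additive rule C(n, k) = C(n - 1, k - 1) + C(n - 1, k). */
    void pascal(int rows)
    {
        int c[32] = { 1 };                    /* current row; row 0 is just 1 */

        for (int n = 0; n < rows; n++) {
            for (int k = 0; k <= n; k++)
                printf("%d ", c[k]);
            printf("\n");
            for (int k = n + 1; k > 0; k--)   /* build the next row in place, */
                c[k] += c[k - 1];             /* working right to left        */
        }
    }

Calling pascal(7) prints exactly the seven rows of Table 1.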

You should be aware of some simple formulations for these binomial coefficients, because their pattern will appear again and again in the equations as we develop the forward interpolation. The most familiar form uses factorials:

C(r, k) = r!/(k!(r - k)!)

There is another representation, however, that in many cases will serve you better in computation:

C(r, k) = r(r - 1)(r - 2)…(r - k + 1)/k!

This form is useful because it helps avoid some arithmetic problems that can arise in our calculations, such as the singularities produced by dividing by zero. It also remains meaningful when r is not an integer, which is exactly the situation we will meet when interpolating between data points.
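In code, that product form translates directly into a short loop. The sketch below multiplies in one factor of the numerator and divides out one factor of k! on each pass, which keeps the intermediate values small; making the upper argument a double is deliberate, because later we will need C(s, k) for fractional s:

    /* Product-form binomial coefficient: r(r - 1)...(r - k + 1) / k!.
       Accepts any real r, including the fractional values interpolation
       will hand it, and never forms the factorial of a negative number. */
    double binomial(double r, int k)
    {
        double result = 1.0;

        for (int i = 0; i < k; i++)
            result *= (r - i) / (i + 1);   /* one factor of each product */
        return result;
    }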

Forward differences

It is possible to construct a table of differences based on any series of data points that will completely describe that series. This is a finite table of finite differences.

Table 2: Finite table of finite differences

x f(x) Δ0 Δ1 Δ2 Δ3 Δ4 Δ5 Δ6 Δ7 Δ8 Δ9
0 0 0 1 -2 2 0 -4 8 -8 0 16
1 1 1 -1 0 2 -4 4 0 8 16
2 0 0 -1 2 -2 0 4 -8 8
3 -1 -1 1 0 -2 4 -4 0
4 0 0 1 -2 2 0 -4
5 1 1 -1 0 2 -4
6 0 0 -1 2 -2
7 -1 -1 1 0
8 0 0 1
9 1 1

First, let's suppose a table of differences based on sampling a sinusoid at intervals of π/2. This is shown in Table 2. The column under f(x) is actually the data collected with the sampling formula:

f(x) = sin(πx/2)   (3)

That's not much information. Linear interpolation would produce very coarse results in a circumstance like this, as it can only draw a straight line between samples and take a proportion.

For our forward interpolation scheme, we must use both binomial coefficients and forward differences. Forward differences (and their backward counterparts) are finite arithmetic's analog of the derivative. We can build the difference table using the relationships shown in Table 3. In (a), the 0th difference is simply the value of the function at the index point itself; for index 0, that is 0. In (b), the first difference is the value of the function at the next index minus its value at the original index, which for index 0 turns out to be 1. After doing the math, notice how the binomial coefficients reappear in these difference formulations.

Table 3: Difference table relationships

(a)  Δ^0 f(x) = f(x)
(b)  Δ^1 f(x) = f(x + 1) - f(x)
(c)  Δ^2 f(x) = f(x + 2) - 2f(x + 1) + f(x)
(d)  Δ^3 f(x) = f(x + 3) - 3f(x + 2) + 3f(x + 1) - f(x)
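In code, row 0 of the table can be built in place from the samples, one difference order per pass. This is a sketch assuming the ten samples of Table 2; N, the function name, and the in-place layout are my own choices:

    #define N 10   /* number of samples in Table 2 */

    /* Build forward differences in place. On entry, d[] holds the samples
       f(0)..f(N-1); on exit, d[k] holds the k-th difference of row 0,
       which for Table 2 is 0, 1, -2, 2, 0, -4, 8, -8, 0, 16. */
    void forward_differences(double d[N])
    {
        for (int k = 1; k < N; k++)            /* k-th difference pass     */
            for (int i = N - 1; i >= k; i--)   /* right to left, so lower- */
                d[i] -= d[i - 1];              /* order values survive     */
    }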
We can reconstruct our data points simply by summing the products of the differences on any line with the binomial coefficients that correspond to the distance between the starting and ending indices. In other words:

f(x) = Σ C(x - x0, k) Δ^k f(x0), summed over k = 0 to n   (4)

For example, if we start at index 0 (which means we use row 0), the value of the function at index 1 is equal to:

f(1) = C(1, 0)(0) + C(1, 1)(1) = 0 + 1 = 1

The value of the function at index 2 is equal to:

f(2) = C(2, 0)(0) + C(2, 1)(1) + C(2, 2)(-2) = 0 + 2 - 2 = 0

The value of the function at index 3 is equal to:

f(3) = C(3, 0)(0) + C(3, 1)(1) + C(3, 2)(-2) + C(3, 3)(2) = 0 + 3 - 6 + 2 = -1

Combined with the proper binomial coefficients, the differences relative to any rank can reconstruct the data points of the original table. Of course, it would have to be this way for Equation 4 to work properly as an interpolating polynomial fitted to the actual data points, but it is worth demonstrating because it also shows the actual interpolation process at work: Equation 4 is the Newton forward interpolation formula fitted to the data points.
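Those sums are easy to verify in code. Reusing the binomial() and forward_differences() sketches from above, a few lines rebuild the entire f(x) column of Table 2 from row 0 alone:

    #include <stdio.h>

    int main(void)
    {
        double d[N] = { 0, 1, 0, -1, 0, 1, 0, -1, 0, 1 };   /* f(x) column */

        forward_differences(d);         /* d[k] is now the k-th difference */

        for (int x = 0; x < N; x++) {
            double f = 0.0;
            for (int k = 0; k <= x; k++)        /* C(x, k) = 0 for k > x  */
                f += binomial(x, k) * d[k];
            printf("f(%d) = %g\n", x, f);       /* 0, 1, 0, -1, 0, 1, ... */
        }
        return 0;
    }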

Forward interpolation

To make forward interpolation useful, we need to break Equation 4 down a little more, starting with the expression for the value to be interpolated. In Equation 4, x is the sum of two parts: x0, a known data point that we take as the origin of our interpolating polynomial, and an offset measured in units of the grid upon which the data points are collected. By grid, I mean the sampling interval. In our example, the data points are collected at intervals of the integer 1, in other words at 0, 1, 2, 3, 4, 5, and so on. If the sampling interval had been 0.5, the indices would have looked like 0, 0.5, 1, 1.5, 2, and so on.

To make this work smoothly, we will create a value:

s = (x - x0)/grid

Here, x represents the value at which we wish to interpolate, x0 is the starting point of the interpolation formula (in this example, index 0), and grid is the sampling interval. Now we rewrite the formula slightly to accommodate this value, which may or may not fall on an exact data point:

f(x0 + s·grid) = Σ C(s, k) Δ^k f(x0), summed over k = 0 to n
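Here is how the rewritten formula might look as a C routine, again a sketch built on the pieces above (the function name and argument order are my own). It also shows the reuse of previous values promised earlier: each coefficient C(s, k + 1) is produced from C(s, k) with one multiply and one divide, so nothing is recomputed from scratch:

    /* Newton forward interpolation from the row-0 differences d[].
       x0 is the origin of the table; grid is the sampling interval. */
    double newton_forward(double x, double x0, double grid, const double d[N])
    {
        double s = (x - x0) / grid;    /* x measured in grid units from x0 */
        double coeff = 1.0;            /* running value of C(s, k)         */
        double f = 0.0;

        for (int k = 0; k < N; k++) {
            f += coeff * d[k];
            coeff *= (s - k) / (k + 1);   /* C(s, k + 1) from C(s, k) */
        }
        return f;
    }

Fed a value that lands on a data point, say newton_forward(3.0, 0.0, 1.0, d), the routine returns -1, to within rounding; near the middle of the table, newton_forward(4.5, 0.0, 1.0, d) should come out close to sin(2.25π), about 0.707.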
How well does it work? If I take the values in Table 2 and calculate the 9th-order interpolation formula using the differences in row 0, I get the approximation to the sinusoid depicted in Figure 1. Pretty good for values like 1, -1, and 0, isn't it?

But note the extremes. That's where the error is concentrated, just as it is in the Taylor series and in the Lagrange interpolator I showed you in my last column. In Figure 2, I've plotted the difference between the sinusoid calculated with a tighter sampling interval and our approximation.
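The data behind a plot like Figure 2 can be generated with a short sweep: step across the table at a fine interval and subtract the approximation from the true sinusoid. A sketch, with the 0.01 step an arbitrary choice:

    #include <math.h>
    #include <stdio.h>

    /* Tabulate the error of the fit across the whole table. */
    void error_sweep(const double d[N])
    {
        const double pi = 3.14159265358979;

        for (double x = 0.0; x <= 9.0; x += 0.01)
            printf("%.2f %g\n", x,
                   sin(pi * x / 2.0) - newton_forward(x, 0.0, 1.0, d));
    }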


Figure 1: Sinusoid overlaid with the approximation calculated from the data points in Table 2, row 0


Figure 2: Difference between sinusoid and approximation

This interpolator can still be improved. Next time, I'll go into further detail concerning the algorithm for the Newton forward interpolator. I will show you how to start at any point in the data table in order to avoid the errors you see at the end points. You'll also see what Chebyshev functions can do.

For those interested in a deeper look at the workings of this algorithm, there is a MathCad document available at ftp://ftp.embedded.com/pub/2002/newtf.zip that will allow just that.

We're discussing the Newton forward interpolator, but there is also a backward interpolator that functions in a similar manner except for the direction in which we take our differences. Most of what I present here applies to both interpolators; the forward interpolator just fits our purposes best at the moment.

Until next time…

Don Morgan is a senior engineer at Ultra Stereo Labs and a consultant with 25 years of experience in signal processing, embedded systems, hardware, and software. His e-mail address is .

