# Line-fitting algorithms for exceptional cases: minimax line fitting

In many embedded systems design applications, line-fitting techniques, particularly minimax algorithms, are used instead of the better-known regression methods to “fit” noisy data. More often than not, minimax fitting is preferred to regression when the goal is to find a fitting line with minimum tolerance, as in some automatic control applications.

My first exposure to minimax line fitting came via an engineer who asked me to help him fit some sampled sensor data. He was getting unacceptable results with regression and thought that minimax might do better. The engineer told me that he wanted something quick and simple and that small code size was required, but that high performance was not a priority because the computations would be done off-line.

This article presents my own attempts to meet these requirements. It describes both useful strategies and pitfalls to avoid in tackling similar problems.

**Regression does not always give the “best” linear fit**

By far the most common way to fit noisy data with a line is to use simple regression. If the noisy data consists of points (x_{n}, y_{n}), n = 1, …, N, then the regression equations give:

*Y* = *M*•*X* + *B*

where *M* and *B* are defined in terms of the accessory values Σ_{X}, Σ_{Y}, Σ_{XX}, and Σ_{XY} as follows:

Σ_{X} = the sum of the x-values = Σ_{n} x_{n}

Σ_{Y} = the sum of the y-values = Σ_{n} y_{n}

Σ_{XX} = the sum of the x^{2}-values = Σ_{n} x_{n}^{2}

Σ_{XY} = the sum of the xy-values = Σ_{n} x_{n} y_{n}

M = (N•Σ_{XY} – Σ_{Y}Σ_{X}) / (N•Σ_{XX} – Σ_{X}Σ_{X})

B = (Σ_{Y} – M•Σ_{X}) / N.
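For concreteness, the regression formulas above can be sketched in a few lines of Python (the article's own prototype code is in Octave; this helper and its name are mine):

```python
def regression_line(xs, ys):
    """Return slope M and intercept B of the least-squares line Y = M*X + B,
    computed from the accumulated sums defined in the text."""
    n = len(xs)
    sx = sum(xs)                               # Sigma_X
    sy = sum(ys)                               # Sigma_Y
    sxx = sum(x * x for x in xs)               # Sigma_XX
    sxy = sum(x * y for x, y in zip(xs, ys))   # Sigma_XY
    m = (n * sxy - sy * sx) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

# Noise-free sanity check: points on y = 2x + 1 recover M = 2, B = 1.
print(regression_line([0, 1, 2, 3], [1, 3, 5, 7]))  # → (2.0, 1.0)
```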

Nonetheless, there are many cases where regression is *not* the best option, for instance when:

• a *minimax fit* is required (in other words, one that minimizes the maximum deviation between line and data);

• the noise varies systematically in such a way that it affects the linear trend;

• the linear fitting takes place in real time and is continually adjusted as new data comes in;

• there is a possible alias in the y-value (for instance, with angle data there may be an ambiguity of ±*k*•π or ±2*k*•π).

This article will discuss the minimax fit. Additional articles, which will appear later this month on Embedded.com, will deal with the other cases.

**Minimax fitting: polynomials and lines**

Minimax fitting is preferred to regression when the goal is to find a fitting line with *minimum tolerance*. This makes it appropriate for some automatic control applications.

The difference between minimax and least-squares fitting is most apparent when the deviations are not symmetrical, as shown in **Figure 1** . The regression line follows the **bulk** of the data, while the minimax line tries to “split the difference” between high and low deviations. In the case shown in Figure 1, the minimax line gives an error tolerance about 40% smaller than that achieved via regression.


**Libraries of downloadable algorithms**

At the time I started working on the problem, I was not aware of a linear minimax fitting algorithm. However, I did know of a general method for polynomial minimax fitting called the *Remez algorithm* (polynomial fitting includes linear fitting as a special case). There are numerous references to the Remez algorithm on the web (including Wikipedia) and in numerical analysis textbooks.

With a little searching, I found that there is downloadable Remez code available from various Web libraries. **Table 1** summarizes the results of my investigation. Unfortunately, all the codes in Table 1 are designed for general-purpose use and were far too large and complex for the specific application I wanted.


My first thought was to try to simplify one of the codes, based on the documentation provided. As shown in Table 1, only two libraries provided documentation of the algorithm, while the others gave references to the literature. I wanted to avoid digging through textbooks and journals if possible, so I started hopefully through the www.boost.org article on the Remez algorithm. My enthusiasm was somewhat dampened when I encountered several mathematical statements that I could easily prove were not true. My guess is that the article was written by an implementer who did not fully understand the algorithm–caveat lector!

**The heart of the algorithm**

The Mathworks documentation, on the other hand, I found to be accurate and very understandable, with clear explanations and excellent figures. There I found the following pearl of wisdom:

Chebyshev … gave the criteria for a polynomial to be a minimax polynomial. Assuming that the given interval is [a,b], Chebyshev's criteria states that if P_{n}(x) is the minimax polynomial of degree n then there must be at least (n+2) points in this interval at which the error function attains the absolute maximum value with alternating sign.

Now here was something I could take to the bank! In the case of a linear fit (*n* = 1), there had to be (at least) three data points where the maximum error was achieved. Here we denote the three points as *P*_{1}, *P*_{2}, and *P*_{3}, where *P*_{j} = (*x*_{j}, *y*_{j}) and *x*_{1} < *x*_{2} < *x*_{3}. The alternating-sign condition leaves two possibilities:

(A) P_{1}, P_{2}, and P_{3} lie above, below, and above the fitted line, respectively;

(B) P_{1}, P_{2}, and P_{3} lie below, above, and below the fitted line, respectively.

These two possibilities are shown in **Figure 2** . Thinking geometrically about these two possibilities quickly led to several conclusions (shown in **Figure 3** ):

• P_{1} and P_{3} must be consecutive points in the convex hull^{1} of all data points. This must be true because all data points either lie below the line P_{1}P_{3} (as in case A) or above the line P_{1}P_{3} (as in case B).

• P_{2} must also be a vertex point on the convex hull of all data points. This holds because all data points lie either above (case A) or below (case B) the line through P_{2} that is parallel to P_{1}P_{3}.



• Segment P_{1}P_{3} must be parallel to the fitted line. This is necessarily true because P_{1} and P_{3} have the same error, hence the same vertical distance to the line.

These findings prompted the following straightforward prescription for an algorithm:

1. Find all vertices of the convex hull of all data points. The vertices are divided into two sets: upper envelope (U) and lower envelope (L).

2. For every pair of consecutive vertices in U (denote these vertices as (u1,v1) and (u2,v2)):

2.1. For all vertices (x,y) in L with u1 ≤ x ≤ u2, find the vertex that has the largest vertical distance to the upper convex hull. This vertical distance is given by:

D = x×(v2 – v1)/(u2 – u1) + (v1×u2 – v2×u1)/(u2 – u1) – y

3. Select the upper vertices (u1,v1)^{+}, (u2,v2)^{+} and lower vertex (x,y)^{+} that correspond to the overall largest vertical distance D^{+} found in step 2.

4. Repeat steps 2 and 3, switching the roles of the upper envelope (U) and lower envelope (L). Select the lower vertices (u1,v1)^{–}, (u2,v2)^{–} and upper vertex (x,y)^{–} that correspond to the overall largest vertical distance D^{–}.

5. a. If D^{+} ≥ D^{–}, then the minimax line is parallel to the segment [(u1,v1)^{+}, (u2,v2)^{+}] and lies a vertical distance D^{+}/2 below the segment;

b. If D^{+} < D^{–}, then the minimax line is parallel to the segment [(u1,v1)^{–}, (u2,v2)^{–}] and lies a vertical distance D^{–}/2 above the segment.
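As a quick sanity check of the vertical-distance formula in step 2.1 (the function name and numeric values here are mine): it is just the height of the chord through (u1,v1) and (u2,v2), evaluated at x, minus y.

```python
def vertical_gap(u1, v1, u2, v2, x, y):
    """Vertical distance from the chord through (u1, v1) and (u2, v2)
    down to the point (x, y), per step 2.1."""
    return x * (v2 - v1) / (u2 - u1) + (v1 * u2 - v2 * u1) / (u2 - u1) - y

# The chord from (0, 1) to (4, 3) has height 2 at x = 2, so the
# vertex (2, 0) lies 2 units below it.
print(vertical_gap(0, 1, 4, 3, 2, 0))  # → 2.0
```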

The key remaining task was to obtain an algorithm for finding the convex hull. A Google search on “convex hull algorithm” quickly led to the “Graham scan” article on Wikipedia, which even contains pseudocode. In the case at hand, the algorithm was made even easier by virtue of the fact that the data was sorted in order of increasing x-coordinate, and the x-coordinates were evenly spaced.
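Putting the pieces together, the whole procedure can be sketched in Python (the article's actual implementation is the Octave code of Listing 1; the function names here, and the use of Andrew's monotone-chain formulation of the hull scan, are my own). It assumes the data contains at least two distinct x-values.

```python
def _cross(o, a, b):
    # z-component of (a - o) x (b - o); positive means a left turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def envelopes(pts):
    """Step 1: upper and lower envelopes of the convex hull, left to right
    (Andrew's monotone-chain variant of the Graham scan)."""
    pts = sorted(pts)
    lower = []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return upper[::-1], lower

def _widest_gap(top, bot):
    """Steps 2-3: largest vertical distance from a vertex of `bot` to a
    chord of consecutive vertices of `top` spanning its x-coordinate."""
    best_d, best_seg = -1.0, None
    for (u1, v1), (u2, v2) in zip(top, top[1:]):
        for (x, y) in bot:
            if u1 <= x <= u2:
                d = abs(x * (v2 - v1) / (u2 - u1)
                        + (v1 * u2 - v2 * u1) / (u2 - u1) - y)
                if d > best_d:
                    best_d, best_seg = d, ((u1, v1), (u2, v2))
    return best_d, best_seg

def minimax_line(pts):
    """Return slope, intercept, and tolerance of the minimax line."""
    upper, lower = envelopes(pts)
    d_plus, seg_plus = _widest_gap(upper, lower)    # steps 2-3
    d_minus, seg_minus = _widest_gap(lower, upper)  # step 4
    if d_plus >= d_minus:                           # step 5a: shift down
        ((u1, v1), (u2, v2)), d, shift = seg_plus, d_plus, -1
    else:                                           # step 5b: shift up
        ((u1, v1), (u2, v2)), d, shift = seg_minus, d_minus, +1
    m = (v2 - v1) / (u2 - u1)
    b = v1 - m * u1 + shift * d / 2
    return m, b, d / 2
```

For the triangle (0,0), (1,1), (2,0), the fit splits the difference between the peak and the baseline: the line y = 0.5, with all three critical points at error 0.5 and alternating signs, exactly as Chebyshev's criterion requires.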

**Postscript: More than one way to skin a cat …**

As I was researching for this article, I discovered a similar (but not identical) algorithm in the technical literature [Rey and Ward, 1987].

**Prototype code**

The prototype code in **Listing 1** is written in Octave (an open-source Matlab-compatible language). Caveat: No serious attempt has been made to optimize the code either for reduced code size or faster run-time.

**Listing 1:**


Parts 2 and 3 of this article will be online on Embedded.com in the near future.

**Chris Thron** is currently assistant professor of mathematics at Texas A&M University Central Texas, and does consulting in algorithm design, numerical analysis, system analysis and simulation, and statistical analysis. Previously Chris was with Freescale Semiconductor, doing R&D in cellular baseband processing, amplifier predistortion, internet security, and semiconductor device performance. He has six U.S. patents granted plus three published applications. His web page is www.tarleton.edu/faculty/ thron/, and he can be reached at .

**Endnotes:**

1. The convex hull of a set S is the smallest convex set containing S. Alternatively, it is the intersection of all half-planes that contain S.

**References:**

*boost C++ libraries resources for minimax polynomial fitting:*

• Minimax Approximations and the Remez Algorithm. Available from www.boost.org/doc/libs/1_36_0/libs/math/doc/sf_and_dist/html/math_toolkit/toolkit/internals2/minimax.html; accessed 18 March 2010. (describes code)

• The Remez Method. Available from www.boost.org/doc/libs/1_36_0/libs/math/doc/sf_and_dist/html/math_toolkit/backgrounders/remez.html; accessed 18 March 2010. (describes Remez algorithm)

*Mathworks resources for minimax polynomial fitting:*

• Tawfik, Sherif. Remez Algorithm. Available for download at www.mathworks.com/matlabcentral/fileexchange/8094; accessed 18 March 2010. (includes Matlab code and .pdf documentation)

*NAG library resources for minimax polynomial fitting:*

• NAG Fortran Library Routine Document, E02ACF. Available from www.nag.co.uk/numeric/Fl/manual/pdf/E02/e02acf.pdf; accessed 18 March 2010.

• NAG Toolbox for Matlab e02ac. Available from www.nag.com/numeric/MB/manual_22_1/pdf/E02/e02ac.pdf; accessed 18 March 2010.

*Netlib library resources for minimax polynomial fitting:*

• ACM Algorithm #414, “Chebyshev Approximation of Continuous Functions by a Chebyshev System of Functions.” Available for download from http://www.netlib.no/netlib/toms/414; accessed 18 March 2010.

• Golub, G.H., and Smith, L.H. Chebyshev Approximation of Continuous Functions by a Chebyshev System of Functions. Available from ftp://reports.stanford.edu/pub/cstr/reports/cs/tr/67/72/CS-TR-67-72.pdf. (This report documents the Netlib Algol code)

• Rey, C., and Ward, R. “On Determining the On-Line Minimax Linear Fit to a Discrete Point Set in the Plane.” *Information Processing Letters 24* (1987), 97-101.