It’s Not Over

The fat lady hasn't sung yet, but she's warming up backstage. This month, Jack unveils part of his long-awaited minimizer.

It's ironic, don't you think, how art imitates life? When I began last month's column, I remarked that, amazingly enough, we didn't know for sure who was going to be our 43rd President. It took forever for that mess to sort itself out.

This column is sort of like that. When I wrote last month's column, I was sure that I'd have the code completed in time for this one. I had my inauguration speech all written and my best suit was already pressed. I promised a timely finality to my long-suffering editor.

Well, I came close, very, very close, but it's not quite over. I still have a few dangling chads, a few swinging doors, a few dimples, even a few cases demanding recounts. I am looking at the code cross-eyed, with magnifying glass in hand.

The good news is that the code I have in place is there to stay (I think). Two months ago, I convinced myself that a combination of a state machine architecture and lots of modularity would be the way to go. That decision proved to be right. I'm pleased with the new structure, and don't intend to change it anymore. It's just a matter of supplying all the missing chads.

This month, I'm going to lead you through the development process for the Ultimate Minimizer, Mark III. Fortunately or unfortunately, depending on your point of view, I won't be able to complete the process this month; there's simply too much code to explain.

I wouldn't say the new minimizer is complex. It isn't, but it has a lot of parts, each of which serves one specific and easily identified purpose. Those parts cannot be described, either in code or in words, in the space available to me here.

Understand, this month's code is not optimized. I never expected it to be. As I made clear over the last couple of months, I realized that the pressure to keep the coding tight is the main reason functions such as this, which are intended to be used over and over inside of larger for-loops, are so inscrutable. I deliberately sacrificed a bit (but only a bit) of efficiency for the sake of intelligibility. Some day, in the far distant future perhaps, I'll come back and tighten the code to the usual, inscrutable form. For now, I just want to have a routine I can use in earnest.

Developing the code

In the remainder of this column, I'll be walking you through the process that led to the function in its final form. At each step, I'll give you code that works, in the sense that it compiles and runs without crashing. As we progress with the development, it will solve the minimization problem with varying degrees of success, presumably improving as we go.

We'll begin with the notion of a state machine, which I first introduced in the December 2000 issue (“Oops, I Did it Again,” p. 15). That code is shown in skeleton form in Listing 1. The general structure is that of a counted loop in which the function bounces from state to state until it finds itself in a state that leads to the exit. Think of a pinball machine, and you won't be far off the mark. If all else fails, the function exits when the loop counter reaches zero.
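Since Listing 1 isn't reproduced here, a minimal sketch of the idea might look something like this. The state names and the work done in each state are placeholders of my own, not the ones in the listing; the point is the counted loop that bounces from state to state until it reaches an exit state or runs out of counts:

```cpp
#include <cassert>

// Placeholder state names; the real ones are in Listing 1.
enum State { INIT, STEPPING, ITERATING, DONE };

// A counted loop that bounces between states until it reaches DONE
// or the counter runs out. Each iteration performs one state's work.
int run_machine(long max_iters) {
    State state = INIT;
    long count = max_iters;
    int passes = 0;
    while (count-- > 0 && state != DONE) {
        ++passes;
        switch (state) {
        case INIT:      state = STEPPING;  break;   // set things up
        case STEPPING:  state = ITERATING; break;   // search for a triplet
        case ITERATING: state = DONE;      break;   // converge on the minimum
        default:        state = DONE;      break;
        }
    }
    return passes;
}
```

Note that the loop counter does double duty: it's the pinball machine's tilt sensor, guaranteeing an exit even if the states never settle.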

I played around with several variations on the general theme, before arriving at the structure in Listing 1. Sharp-eyed readers may recall that the original code had an infinite loop, and the exit state was simply one more state. This time, I decided to go with a loop counter instead. A counter is needed anyway, to avoid infinite loops, so we might as well put it to work.

Some of the features of Listing 1 (for example, multiple exits, messing with the loop counter) may offend purists. If that includes you, feel free to adjust it to your liking, but the code will almost certainly work properly with “any decent compiler.”

Sharp-eyed readers will also see that the function now has only two x-value arguments, not three. That's because I decided to collapse two functions into one here, a choice that requires a bit of explanation. That decision is the reason the code isn't finished.

Recall that all of our minimization algorithms require that we have three points (a triplet) satisfying the condition:

f(x0) > f(x1) < f(x2)

In general, however, we begin not with three points, but with two: the two x-values representing the range to search for the minimum. The first step in the process is to step through this range with some step size deltax, until we find three adjacent points that satisfy this condition. It's only then that the real iteration begins.
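In code, the triplet condition is nothing more than a pair of comparisons. This little predicate is a sketch of my own, not taken from the listings, but it captures the test exactly:

```cpp
#include <cassert>

// The triplet condition: the middle point must be strictly lower
// than both of its neighbors, so a minimum is bracketed between them.
bool is_triplet(double y0, double y1, double y2) {
    return (y0 > y1) && (y1 < y2);
}
```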

Recently, we've been using two separate routines: one to find the triplet and one to use it. In the December column, I gave two functions. One of them, called minimize(), performed the stepping operation. Once it found the needed triplet, it called the iterating routine, findmin(), to track down the minimum. Now that I'm using a state machine, I think it's best to combine these two functions. A look backward to the December issue will reveal that the structure of minimize() was a state machine of sorts anyway, one with three distinct states: searching until the function decreases, searching further until it increases again, and finally, iterating on the located triplet. I see no reason not to make that structure more explicit. Among other benefits, this avoids the need to pass a lot of parameters through calling lists; the points can remain in place (actually, I'll be using global values anyhow, for now).

As a matter of fact, this seems as good a time as any to introduce you to the concept of nested state machines. I'll use a state machine for the step-search process, and have it return a state for the larger, parent state machine. As I mentioned in December, this is a structure I've used with great success in real-time programs in the past, so we are not as off the subject of embedded systems as one might think. One of the nice features of state machines is that you can set them up to perform one, and only one, pass per cycle of some real-time task. That way, the time required for the task is not too long, and fairly well fixed.
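One possible shape for such an inner machine, with hypothetical state names and a deliberately simplified memory of "has the function decreased yet," might look like this. It does one unit of work per call, and its return value tells the parent machine where to go next:

```cpp
#include <cassert>

// Hypothetical state names; the real ones come from the listings.
enum State { STEPPING, HAVE_TRIPLET, MONOTONIC };

// Inner machine for the step search: one pass per call. It remembers
// whether the function has decreased yet, and returns the state the
// parent machine should enter next.
State step_once(double y_prev, double y_curr, bool at_right_end) {
    static bool seen_decrease = false;
    if (y_curr < y_prev) {
        seen_decrease = true;                   // still falling; keep going
        return at_right_end ? MONOTONIC : STEPPING;
    }
    if (seen_decrease)
        return HAVE_TRIPLET;                    // fell, now rising: bracketed
    return at_right_end ? MONOTONIC : STEPPING; // never fell at all
}
```

Because the inner machine hands back a state rather than a final answer, the parent can interleave its calls with other work, one pass per real-time cycle.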

Implementing this new structure, however, required significant changes to the code I had shown in December.

One must remember that we have no guarantee that the stepping process will find a minimum at all. If it does not, this could mean two things. Either our choice of step size was too coarse, or the function has no minimum within the range. Instead, it could be increasing or decreasing monotonically (or staying constant, which is a degenerate case). If it's monotonic, our routine should return the left or the right limit of x, as appropriate. Finally, for reasons discussed in past issues, I intend to retain the last five adjacent data points, rather than three. I am not sure I can justify this decision, except to say that it makes me feel better. If I have five points, and they are arranged in a V-formation, such as:

f(x0) > f(x1) > f(x2) < f(x3) < f(x4)

I feel more comfortable that I have got the true minimum bracketed. I'll call this arrangement a quintet. It will come in handy when the input function is some kind of non-smooth function like the absolute value. With curves that look like parabolas in the end game, the two extra points don't help us much, but they do serve to validate the final convergence.
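The quintet test is the triplet test with one more comparison on each flank. Again, this is my own sketch of the condition, not code from the listings:

```cpp
#include <cassert>

// The quintet condition: five stored points in a V-formation,
// strictly descending into the middle and strictly rising out of it.
bool is_quintet(const double y[5]) {
    return y[0] > y[1] && y[1] > y[2] && y[2] < y[3] && y[3] < y[4];
}
```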

Adding meat to bones

We'll begin by setting up variables to manage the search and iteration process. Some of them will be familiar from previous columns. Some we won't be using for a while yet, but I know I'll be needing them later. These variables are shown in Listing 2.

Note that I've included a mechanism for testing both x- and y-values for convergence. We've discussed this point before. I'm convinced that the way to assure that the predicted minimum is close to the true one is to compare the predicted y-value with the actual one.

Also, I'm maintaining separate values for the best estimate of the minimum so far. This is to make the logic for monotonic functions easier. It's part of my new philosophy of not trying to keep variable count to a minimum. That can always come later.

Note also that I've changed the scalar values of x and y to five-element vectors. Since I'll be trying to maintain up to five points, this seems easier than moving the scalars around. The management of these vectors is a bit tricky, because we start out with only two of the five values. We cannot be sure that we will come out of the step process with five points; we might, for example, find a triplet on the first try. To keep the logic manageable, I've added two integers, one to keep track of how many valid points we have, and the other to serve as the index of the lowest point. To keep myself sane, I'll be moving data around more than we might like (as opposed to tricks like swapping left to right, or circular buffers, or any other such constructs), and these integers will help with the bookkeeping.
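To make the bookkeeping concrete, here is a sketch of how the two integers might manage the five-slot vectors. The names npoints and index come from the text (I've used index_min here to avoid a name collision); the rest is an assumption about how the real listing does it:

```cpp
#include <cassert>

const int NPTS = 5;
double x[NPTS], y[NPTS];
int npoints = 0;    // how many of the five slots hold valid data
int index_min = 0;  // index of the lowest point stored so far

// Add a new point: until all five slots fill, just append; after
// that, shift the data left and drop the oldest point. Deliberately
// unclever -- no circular buffers, no left-right swapping tricks.
void add_point(double xx, double yy) {
    if (npoints < NPTS) {
        x[npoints] = xx;
        y[npoints] = yy;
        ++npoints;
    } else {
        for (int i = 0; i < NPTS - 1; ++i) {
            x[i] = x[i + 1];
            y[i] = y[i + 1];
        }
        x[NPTS - 1] = xx;
        y[NPTS - 1] = yy;
    }
    // Keep index_min pointing at the smallest stored y-value.
    index_min = 0;
    for (int i = 1; i < npoints; ++i)
        if (y[i] < y[index_min]) index_min = i;
}
```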

We'll also need an initialization procedure. That is shown in Listing 3. It sets up the step parameters, the error criteria, and the initial step process. To allow for an initial rise in f(x), we always start with the assumption that it might increase for a time before decreasing again, even if y1 < y0. Note that init() returns the next state value, even though it's always the same. This is the model we should follow for other called functions. They know, better than the driver function minimize(), which state we should be going to next.

In the function init(), I've set relative error criteria that reflect the total range of x and y, over the interval given. The range in x is, of course, fixed by x0 and x1. The range in y, though, can vary as we evaluate new points, so its computation is dynamic. To deal with that, I've written a function set_y_range(), which is also shown in Listing 3.
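The dynamic y-range might be tracked with something like the following. This is a sketch of the idea rather than the set_y_range() from Listing 3; the variable names are my own guesses:

```cpp
#include <cassert>

double ymin = 0.0, ymax = 0.0, y_range = 0.0;
bool first_y = true;

// Widen the recorded y-range every time a new point is evaluated,
// so a relative error criterion on y stays meaningful.
void set_y_range(double yy) {
    if (first_y) {
        ymin = ymax = yy;
        first_y = false;
    }
    if (yy < ymin) ymin = yy;
    if (yy > ymax) ymax = yy;
    y_range = ymax - ymin;
}
```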

The calling list for function minimize() now looks more familiar:

double minimize(double (*f)(double), double X0, double X1, long steps, double eps)

Note carefully the subtle change in the names; I'm using upper case for the input values, and lower case versions in global storage. A few extra lines in init() copy the data. Of course, I realize these are unnecessary copies, but they only happen once, and they relieve us of the need to pass more parameters.
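The copy step might look like the fragment below. The scaling of the convergence criterion by the x-range is my assumption, following the relative-error discussion above; the real details are in Listing 3:

```cpp
#include <cassert>
#include <cmath>

// Globals (lowercase), mirroring the upper-case arguments.
double x0, x1, xeps;
long nsteps;

// One-time copies made in init(); thereafter the state handlers
// read the globals instead of passing parameters around.
void copy_args(double X0, double X1, long steps, double eps) {
    x0 = X0;
    x1 = X1;
    nsteps = steps;
    xeps = eps * (X1 - X0);   // relative criterion, scaled by the x-range
}
```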

The content of state INIT changes only in a single line:

case INIT: state = init(f, X0, X1, steps, eps); break;

Step up to the plate

Now it's time to set up the step-search engine. Perhaps we should pause to remind ourselves why we even bother to step along the function at all. When we first began working with this minimization problem, we simply generated a third point by bisecting the original interval. That approach works fine if the bisected point does, in fact, lead to a y-value smaller than the bounding points. Even if the value is only a tiny bit smaller than either y0 or y1, we have a triplet of sorts, and should be able to dive right into quadratic interpolation.

But what if it doesn't? What if, for example, the bisected value is larger than either y0 or y1? In that case, we have no idea whether to look for the minimum to the left or right of it. Even worse, what if it's, say, halfway between the two bounding y-values? Then the function could be monotonically increasing or decreasing. Or it could have a minimum in either half-interval. We can't really tell which is the case without exploring both sub-intervals.

Questions like these led me to conclude that we really need to find a triplet before we can get into the meat of the iterative process. One of the key rules of the minimization is that we start the iteration with a triplet, and are very careful to hang onto one as the iteration progresses. The easiest and most reliable, though perhaps not the fastest, way to get into the triplet configuration is to step along the function until we find that triplet. In words of few syllables, initial bisection works fine if the function is well behaved. The stepping process is there in case it isn't.

You may recall that we also addressed whether we should allow for the input function increasing or remaining constant for a time before we get to a final decrease. We make no guarantees for cases where multiple minima exist in the region of interest, so if the function increases at the beginning, we are within our rights to declare the left-hand point, x0, as the minimum. However, this seems a little unfriendly. Some useful cases occur in practice, and it would be nice if our minimizer allowed for an initial increase. This complicates the step-search procedure more than a little, but I decided it was worth the extra complication.

Function step() takes care of all this, stepping along in x, while looking for the value of y to decrease for the first time. Note that this value may be lower than y1, in which case we have a triplet. If it is, there is nothing (except some awkward bookkeeping) to keep us from going right into a quadratic search. Otherwise, we must continue stepping until a triplet is found or we reach the end of the range. If we reach the end without finding one, the function must be monotonic, and some extra logic is required to deal with that case. Here we must bypass the quadratic search, and a nested state machine is the perfect way to see that we do.
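To see the logic all in one place, here's a condensed, loop-form sketch of what step() accomplishes. The real version is a state machine doing one step per pass, and the names and the sample test function below are mine, not the column's:

```cpp
#include <cassert>
#include <cmath>

enum Result { TRIPLET, MONO_INC, MONO_DEC };

// Walk from a to b in n equal steps. Either bracket a minimum
// (the function falls, then rises) or decide that the function
// is monotonic over the range and report the appropriate endpoint.
Result classify(double (*f)(double), double a, double b, int n, double* xmin) {
    double dx = (b - a) / n;
    double y_prev = f(a);
    bool decreased = false;
    for (int i = 1; i <= n; ++i) {
        double xx = a + i * dx;
        double yy = f(xx);
        if (yy < y_prev) {
            decreased = true;          // still descending
        } else if (decreased) {
            *xmin = xx - dx;           // fell, then rose: middle of triplet
            return TRIPLET;
        }
        y_prev = yy;
    }
    *xmin = decreased ? b : a;         // never turned around: monotonic
    return decreased ? MONO_DEC : MONO_INC;
}

// A sample test function (not one of the author's six): minimum at x = 1.
double parabola(double x) { return (x - 1.0) * (x - 1.0); }
```

Note that a constant function never trips the "decreased" flag, so the degenerate case falls out as monotonically increasing and returns the left endpoint.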

The driving state machine for the step search is shown in Listing 4. As you can see, it can have several return states, depending on what we find while stepping.

The big complication in this function comes from my desire to maintain five data points. Because of this, we treat the first few steps differently. If I really wanted to use the state machine concept to its fullest extent, I could have set up separate states for each of the first three steps, but that seemed excessive. Instead, I used the variable npoints to keep track of how many points we have at any given time. Until npoints reaches 5, each new point gets added to the vector array. Afterwards, the data gets shifted in. If we find a triplet, the variable index should be left pointing to the middle point.

Remember, the triplet condition is found when the function decreases for a time, and then starts increasing again. Because function step() is supposed to stop stepping when this happens, if we find a triplet at all, the minimum point should be at x[3]. Please note that the input value, X1, does not appear in the array. If it did, we could find a triplet much sooner, but it would be bogus, like the value obtained from an initial bisection. We're only going to allow triplets found during the stepping process, not those formed by the initial right-hand endpoint. That endpoint will get its turn when and if we reach it by stepping.

As often happens, the bugs are found in the edge conditions. What happens when a minimum is found, thus forming a triplet, on the very last step before we hit the edge? This condition occurs, in fact, for three of the six test functions I've defined. If you try it, you'll discover that the function gets it right. It also properly handles monotonically increasing and decreasing functions.

The stepping function requires one new, small function, test_min(), which compares the current point, given by xx and yy, to the previously recorded minimum. This function returns a boolean result. It's also shown in Listing 4.
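A function of that description could be as simple as the sketch below. The variable names for the stored best point are my guesses; the listing may differ:

```cpp
#include <cassert>

// Best estimate of the minimum seen so far.
double x_best = 0.0;
double y_best = 1.0e300;   // "infinity" until the first real point arrives

// Compare the current point (xx, yy) to the recorded minimum;
// record it and report true if it's a new best.
bool test_min(double xx, double yy) {
    if (yy < y_best) {
        x_best = xx;
        y_best = yy;
        return true;
    }
    return false;
}
```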

Naturally, the addition of the step search algorithm requires a few minor changes to the outer state machine. Among other items, I've added a state called HAVE_TRIPLET to function minimize() . I'm sure you can guess what that's for.

To make sure no mistakes are made in version control, my complete test file, including #include statements and the test driver program, is available online. This code also incorporates the six test functions I've defined, as well as debug print statements sprinkled through the code. Because this code comes straight from my test program, it danged well better work. I've tested it on all six test functions.

Please bear in mind that the program includes two files from my library, constant.h and jmath.h, and their corresponding C++ files, taken from my book. They are straightforward, but necessary, because they include overloading of the functions abs, min, and max, as well as definitions of TRUE and FALSE. To try the program, you don't really need the entire files, only the relevant lines; I've posted those as well. That's all we have time and space for this month.

What's that you say? The function doesn't find a minimum? Sure it does. It just doesn't find it very accurately. To get more accuracy, simply set steps to about 1,000 or so. That takes us all the way back to the step search with which we started this odyssey, those many months ago. That will have to hold you until we get the iterative steps back in next month. See you then.

Jack W. Crenshaw is a senior principal design engineer at Alliant Tech Systems. Much of his early work was in the space program, and he has developed many analysis and real-time programs. He holds a PhD in physics from Auburn University. Jack enjoys contact with readers and can be reached via e-mail.

