On Getting It - Embedded.com



Being digital, embedded systems “think” in discrete time, but they interact with a continuous-time world. Jack's got the key to bridging that gap.

Tell a joke at a party. Some folks laugh, some don't. Let a preacher preach a stirring sermon. Some are moved, some aren't. Listen to a beautiful concert, a stirring anthem, a gripping opera. Some respond. Others don't.

What's the difference? Assuming that you're not just a bad storyteller, some folks get it; some don't.

A few years back, I used to give a tutorial called Math for Programmers Who Hate Math at the Embedded Systems Conference. I covered essentially all of applied math, from arithmetic to matrix calculus. Some liked the paper, many didn't. On those evaluation sheets, many attendees wrote, “Not applicable to my work.” Some wrote, “Too much theory, not enough practical application.”

So I completely revamped the paper. I decided to gear the entire tutorial to a specific and concrete application: simulation of dynamic systems. I covered the physics, the math, numerical analysis, even 3D graphics, all with an eye towards this one application. I covered much the same material as before, but always with a specific use in mind. In the end, I gave every single algorithm one might need to implement, say, a flight simulator.

Some folks loved the talk and some didn't. One attendee heard two others say, on their way out, “This guy told us how to do everything we've been trying to do.” Others wrote on their evaluation sheets, “Too much theory, not enough practical application.” Practicality, it seems, is in the eye of the beholder.

Getting it on time

Sometimes, getting it depends on timing. In college, I was taught how to solve problems in dynamics. Dynamic systems obey Newton's laws of motion, and these laws ultimately lead to a set of differential equations we call the equations of motion. Solve the equations of motion, either by analytical or numerical means, and you've solved the problem. But first you have to write down those pesky equations.

As an undergraduate, I was taught a primitive method for doing this called free-body diagrams. In grad school, I was exposed to more advanced methods: Hamiltonian and Lagrangian mechanics. Don't worry, you don't need to know what these are. You only need to know that they lead, seemingly by roundabout and mysterious ways, to the same equations of motion as one gets using free-body diagrams.

So what's the difference? At the time, I couldn't see any, except that the advanced methods seemed to me like going around the Moon to get to the next town. Free-body diagrams were fine with me.

I remember thinking, “Interesting concepts in theory, but not enough practical application.”

Then I went out and worked in the real world, simulating dynamic systems for the space program. I did it the straightforward way, using free-body diagrams. Often I got the equations wrong, because free-body diagrams, while straightforward, are also easy to muck up. Forget to include one term, and everything turns to dross.

Four years later, I went back to school, and got re-exposed to Lagrangian and Hamiltonian mechanics. My reaction this time was, “Wow! I can really use that!” In short, I finally got it.

What had happened to change my reaction? Certainly the methods had not changed; they were over 100 years old. It was my understanding that had changed. Having done things the straightforward (and error-prone) way and, on occasion, screwed up, I suddenly appreciated what Lagrange and Hamilton were about. I realized the advanced methods weren't just interesting theory; they were not only about getting the equations of motion, but getting them right. Roundabout? Yes. But also turn-the-crank, follow-the-rules solutions. No-brainers. Write down certain obvious things about the system (like its kinetic energy), apply the methods, follow the rules, and out come the equations of motion. The correct ones, guaranteed. Sure, you have to be able to do math, algebra, and calculus, but you don't really have to understand the details of the system. The methods ensure that the equations are 100% correct.
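
The turn-the-crank flavor of the method is easy to see on the simplest system around. Here's a minimal sketch, my own example rather than anything from the column, that feeds the kinetic and potential energy of a mass-spring system through the Euler-Lagrange recipe using Python's sympy; the symbols m and k are illustrative choices:

```python
# A minimal sketch of the "turn-the-crank" recipe, using sympy.
# The mass-spring system and the symbols m, k are my own illustrative choices.
import sympy as sp

t, m, k = sp.symbols("t m k", positive=True)
x = sp.Function("x")(t)              # the coordinate of the system
v = sp.diff(x, t)                    # its velocity

# Step 1: write down the obvious things about the system.
T = m * v**2 / 2                     # kinetic energy
V = k * x**2 / 2                     # potential energy
L = T - V                            # the Lagrangian

# Step 2: follow the rules -- the Euler-Lagrange equation:
#   d/dt (dL/dv) - dL/dx = 0
eom = sp.diff(sp.diff(L, v), t) - sp.diff(L, x)

print(sp.simplify(eom))              # m*x'' + k*x: the correct equation, guaranteed
```

No understanding of springs required: write the energies, follow the rules, and the correct equation of motion falls out.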

Is that useful? Is it practical? You bet. A few years later, I was asked to simulate about the most complicated dynamic system imaginable: a tank firing a missile from a tube launcher, complete with the tank's suspension system, the other missiles rattling in their tubes, even the shaking ground. The equations of motion had already been derived at the U.S. Army Missile Command. However, using Lagrangian mechanics, I was able to show they got them wrong. Oh, they argued and defended their derivation for a time. But Lagrangian mechanics requires that one of the system matrices be symmetric. Theirs wasn't. End of debate.

Ever since, I've been a vociferous advocate of advanced methods for whatever problem ails you. I got it. Now, I spend my time both using it and sharing it.

Looking back, I think the mistake I made with my lecture was in trying to cover too much ground. In trying to cover so many neat ideas in one talk, I managed to obscure the value of any of them. Most attendees didn't get it, and that was my fault.

I'm getting a similar reaction to my book, Math Toolkit for Real-Time Programming (shameless plug). I immodestly like to think its pearls of wisdom are numerous. Having more room than I do in a typical column, I tried to explain each idea in even greater detail than I do here. Even so, I think many readers miss some of the more useful pearls. They don't get it.

On wanting it

I'm therefore changing my approach. From now on, I promise (to paraphrase the old Heathkit motto):

I will not let you fail to get it.

Understand, though, that I need some help. You can't get it if you don't want it, and not all folks do. Some want to have it done for them. Some want canned, compilable software that they need only include in their project, without even trying to understand what it does. You won't find that here. I can explain “it” to you, but you have to do your own thinking. You have to want to understand. If you don't understand, I'll explain again, but you have to at least try. Deal?

The Rosetta Stone

I can't think of a better place to begin than my favorite equation:

z = e^(hD)     (Equation 1)

which I like to call the Rosetta Stone of computer math. To me, it's just as important and powerful as Einstein's famous equation, E = mc^2. It's in my book. It's also in the infamous conference paper. But most folks don't seem to get it. This month, I'm going to try again.

We live in a continuous world (unless you want to get down to the level of quantum mechanics). Dynamical systems in the continuous world behave, um, continuously, with motions described by differential equations of motion.

While the systems that we model, analyze, and control are fundamentally continuous, the computers we use to work with them deal only with discrete events and discrete measurement times. The temperature of a chemical vat, for example, is surely a continuous function of time. When working with digital computers, we never get to see the continuous function. The best we can do is to sample the function at specific times and generate a table of its values. From such a table, we can infer the nature of the function and, usually, approximate its values between measurements. However, we never really know what the true function is; we can only infer it from a discrete set of measurements.
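
To make that concrete, here's a small sketch; the signal, the step size, and the linear-interpolation scheme are my own illustrative choices. We sample a continuous function into a table, then try to recover a value between measurements:

```python
# Sketch: the computer's-eye view of a continuous signal. The signal (sin),
# the step size h, and linear interpolation are my own illustrative choices.
import math

h = 0.1                                        # sample interval
table = [math.sin(n * h) for n in range(11)]   # all we ever get to keep

def between(t):
    """Infer a value between measurements by linear interpolation."""
    n = int(t / h)
    frac = t / h - n
    return table[n] + frac * (table[n + 1] - table[n])

# The inferred value is close to the true function, but never exact:
err = abs(between(0.55) - math.sin(0.55))
print(err)
```

The error is small, but it never vanishes: the table lets us infer the function, not know it.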

This, then, is the dilemma of controlling real systems using digital computers. The real systems live in a universe where time is continuous and things change smoothly over time. The computers, on the other hand, live in a universe of clock ticks, A/D and D/A converters, and sample-and-hold measurements. This is the world of discrete time. The challenge of numerical analysis is to take discrete samples of the real world, analyze them, and predict and control them in such a way that the resulting behavior seems, to both the real system and any observers who happen to be watching, to operate according to continuous-time rules. So, is controlling a continuous system with a digital computer a practical thing to do? Yes, of course it is.

Equation 1 is the connecting link between the two worlds, and that's why it's so important. To derive it, let's begin with the power series expression for the exponential function:

e^x = 1 + x + x^2/2! + x^3/3! + … + x^n/n! + …     (Equation 2)

No, really. Don't skip over the equation. Back up and look at it. It's an infinite series and is guaranteed to converge for all x. I'm sure you can see the pattern. If not, the nth term should give you a clue. For those of you not familiar with the notation, the ! character stands for the factorial function:

n! = 1 · 2 · 3 · … · n
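If you'd rather see the convergence than take my word for it, here's a quick sketch; the 20-term cutoff is my own choice, and more terms only sharpen the agreement:

```python
# Quick check that the series in Equation 2 converges to the exponential.
# The 20-term cutoff is my own choice.
import math

def exp_series(x, terms=20):
    total, term = 0.0, 1.0        # term starts as x**0 / 0! = 1
    for n in range(1, terms + 1):
        total += term
        term *= x / n             # next term: multiply by x, divide by n
    return total

print(exp_series(1.0), math.exp(1.0))   # the two agree closely
```
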
The Taylor series is another infinite series, this time for extrapolating some function f(t). Taylor's notion is simple enough in concept, though its form tends to make your eyes glaze over. Taylor conjectured that if we know enough about the function at some point τ, we can predict what its value will be at some other point, τ + Δt. The things we need to know are the values, not only of the function itself, but all its derivatives:

f(t), df/dt, d^2f/dt^2, d^3f/dt^3, …

all evaluated at t = τ.

You have to understand calculus to get this particular point. Conceptually, we can get the derivatives by analytically differentiating f(t). Typically, each derivative is messier than the previous one. But you don't need to be able to compute the derivatives; just accept that they exist and have definite values at t = τ.

Given those explanations, I can now show you the Taylor series:

f(τ + Δt) = f(τ) + Δt df/dt + (Δt^2/2!) d^2f/dt^2 + (Δt^3/3!) d^3f/dt^3 + …     (Equation 3)
It's messy, but be patient; we're going to fix that soon. Just remember, for now, that it's assumed that all the derivatives exist and are evaluated at t = τ. That's important.
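
Here's Equation 3 at work, as a small sketch. Using f = sin is my own example; its derivatives cycle through sin, cos, -sin, -cos, so they're easy to list at the point τ:

```python
# Sketch of Equation 3: knowing f and all its derivatives at one point tau
# predicts f at tau + dt. The function sin and the values of tau and dt
# are my own illustrative choices.
import math

tau, dt = 0.3, 0.2
deriv = [math.sin, math.cos,
         lambda u: -math.sin(u), lambda u: -math.cos(u)]

pred = sum(deriv[n % 4](tau) * dt**n / math.factorial(n)
           for n in range(12))           # first 12 terms of the series

print(pred, math.sin(tau + dt))          # prediction vs. true value
```

Twelve terms already pin the extrapolated value down far beyond any practical precision.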

Now compare Equation 2 and Equation 3. Notice the similar structure. We can make that similarity more evident with some changes in notation. These changes do not alter the equation in the slightest; they only change the way we write it down. The first change is easy. Replace Δt by h (a conventional symbol for step size).

The second change is more profound. Let the symbol D stand for the derivative:

D = d/dt

Note that D isn't a derivative, not yet. It's an operator, waiting for some function to operate on.

Continuing, let:

D^n = d^n/dt^n

This notation makes perfect sense if we recall that:

D(Df) = D(df/dt) = d^2f/dt^2 = D^2 f

Making just these two changes in notation, Equation 3 now becomes:

f(τ + h) = f(τ) + hD f(τ) + (h^2 D^2/2!) f(τ) + (h^3 D^3/3!) f(τ) + …

Now I'm going to do something that seems completely crazy at first, but makes a certain perverse sense. I'll blithely factor out the f(τ) on the right-hand side, treating the symbol D as though it were any other algebraic symbol. This leaves us with:

f(τ + h) = [1 + hD + (hD)^2/2! + (hD)^3/3! + …] f(τ)

I'm sure you can guess what's coming next. The power series above is the same as in the exponential function in Equation 2. Having gone this far, I can simply substitute the form:

f(τ + h) = e^(hD) f(τ)     (Equation 4)

In a sense, I haven't really changed the Taylor series one iota. Equation 4 means exactly the same thing as Equation 3. The only thing that's changed is the notation. To evaluate the series, assuming I wanted to, I would still have to expand the exponential back into its power series, then apply each D operator to f(τ).

However, in another sense, I've changed everything, because I've suggested that the operator D can be manipulated just like any other symbolic variable, without regard for whether or not the manipulation makes sense.

I've made this leap of faith without much to back it up. This approach was first introduced by Oliver Heaviside, sometime around 1880. He called it operational calculus. It was not well received by mathematicians, who blanched at the notion of assuming that one could toss around derivatives so casually, with no formal proof of correctness. Many of the papers Heaviside submitted to scientific journals were rejected somewhat rudely. For decades the scientific community was bitterly divided between those who rejected Heaviside's operational calculus on the grounds that it had no theoretical basis and those who blithely used it to get practical results. Eventually, however, mathematical theory caught up with engineering practicality, and the operational calculus was given a mathematically rigorous basis. Trust me. It works.

The leap to discreteness

We're one step away from Equation 1. To get there, we must leave the continuous world and move to the discrete world of computers. Imagine that we sample the function f(t) at discrete points t_0, t_1, t_2, …, t_n, and so on. Assume further that the t's are evenly spaced, with step size h.

Let's record the values in a table, and call them x_0, x_1, and so on. Thus, x_n = f(t_n). For simplicity, we might as well assume that t = 0 when n = 0. In that case:

t_n = nh

That allows us to write Equation 4 as:

f(t_n + h) = e^(hD) f(t_n)

or, even more concisely:

x_{n+1} = e^(hD) x_n     (Equation 5)

Now I'm going to introduce one more operator, called z, which advances the location in the table by one time step. That is:

z x_n = x_{n+1}     (Equation 6)

If you compare Equation 5 and Equation 6, you have to conclude that:

z x_n = e^(hD) x_n     (Equation 7)

Since we haven't said anything about the nature of f(t) or x_n, Equation 7 must hold regardless of that nature, and we can factor out x_n to get Equation 1.
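
You can watch Equation 1 hold exactly for a function whose derivatives we know in closed form. For f(t) = e^(at), the operator D is just multiplication by a, so e^(hD) becomes multiplication by e^(ah); the values of a and h below are arbitrary choices of mine:

```python
# Numeric sketch of Equation 1. For f(t) = exp(a*t), D acts as multiplication
# by a, so e^(hD) acts as multiplication by e^(a*h). Advancing one slot in
# the table (the z operator) does exactly the same thing.
# The values a = 0.7 and h = 0.05 are arbitrary choices of mine.
import math

a, h = 0.7, 0.05
xs = [math.exp(a * n * h) for n in range(10)]   # the table: x_n = f(t_n)

z_side = xs[4]                       # z applied to x_3: advance one step
d_side = math.exp(a * h) * xs[3]     # e^(hD) applied to x_3

print(abs(z_side - d_side))          # zero, up to floating-point rounding
```
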

I can't emphasize the importance of Equation 1 enough. It provides a link between the discrete-time world of computers, as embodied in the operator z, and the continuous-time real world, embodied in the operator D. That's why I call it the Rosetta Stone.

Get it?

What's that you say? You still don't get it? Don't worry, you will. I promise.

I've given you too much theory, not enough practical application? Well, let's see. Does your application involve a digital computer? If so, you're automatically in the world of z. Does the application involve interacting with the real universe? Does it, for example, read data from an external sensor, or send data out to an actuator? If so, that software is in the world of D.

In that case, believe me. You really, really need Equation 1. It's the bridge, the only bridge, connecting the two worlds. You just haven't been convinced of its value yet, because you haven't seen how it develops from a simple but esoteric formula into practical equations and, in the end, working software.

Currently, you're at the same place I once was with Lagrangian mechanics. You haven't seen, yet, how to get from the defining equation to practical results. But you will.

Jack W. Crenshaw is a senior software engineer at Spectrum-Astro. He is the author of Math Toolkit for Real-Time Programming, from CMP Books. He holds a PhD in physics from Auburn University. Jack enjoys contact and can be e-mailed at .
