Classic Crenshaw: All about vectors
Understanding the intimate relationship between vectors and reality means understanding vectors, their properties, and how to manipulate them.
Sharp-eyed readers may have noticed that I've been AWOL in these pages for several months. I've already given my mea culpas so I won't repeat them here, except to reaffirm just how much I hate Microsoft in general, Windows 7 in particular, and Office 2010 in summa particular. Lately it seems I spend most of my time just getting my computer to behave tonight, the way it did this morning.
The AWOL-ness is particularly frustrating because I was interrupted in the middle of a series of columns on state estimation theory, building up to the justifiably famous Kalman Filter. I'll get back to that series soon, I promise. But for this month I'd like to make a new start. Think of it as me dipping my metaphorical toe in the shallow waters before wading out into the deep stuff again.
I've been writing the Programmer's Toolbox column for over 20 years now. Each month, it gets harder to say what I have to say without repeating myself. I like to think I know a little bit about a lot of things, but there aren't many of them that I haven't already told you about, somewhere down the line.
On the other hand, the passage of those 20+ years means that a whole new generation of readers has come on line – readers who haven't seen the old stuff, but could benefit from it. To enlighten them, I've proposed to my editors a new sub-column called something like Crenshaw Classics. In it, I'm hoping to polish up, refresh, and update some of my more popular columns of yesteryear.
Understand, I'm not proposing to just present Xerox copies of the old columns. I'd write them from scratch, using new and more modern tools and technologies. But I'd still base them on the more popular topics of the past. I don't intend to limit myself just to past columns, either. It's my intent to drag in stuff from old conference papers, my book (Math Toolkit for Real-Time Programming), and its two companion volumes that never got published.
This column represents the first in that series. If you like the idea, please don't be shy about letting us know.
What goes around . . .
As most of you know, the majority of my experience with embedded systems has come from the world of Aerospace, building flight software for spacecraft and missiles, and the simulation and analysis software needed to design and test the flight stuff. Those kinds of applications tend to be very heavy on math algorithms, including vector and matrix algebra and numerical calculus. That's the reason my columns have tended to emphasize such topics.
But time moves on, and lately it seems that the term "embedded systems" has come to mean smart phones, smart tablets, and game consoles. I don't mind telling you, there have been times when I felt that this particular parade had passed me by.
But time moves further on, and sometimes things come full circle. I'm sure you've heard that the flight computer in Apollo 11 had less computing power than your average $19.99 digital watch. In the same way, the navigation systems that we used to sell for six-figure prices have found their way into everything from model helicopters to Wii wands to game consoles to smart phones and tablets. Likewise, the super-accurate simulations that we used to develop for simulating missile trajectories have found their way into video games, complete with exquisite 3-d graphics that we would have killed for. Suddenly, the embedded world needs vectors and matrices again.
We've got your vector, Victor
-- Air traffic controller in Airplane

Long-term readers of Embedded Systems Design and Embedded.com know that I'm particularly enamored with the use of vector and matrix math in computer models--almost to the point of an obsession. Perhaps it's only a matter of the baby duck syndrome; my first Fortran II code was a library of vector subroutines. But I like to think my reason goes a little deeper than that. I like to use vectors because vector and matrix math makes life so much easier for us analysts and programmers. And I'm all for easy. The first time I wrote about vectors was around 1988; the last was in 2007, when I presented a C++ library. In that most recent series, I talked a lot about how to implement the vector and matrix operators, but I didn't say anything at all about why they make things easy.
When we're interfacing with the real world, we like to use vector and matrix math precisely because the universe uses them. Even if we'd never heard of vectors before, we'd soon find them to be essential, because they're written right into the laws of nature.
In this column, I'll be talking about vectors, their properties, and how to manipulate them. But mostly, my focus will be on that intimate relationship between vectors and reality.
Draw me a map
Let's begin at the beginning. Imagine you live in a particularly boring city like Manhattan, where the streets are laid out mostly in a grid pattern, as in Figure 1. Streets run east and west; avenues, north and south. (Well, OK, in Manhattan they don't actually align with compass points. Manhattan is actually askew, like many of its residents. Just work with me here.)
Now suppose I want to give you directions from your place to mine. I could just say, "I'm on the corner of 3rd Avenue and 8th Street," or even "3rd and 8th." (We won't even discuss the notion of "East 8th Street North"; it brings back too many painful memories of a blind date that never happened.) In the vernacular of computer science, the term "3rd and 8th" would be my absolute address. Anyone in the city could find me there. Extend that concept a little bit, and I could give you my GPS coordinates: latitude and longitude. Then anyone in the world could find me.
But there's another way to give you directions, and anyone who's ever used Mapquest or Tomtom knows it: I could give you my relative address, in the form of a set of navigation instructions. I could say, "Go eight blocks east, turn left, then go three blocks north." You could find me that way, even if all the street signs had been blown down. The path you need to follow is the red path in Figure 1.
Of course, it's not the only possible path. You don't need to follow my directions to the letter. If you decide to take the scenic route, or avoid that rough neighborhood in the Lower East Side, you might prefer to meander along the blue path. If your goal is to get to my place, it really doesn't matter which path you follow; only that you end up having gone those eight blocks to the East, three blocks to the North.
As the crow flies
No doubt, you've recognized Figure 1 as nothing more than a Cartesian coordinate system in disguise. I could have left out the cute pictures of blocks and streets, and just shown the coordinate system, as I have in Figure 2. What's more, I don't even need to show you a route. If I want to keep things bone simple, I only need to give you that relative address. You're free to figure out the route for yourself. In the map, that relative address is embodied in the red arrow. It's the way you'd go from your place to mine, if you could fly.
Scalars and vectors and tensors, oh my
In grade school, they taught me all about numbers. A number could take on some value, like 1, 2, 3/4, or 1.235. Later I learned about irrational numbers like π, and even later, imaginary and complex numbers. But still, in the end, just numbers.
Physicists see numbers a little differently. We use them to represent things in the real world; things whose values we can measure with a meter of some sort. To a physicist, a number can represent a voltage, pressure, temperature, mass, or weight (not the same thing!). What's more, in physics numbers tend to have units like volts, amperes, kilograms, kilometers, or kilogauss.
The mathematical entity that can be represented by a single number is called a scalar. It has a magnitude, and it may have units, but it's still just a single number.
It's pretty obvious that the red arrow in Figure 2 is not a scalar. I can't just give you its length (which happens to be √73 ≈ 8.5 blocks). You also need to know which direction to go; which heading, if you're Kareem Abdul-Jabbar. A thing that has both magnitude and direction is called a vector, and that's what's represented in Figure 2. By convention, we typically represent it as a line with an arrowhead on one end.
Physics is full of vectors. You can hardly walk through a physics class without tripping over one. The relative position of Figure 2 is an example. Other vectors include velocity and acceleration (which are the first and second time derivatives of position), force, or magnetic field strength.
Now, we physicists are a lazy lot. We like to write things in the most concise way possible, boiling the information down to its essence. We like to use single-letter names for variables, and compact representations instead of wordy ones. If I want to give you the directions from your house to mine in the most concise way possible, I only need to give you the two numbers, 8 and 3. As long as we agree which number represents eastward, and which northward, that's all you need to know. So I can give you my relative position as:

r = {8 blocks east, 3 blocks north}

Or even more concisely, (8, 3), or as a column vector with 8 on top and 3 beneath. All the forms are equivalent; we only need to agree as to the rules. In general, we can describe the vector as:

r = (x, y)
Note the use of boldface font for the name of the vector. That's the convention for vectors and matrices. We used to write things like , , or , but that's only because our pencils didn't have a boldface setting. Nowadays, boldface fonts are the way to go.
Where are you in this figure? Why, you're at the origin, (0, 0), of course. Your location, relative to yourself, is a vector of length zero. And what's the path from me to you? It's -r. To display -r, we only need to reverse the direction of the arrowhead.
As fundamental as all this seems, we've already defined one mathematical operation on a vector. It's the negation operator. If r = (x, y), then:

-r = (-x, -y)
In other words, we change the sign of both components.
Let's change the situation a bit. Suppose that, on your way to my place, you plan to stop by Bill's house to borrow his new video game. Now your route gets slightly more complicated. Now you need two vectors: One from your place to Bill's, another from there to my place. Figure 3 depicts your new route.
Here I've shown the two legs of your journey as r1 and r2. When the trip is over, the end result--the resultant--is our original red arrow.
It's pretty obvious that on this new route, you still have to traverse the same eight blocks east, and three north. In other words, the two x-components of r1 and r2 must add up to eight. Likewise for the y-components. Mathematically, we can write:

x1 + x2 = 8
y1 + y2 = 3

Or, in vector form:

r1 + r2 = r
Now we see the rule for adding vectors: you simply add each set of components.
Would it surprise you to learn that to subtract one vector, you subtract each set of components? Probably not.
As simple as all this stuff is, a mathematician should be happy because we've established a couple of fundamental properties:

r + (-r) = 0
a + b = b + a

(Did you catch the boldface font on the zero? It's still a vector, but it's a vector of zero length.)
Now that we know how to add and subtract vectors, there is nothing to keep us from adding more than two of them. At any given instant of time, my position vector is the sum of my position at this desk, plus the position of the desk in the room, the room in the house, the house on the lot, the lot in the city, etc., etc., all the way out to the center of the universe (wherever that is). All the usual rules of commutative operators apply, because all the operators end up operating on the scalar components of the vectors.
Vectors can be scaled. If r is a vector, and can be represented as an arrow, what do you suppose 2r is? Why yes, of course, it's an arrow having the same direction as r, but twice as long. Mathematically:

2r = (2x, 2y)

and, more generally, for any scalar k:

kr = (kx, ky)
So what is 0r? It's 0, of course.
If we can add and scale vectors, we can also generate new vectors from them. The best way for me to show you this is to change notation a bit. Suppose that a and b are vectors. Then I can define a new vector:

c = ua + vb

where u and v are any two scalars.
A special case occurs when a and b are collinear: they have the same direction. In that case, we can write:

b = ka

for some scalar k, so that c = (u + vk)a.
In this special case, the only vectors we can generate are those that are also collinear with a.
On the other hand, if a and b are not collinear, they are said to be linearly independent. This opens up whole new vistas, because now c can be any vector, anywhere in the space defined as the x-y plane.
This concept leads us to another useful one. Suppose I want to define some primitive vectors that I can guarantee will be linearly independent? I can't do much better than to pick two vectors aligned with the two coordinate axes. While I'm at it, I might as well make them unit vectors, having length 1 (no units, please). In other words, let:

i = (1, 0)
j = (0, 1)
Yes, I know, using i as one of the symbols seems risky, because we also traditionally use i to denote √-1. Hey, I didn't invent the tradition; I'm only reporting it. Sometimes, to emphasize that the vectors are unit vectors, you might see people put a "hat" over them, as in î and ĵ. Personally, I try to avoid extra bric-a-brac in my equations, where I can. Your mileage may vary.
Given these two unit vectors, we can write the vector r as:

r = xi + yj

or, for our example, r = 8i + 3j.
Before we leave the subject of maps and routes, take one last look at Figure 2. Did you notice how I sneaked in that angle, θ? That's the angle the vector r makes with the x-axis. As we've seen, it takes two numbers to specify the vector. But there's no law that says that the two have to be coordinates x and y on the Cartesian axes. I could just as easily use that "magnitude and direction" notion, and give you the scalar length of the vector and its orientation as given by θ. Engineers like brevity too, and often give the two polar numbers as (r, θ).
Traditionally, the length of r is also denoted by r, but without the boldface. That is:

r = |r| = √(x² + y²)
If you remember your trigonometry, you remember the conversion formulas:

x = r cos θ
y = r sin θ

and, in the other direction:

r = √(x² + y²)
θ = tan⁻¹(y/x)
Sharp-eyed mathematicians will note that the arctangent becomes indeterminate when x = y = 0. Fortunately, the folks who design programming languages were kind enough to give us the function atan2( ), which is the four-quadrant arctangent function. In any decent implementation, atan2(0,0) should return 0, which is as good an angle as any other.
Polar coordinates have their uses, especially if you're an airline pilot like Kareem Abdul-Jabbar, or a radar operator. In their vernacular, the numbers are called range and azimuth.
Even so, we still prefer to use Cartesian vectors when possible, to avoid those expensive trig functions. My advice: Use the polar coordinates only for input and output.
So far in this discussion, I've limited my examples to the two-dimensional Flatland of the x-y plane. But that was only so I could keep my figures on the page, instead of sticking out of it. In reality, of course, the universe has three spatial dimensions. When I give you the directions to my place, I could also give a floor number, or the altitude part of my GPS location. Everything we've done so far is easily extensible to three dimensions, including the addition of a new basis vector, k, aligned with the z-axis. Of course, in this case polar coordinates must now become spherical polar coordinates, and the conversion math gets quite a bit more complicated. See Figure 4.
The conversion equations are:

x = r cos φ cos θ
y = r cos φ sin θ
z = r sin φ

and, in the other direction:

r = √(x² + y² + z²)
θ = tan⁻¹(y/x)
φ = tan⁻¹(z / √(x² + y²))
As before, we should use atan2( ) to avoid problems when r = 0.
What good are they?
Early in this column, I promised to show you how and why vectors are so useful in the laws of physics. So far, however, I've talked mostly about their mathematical properties. Let me fix that now.
Newton's law of gravitation says that the gravitational force between two masses obeys an inverse-square law. In scalar arithmetic:

F ∝ m1m2 / r²

Where m1 and m2 are the masses, and r is the scalar distance between them. Throwing in the proportionality constant, we can write:

F = Gm1m2 / r²
Here G is the universal gravitational constant. In SI units, it is:

G = 6.674 × 10⁻¹¹ N·m²/kg²
(Yes, Virginia, it has units!)
But Newton's law doesn't just tell us the magnitude of the force; it tells us the direction. The force is always directed along the line between the two masses. But that line is simply the relative position vector we've been talking about. If one of those masses were you, and the other me, you'd be attracted to me (watch it!) by the vector force:

F = (Gm1m2 / r²)(r / r)
Politely following Newton's third law of motion, I'd be attracted to you by an equal and opposite force.
Note that, to get the vector force, we simply multiply its scalar value by the unit vector:

r̂ = r / r
How cool is that?
Coulomb's law is another inverse-square law, giving the force between two charged particles. It is:

F = -(1 / 4πε0)(q1q2 / r²)(r / r)

Where ε0 is called the permittivity of free space, and is given by:

ε0 = 8.854 × 10⁻¹² F/m
The direction of the force is a little tricky because, unlike mass, an electric charge can be positive or negative. In parallel with our convention established in Figures 1 and 2, the force in Coulomb's law above is the force acting on you. We need the minus sign to get the direction right.
There are a lot more areas in physics where vectors simplify the problem, but for some of them, we need to talk about derivatives. I'm talking about the derivative with respect to time.
If you've ever driven a car, you know about derivatives. As you motor down the road, your odometer gives you the distance you've traveled since the car was built. Your trip odometer gives the distance since you last reset it.
You have another instrument on your dashboard, which shows the time derivative of the distance. We call that instrument a speedometer. Mathematically,

v = ds/dt
You have two controls on your floorboard. The name of one of them--the accelerator--gives you a hint as to its function. It controls acceleration, which is the time derivative of speed:

a = dv/dt
Both equations work beautifully for vector as well as scalar measurements:

v = dr/dt
a = dv/dt
I hate to give you yet another notation, but as I told you, we physicists are a lazy lot. Eventually we got tired of writing down all those fraction-looking thingies. So we devised yet another shorthand notation, which is to replace the time derivative with a simple (and sometimes nearly invisible) dot over the parameter. So:

v = ṙ
a = v̇ = r̈
What follows next should blow your mind. Newton's second law--the one on which all of the math of dynamics is based--becomes simply:

F = mr̈
Lately, I've been working on trajectories to the Moon. Combining Newton's laws of gravitation and motion, I can write:

r̈ = -GM r / r³
Where M is the mass of the Earth (or Moon, as the case may be). Even more simply, I can write:

r̈ = -µ r / r³
Where µ = GM. You've just seen an equation containing four characters and some jots and tittles, that is the basis for all of Celestial Mechanics. Now, that's what I call applied math at its best.
I'll just mention, in passing, that the momentum of a body is:

p = mv
The law of conservation of momentum is one of the most profound and inviolate laws of nature, second only to the law of conservation of energy.
At this point, you might think that we've pretty well exhausted the uses of vectors in physics. Silly you. We've hardly gotten started yet. So far, I've only told you about three operations: +, -, and the unary minus. There are more.
One of them is not the division operator. The expression a/b makes no sense, and doesn't exist. But as if making up for this shortcoming, vector math gives us two--count 'em, two--product operators. I can show you their math derivations, but first I want to show you how they appear in the laws of physics, whether we want them or not.
There's a concept in physics called work. It has units of energy, and represents the amount of energy we might expend moving something around against a resisting force. Another concept is power, which is the rate at which work is done:

P = dW/dt
It's important, however, to recognize that, just as in life, you get no credit for doing work that is unproductive. If the thing you're pushing--or pulling--against doesn't move, you may feel tired, but you've done no work. The only work you get credit for is the work you did in the direction you were pulling. See Figure 5.
Here I've shown a block sliding along on a horizontal surface. I'm tugging on a rope, with a force F, and the block is moving with velocity v. But remember, I don't get credit for all that force I'm exerting; just the component of the force that's parallel to v.
I hope you can see that the magnitude of that component is equal to F cos θ (note, no boldface on the F). Using this value, the power I'm applying to the job is equal to:

P = Fv cos θ
Now I'm going to assert that this equation is the very definition of the inner, or scalar, vector product. As the name implies, the result of the scalar product is … um … a scalar. For any two vectors a and b, the scalar, or dot product, is given by:

a · b = ab cos θ

where a and b are the (scalar) magnitudes of the two vectors, and θ is the angle between them.
Using this definition, the power I'm generating in Figure 5 is simply:

P = F · v
It's important that you see the relationship between the dot product and the physical world. Other folks would show you how to calculate it, as I've done many times. But in this case, I've shown you the definition of the dot product first, to emphasize that the definition of the operator comes from the physics, and not the other way round.
The cross product
We have one more product to define, and again, its mathematical definition is going to seem crazy to you, until you see that it has to be that way to fit the physics. Take a look at the Figure 6.