A vector means many things to many people. Pilot, mathematician, physicist: all of their definitions can help you do your job.
In my last column, I started a new topic, namely the development of a C++ function library for vector and matrix math. For me, it's a bit like going home; the very first code I ever wrote in a high-order language (Fortran) was a subroutine to add two vectors.
We'll continue in that vein in a moment. But first, a bit of explanation is in order.
Back in college, I had some professors who didn't explain much. They'd just start writing formulas on the blackboard (remember blackboards?) and deriving other formulas from them. I could usually follow the individual steps in the development. At the end, I'd be able to see the result and even parrot the derivation on demand. But sometimes I didn't quite "get it," because during the entire process, a good percentage of my brain was busy asking the other part, "Why is he telling me this?"
By contrast, other professors would explain the point up front. One prof used massive overkill in his lectures. Every day, he'd begin by reminding us what we'd done the previous day and explaining what he planned to do that day. At the end of the lecture, he'd recap what he'd done that day and tell us what he planned to do on the next. Most of my fellow students groaned through all this. I loved it. For once, I could use the entire brain (well, 10% of it, anyway) in watching the development unfold.
Judging from some of the e-mails I've received since my last column, I think I owe you more explanation. It's not enough just to show you what I'm doing: I need to spend a little time explaining why we're doing what we're doing (motivation) and why we're doing it the way we're doing it (rationale).
The motivation part is easy enough: we need to be able to manipulate vectors and matrices because the universe is three-dimensional. In this universe, it takes three numbers to describe the position of some object of interest. Even if we're navigating on a two-dimensional surface–city streets, for example, or the ocean–it still takes three numbers; one of them just happens to be unchanging.
For about the first 6,000 years of science or so, people got along without vectors. Oh, they still needed them, alright, and they used them. They just treated the three numbers as separate quantities, each described by its own equation.
In the 19th century, largely inspired by work based on Newton's laws of gravitation and motion, some genius (not Newton) said, "Hey, you know what? I keep doing these calculations three times each." And he sought to find a way to do them only once. Thus the use of vectors was born.
I hasten to add that there were two groups of people studying vectors. The first were the pure mathematicians, who saw them as a new kind of number, sort of like a generalization of the complex number. The contributions of the great mathematician and physicist, William Rowan Hamilton, are notable. To these folks, vectors didn't have or need any counterpart in the real world; they were simply things to build an algebra around.
Physicists saw things differently. They recognized the three-dimensional universe thing, and used vectors to describe real, physical quantities like position, velocity, momentum, etc. They used the algebra of the mathematicians, alright, but they only used the parts that related to the real world. And, as you'll see in a moment, they defined some new operations that the mathematicians hadn't thought of. To these physicists, vectors weren't merely a pleasant intellectual exercise. They were useful because they made our computations easier. Their interest in vectors increased tremendously with the discovery of electric and magnetic fields. It's hard to even imagine Maxwell's equations without vectors.
It only took about another 200 years for the engineers to catch on to the idea.
It's taken even longer for some software folk. On the one hand, we have a host of wonderful computer tools that let us do vector math–the names Matlab and Mathcad come to mind. And they have, indeed, helped us to solve problems that we otherwise might consider too hard to attempt. On the other, I can't tell you how many times I've seen, in code, constructs like:
Or, even worse:
I still see them, to this very day.
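For readers without the original listings at hand, the constructs in question presumably looked something like the following reconstruction; the code and names here are mine, not the original listings:

```cpp
// A reconstruction (names and code are mine) of the two inline styles the
// column criticizes. Construct 1: component-wise vector addition written
// out with explicit indices, repeated everywhere it's needed.
void construct1(const double a[3], const double b[3], double c[3])
{
    c[0] = a[0] + b[0];
    c[1] = a[1] + b[1];
    c[2] = a[2] + b[2];
}

// Construct 2: the same operation with separately named scalar variables,
// avoiding indices entirely.
void construct2(double ax, double ay, double az,
                double bx, double by, double bz,
                double& cx, double& cy, double& cz)
{
    cx = ax + bx;
    cy = ay + by;
    cz = az + bz;
}
```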
As for the rationale, I just gave part of it. My purpose in developing a vector class is to make such horrors go away, hopefully forever. Using vector notation greatly simplifies the math, and doing things once instead of three times each, not only shortens the code, but reduces the chance of error. Still, my approach involves a lot of design decisions, and I need to explain them as we go.
The first decision has to do with efficiency. Ask any real-time programmer, and he'll tell you that the reason he uses code like the constructs I just mentioned (I'll call them Constructs 1 and 2) is to save clock cycles. Now, normally I put efficiency way down on my wish list. Other factors, such as readability, maintainability, and ease of use, rate way higher. I tend to make an exception for vector/matrix math, simply because the operations are so primitive, and therefore tend to get used way down inside nested loops. Even so, I'll opt for a function call, rather than writing Construct 1 in line. Some may argue that a function call takes too much overhead. They might say, “Hey, I'm using a PIC processor. I really, truly, can't afford the overhead of a function call.” To that I reply, as I did last time, if that's really the case, you have problems that far exceed the overhead of that function call. You need a faster processor.
Furthermore, if your CPU is too slow for a function call, it probably also has limited memory. Calling a function uses less memory than writing the code inline–ask any Forth programmer. Surely that's worth something.
Look, I've been having this efficiency argument over and over, for the last 40+ years. Whole new generations of programmers have come and gone, giving the same old efficiency argument. I didn't buy it in 1960, and I sure don't buy it now. But, as a reader pointed out, there are some real-time applications–processing images, for example–where the processor really can't afford even the luxury of a function call. In that case, use something else. Use inline functions or create a C macro. If that fails, use the inline code, with my blessing. Just be sure you've exhausted all other options, first.
One last thought: you're wasting your time using Construct 2 instead of 1. Some folks think that, by eschewing indices entirely, they can get better code. It's not true. The compiler is smart enough to tell the difference between a variable as an index and a literal constant. The compiler will change the latter into a direct address, requiring no run-time penalty.
That's the rationale for using vector arithmetic. If you're writing real-time software for a cell phone, perhaps you don't need it. At the other extreme, if you're programming an aircraft navigation system–or any other system that deals with real, physical vectors–you can't (and shouldn't) do without it.
There still remains the rationale for doing the vector math package the way I do it. Many of your e-mails had to do with template metaprogramming in general, and expression-template libraries like Blitz++ in particular. John Phillips of Capital University in Columbus, Ohio wrote:
“I was just looking at your article . . . After introducing the abstraction you mention using operator overloading to achieve more natural notation, and you lament that this is not efficient. I'm wondering why you left out any mention of expression templates as a solution to the possible efficiency problems. Even first generation systems, such as Blitz++ or the MTL showed a combination of natural notation and efficiency close to (and occasionally even exceeding) hand-coded Fortran or C.”
Nigel Redmon agrees. "Another useful method to maintain high-level functionality while minimizing overhead is template metaprogramming," Redmon writes. He references "Will C++ be faster than Fortran?", a paper by Todd L. Veldhuizen and M. Ed Jernigan.
Bill Emshoff writes, “Jack says that with C++, he can 'have efficiency or simplicity but not both.' Not true, as long as the simplicity desired is in the user's code rather than the implementation of efficient arithmetic operator overloads. This magic can be accomplished once one realizes that the addition in the expression C = A + B need not really be evaluated until the assignment to C is made, and that 'template metaprogramming' can be put to use to build a compile-time recursive-descent parser. . . . I'd recommend reading Todd Veldhuizen's papers on the subject at the Blitz++ home page at www.oonumerics.org/blitz.”
You can also explore the Blitz++ library further on these sites, or Google it for yourself:
The upshot is that lots of people are still writing scientific software in Fortran, rather than Pascal, C, or C++. The reason is mainly that Fortran programs still outperform the others when it comes to raw, number-crunching performance. A lot of smart people have been working very hard to fix this problem, and they've settled on template libraries like Blitz++ as the solution.
If you're considering a new program that's heavy on scientific computations, and performance is a big issue for you, Blitz++ definitely deserves a look. You may well find that it works for you.
It doesn't for me, and here's why.
Those Fortran programs that the Blitz++ people seek to replace are not small programs. They tend to be big, complicated programs involving compute-intensive algorithms, which are used daily for production purposes. I'm thinking of finite-difference solvers for things like stress analysis, heat-flow analysis, and aerodynamics; for such long-term production use, clock cycles rule.
But I never (well, almost never) write software like that. In my day job, I tend to write two kinds of software. First are the analysis programs that I write to solve specific problems. This kind of software is not going to be used forever. I'm not going to run such a program 100 times a day for the next 10 years. I'm going to run it perhaps 50 times, total, get my answers, and walk away. For those cases, the only efficiency that matters relates to the time I spend from problem concept to final solution. Can we say, "ease of use"? Can we say, "reusable software"?
The other kind of software I write is real-time stuff to go into embedded systems (which, after all, should be our focus in these pages). Certainly, in an embedded system, efficiency is an issue, but not to the extent that it takes precedence over all else. First and foremost, embedded software should be correct. And not just correct, but demonstrably, verifiably correct. Efficiency is an issue, but only to the extent that a given task gets finished before it must run again. Execution time is more like a gate to get through, rather than something needing continuous improvement.
NASA, DoD, FAA, FDA, and many other agencies and organizations have very strict rules about how software like this gets written and tested. Mostly, the rules involve making certain that no scenario, no sequence of events, can possibly cause the memory to be filled, the stack to be blown, or any such calamity.
I'm having trouble imagining that such customers will accept code generated by a template processing engine. It's not impossible. Many agencies have already approved certain code-generating tools, such as Simulink and MatrixX. Enough tests have been run on these code generators to convince even the most skeptical that the generated program will do what the model going into the code generator said it should.
But I'm guessing that it will be a long time before a template library will be given similar approval. I could be wrong. If anyone knows of a case where a template library has been approved for building flight software, let me know. To my knowledge, that approval still lies in the future.
There's one other kind of software I write, and that's the examples I use in this column. It's true that this vector math library is intended to be for more practical, routine use than most of my examples here, but always, my main purpose is to educate. I don't just want to give you the world's most efficient solution; I want you to understand it, and be able to read it and modify it yourself, as the need arises. Somehow, I don't think turning the template metaprocessor into a recursive-descent parser is going to help you do that.
Over recent years, I've noticed that the practitioners of C++ seem to be bifurcating into two groups. There are the folks who like to write object-oriented code in the class-object style, making heavy use of the constructs built into C++. Then there are the folks who, perhaps inspired by the popularity of the Standard Template Library (STL), like to work with template metaprogramming. I can't say that one group is right and the other wrong–each has their points. But have you ever tried to read the STL? To me, it's all Greek. It may very well be true that template metaprogramming is the answer to the world's problems, and more power to the folks who do it. As for me, I'd sooner write an Ada compiler in Microsoft Basic.
I definitely will not be giving you solutions in these pages that involve building new templates. No way.
We have one last bit of rationale to cover, and that relates to my choices as to how best to do things. I had hoped to give you my reasoning as I went along, but I can see now that I should explain where we're going, at least for the near future. Here are some of my key decisions:
- My ultimate goal is to be able to do algebra with vectors and matrices, in much the same way that I'd write equivalent math equations.
- For computation-intensive uses, I also want efficient low-level functions. That means (at least) two sets of functions, one low-level, one a higher-level, C++ class.
- The use of function calls, or at least the illusion of them, is non-negotiable. If you can persuade the compiler, using either inline functions or macros, to give that illusion, more power to you. For 90% of my work, which includes embedded software, functions rule.
- The use of C++ is also non-negotiable. For my column, I have to pick one language to describe algorithms, and C++ is my choice. However, I also recognize that some readers want or need to do things in C. That's why I've tried to write the low-level functions in a sort of Fortranish, so they could be easily translated to C (or, for that matter, back to Fortran).
- I'm implementing the vectors as contiguous arrays of doubles. Frankly, I can't think of another way, although I'm sure someone else can. I'll also be using the obvious choice of indexing in C/C++, which is an index that counts from zero. In Fortran, one-based indexing is the norm, and using zero-based indexing may be awkward, at first. I've even seen implementations that deliberately define each vector one element longer than needed, and ignore the first element, specifically so the Fortran folks won't have to rewrite their algorithms. I flat refuse to do that. In the C++ class, however, I do supply both ( ) and [ ] operators, specifically to handle the math algorithms that depend on the indexing method.
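To illustrate that last point about dual indexing, here is a minimal sketch of what the two operators might look like. The class name and details are my own assumptions, not necessarily the column's actual class:

```cpp
// A minimal sketch (class name and details are assumptions, not the
// column's actual class) of the dual-indexing idea: operator[] counts
// from zero, C-style, while operator() counts from one, Fortran-style,
// so algorithms written against either convention translate directly.
class Vector3 {
    double v[3];   // contiguous storage of doubles, as the column specifies
public:
    Vector3(double x, double y, double z) { v[0] = x; v[1] = y; v[2] = z; }
    double& operator[](int i)       { return v[i]; }      // zero-based, C-style
    double  operator[](int i) const { return v[i]; }
    double& operator()(int i)       { return v[i - 1]; }  // one-based, Fortran-style
    double  operator()(int i) const { return v[i - 1]; }
};
```

With this arrangement, `a[0]` and `a(1)` name the same element, so a Fortran-derived algorithm can keep its one-based subscripts unchanged.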
There's one other decision that I haven't discussed before. That's whether or not to provide separate software for the case of three-dimensional vectors. I've said that almost all vectors that represent real, physical quantities have dimension three. But there are times when we want to do math-oriented operations with vectors of other dimensions. The obvious solution is to pass the size parameter into the function, so it can do the same operation for n-vectors. See the first function in Listing 1.
As you can see, the functions are identical except for the passed parameter, n. There is no question that we can use vadd( ) for 3-vectors–simply pass it a three. But after a time, that gets bothersome. What's more, every passed parameter increases the probability that the wrong thing gets passed.
There's also an efficiency issue (that word again). When the loop limit is a constant, the compiler can optimize, using loop unrolling, and completely eliminate the for-loop. The end result looks like the code in Construct 1. If the loop limit is a passed-in parameter, the compiler can't do this optimization.
Does this really matter? If we think other attributes are more important than raw clock cycles, do we really care whether the loop is unrolled or not? Perhaps not. I change my mind on this decision at least once a year. But look again at the listing. These functions aren't so big that we can't afford the ROM space for two of them. So my decision, which I hope to follow in the rest of this series, is to give you both sets of functions. You can use the 3-vector versions or ignore them. Your choice.
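Listing 1 isn't reproduced here, but based on the description, the two flavors presumably look something like this sketch; the function names are assumptions:

```cpp
// Sketch of the two flavors described in the text (function names are
// assumptions, not necessarily the column's). The n-vector version takes
// the size as a parameter; the 3-vector version fixes it at compile time.
void vadd(const double a[], const double b[], double c[], int n)
{
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}

void vadd3(const double a[3], const double b[3], double c[3])
{
    // Constant loop limit: the compiler can unroll this completely,
    // producing straight-line code with direct addresses.
    for (int i = 0; i < 3; ++i)
        c[i] = a[i] + b[i];
}
```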
Now let's get back to vectors.
What's in a name?
Many scientific terms–like force, power, and energy–also have counterparts in everyday speech. Sometimes this leads to confusion when folks don't understand that in science the words have very precise meanings.
It's the same way with vectors. The term means different things to different people. Perhaps you remember the line in the spoof movie Airplane!: "What's our vector, Victor?"
To a pilot, the term "vector" usually means simply a compass heading. If it also includes the range, it looks a lot like the mathematical definition, with one important difference: in the world of aviation, a vector originating in Dubuque is different from a vector originating in Newark. By contrast, in math and physics, a vector encodes something with magnitude and direction, but not its point of origin. Move it around without rotating it, and it's still the same vector.
To a computer scientist, a vector is often simply an address or an address offset. It's part of a command to jump to some new address, as in the term "jump vector." Other times, it's a whole array of such addresses, in which case it's more correctly called a jump table.
To a programmer, the term “vector” is often synonymous with “array.” It's simply a contiguous set of numbers, all of the same type, and addressable by index. What you use them for is up to you.
To a mathematician, a vector is something that behaves in a certain way under transformations. To a physicist, a vector represents some real, physical quantity like force, velocity, or magnetic field.
The distinction is important. Nowhere is it more important than in the C++ STL class called vector. No doubt, some people have found very effective uses for objects of class vector, but doing math is not one of them. You will not find any use of the STL class in this series. Ditto for the other oft-recommended class, valarray. Aside from my decision to eschew templates, those two classes simply don't do the things we need done. The kinds of vectors I'm talking about here are the ones that represent real, physical quantities.
A vector tornado?
As I mentioned earlier, physicists have been using vector notation, and the vector algebra that goes with it, for well over a century. For some universities, however, it took the engineering department a little longer. When I began college, I chose mechanical engineering as my major (I wanted to build fast motorcycles). At that time, the physics department was already using vectors heavily. But the engineering professors still preferred to express all their math in the equivalent form of three scalar equations.
In my junior year, I got some advice that I should consider transferring to a different university, one with a classier reputation. My father was so impressed that, in a moment of weakness, he offered to send me to any university of my choosing. I chose Cal Tech.
Now, Cal Tech discouraged transfer hopefuls like me. Like a fine football team, they preferred to get their students as freshmen, so they could indoctrinate them in the Cal Tech way of doing things. But I insisted they give me a chance, and they did. They required all of us transfer students, however, to take entrance exams.
I did well in those exams until I got to the math section. All their problems had to do with vectors. At that point, all I knew was the old chestnut, "A vector is something that has both magnitude and direction, like the wind."
Then I came across the problem that said, “Evaluate the cross product of the following vectors.”
I was stunned. You can multiply vectors? What the heck do you get when you multiply two winds? A tornado? I fell flat on my face and didn't make the cut. I returned to Auburn much chastened and humbled, with a burning resolve to learn everything I could about vectors. And I did.
Actually, that naïve notion that the product of two vectors is a tornado is not that far from true. The vector cross product is ultimately related to rotation.
There is no division operator in vector math. As though to make up for the loss, there are two multiplication operations:
- The scalar, inner, or dot product
- The vector, or cross product
The first is straightforward enough: you multiply the scalar elements of the two vectors, element by element, then you add the results together. The result is a single number, a scalar.
The cross product is a little strange, and if you've not seen one before, it may even seem completely arbitrary. Trust me, it's not. It's important for you to know that neither product was generated by the mathematicians as an intellectual exercise. Both products really do appear in the real, physical world. We define the products the way we do because the physical quantities behave that way.
Cross products only work with 3-vectors. Although it's possible to define a cross product for 4-vectors (and actually find it useful), most implementations refuse to consider any vector dimension except 3.
Figure 1 shows the motivation for the dot product. In physics, when you exert force on an object, and it moves some distance, you are doing work, in the rigorous physics use of the term. If you're pushing on a brick wall, it may feel like you're doing work; you may get hot and sweaty and tired, but you haven't done work in the rigorous sense. No motion, no work.
What's more, you only get credit for the motion that goes in the direction you're pushing. It's only the component of motion parallel to the force that counts. From geometric considerations, it's easy to show that the work is:

W = |F| |d| cos θ
where θ is the angle between the two vectors. If we define the dot product to be:

F · d = |F| |d| cos θ
Then we can write the vector equation:

W = F · d
In physics, all operations in which the product of two vectors gives a scalar result are dot products. In general, we'd rather not have to find the angle between vectors to get the answer. Fortunately for us, and even more fortunately for computer implementations, we don't have to. Suppose we define two vectors as Cartesian vectors, meaning three components along coordinate axes that are mutually perpendicular. Let:

a = (a_x, a_y, a_z)
And similarly for b. In that case, the dot product is simply:

a · b = a_x b_x + a_y b_y + a_z b_z
Easy, huh? The computer implementation is almost a one-liner. See Listing 2.
You will note that there is absolutely no error checking in this code. We must assume that the programmer knows what he's doing and won't use the library improperly. I realize that this is not a very robust approach. But it's certainly efficient, and it's very much in the good old Fortran tradition.
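Listing 2 itself isn't reproduced here, but based on the description (multiply element by element, then sum) it presumably looks something like this sketch; the function name vdot is an assumption:

```cpp
// Sketch in the spirit of Listing 2 (the name vdot is an assumption):
// multiply corresponding elements and sum the products. As the text
// notes, there is deliberately no error checking here.
double vdot(const double a[], const double b[], int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += a[i] * b[i];
    return sum;
}
```

For example, vdot applied to (1, 2, 3) and (4, 5, 6) with n = 3 gives 4 + 10 + 18 = 32.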
The other product shown in Figure 2 is the cross product.
As I mentioned, it might seem bizarre and arbitrary, but it's based on physical laws that really define the operation. Imagine that you're loosening a bolt using a wrench. When you pull on the wrench, you exert a force, and the force acting over the lever arm of the wrench produces a torque. But, similarly to the dot product, the only component of the force that matters is the one perpendicular to the wrench handle. You can pull in other directions all you like, but it won't contribute to the torque. If you were using a torque wrench, the readout would only read the value associated with that perpendicular component. It's relationships like this that define the way a cross product works. Notationally, we write:

L = r × F
where L is the torque. Its magnitude is given by the definition:

|L| = |r| |F| sin θ
Its direction is something very weird and wonderful. The vector of the cross product is perpendicular to both input vectors. Now, there are two possible directions that fit this rule. The right one is . . . um . . . the right one. That is, it's the one defined by the right-hand rule. Here's how it works. Imagine that you're rotating vector r into vector F. That is, place your right hand in such a way that, if you rotated r in the direction your fingers point, it would be parallel to F. Now stick out your thumb. It points in the direction of r × F.
Is that weird, or what? Yet the rule fits what happens in the real world. In the case of the wrench, the torque vector points along the direction the bolt will loosen.
Torque, angular momentum, and many other physical parameters are cross products. The use of cross products got a whole lot more interesting when we discovered that magnetic and electrical forces obey cross-product rules. When an electron moves through a magnetic field, the force exerted is given by:

F = q v × B
where q is the electric charge, v the velocity, and B the magnetic field strength. So, you see, we didn't just make up the rules about cross products to make life harder for you. The universe really, truly works this way.
Is there a method for computing the cross product that doesn't involve θ? Yes, in fact there is. I won't go into the derivation, because it involves a determinant. But the algorithm for c = a × b is:

c_x = a_y b_z - a_z b_y
c_y = a_z b_x - a_x b_z
c_z = a_x b_y - a_y b_x
The code follows directly from this. It's shown in Listing 3. Remember, this definition only works for 3-vectors. For convenience, you can implement a version for n-vectors where n = 3, using an assertion to make sure that the function is never called otherwise. My advice, however, is to implement only the 3-vector function, and that's what I've done.
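Listing 3 isn't reproduced here; a sketch consistent with the cross-product definition would look something like this (the name vcross is an assumption):

```cpp
// Sketch in the spirit of Listing 3 (the name vcross is an assumption):
// c = a x b, defined only for 3-vectors. Each component of c combines the
// other two axes of a and b, cyclically.
void vcross(const double a[3], const double b[3], double c[3])
{
    c[0] = a[1] * b[2] - a[2] * b[1];
    c[1] = a[2] * b[0] - a[0] * b[2];
    c[2] = a[0] * b[1] - a[1] * b[0];
}
```

A quick sanity check: crossing the x unit vector into the y unit vector should give the z unit vector, per the right-hand rule.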
With vector add, subtract, and the dot and cross products, we've pretty much exhausted the operations one can do. There are lots of incidental things like finding the magnitude (norm) of a vector, inputting and outputting vectors at the console, copying them, stuff like that. But the math part is done. Next time, I'll show you the complete listing, and we'll also talk about practical uses of the library.
See you then.
Jack Crenshaw is a senior software engineer at General Dynamics and the author of Math Toolkit for Real-Time Programming. He holds a PhD in physics from Auburn University. E-mail him at email@example.com.