# Random thoughts

This is going to be a different kind of column for me. As I was planning it, I considered several possible topics. I couldn't decide which one to cover, so I decided to cover them all. You may find some of them incomplete; I'm giving you my thoughts as I'm having them. I welcome your thoughts and comments.

**Vector/Matrix class redux**

Whenever I set out to (re)create a vector/matrix math package, I always wrestle with the decision: should I include separate functions for the special case where the vectors are dimensioned 3, and the matrices, 3×3? Clearly, classes that can handle general, n-dimensional vectors and m × n matrices can also handle the case m = n = 3. On the other hand, it's certain that if we know in advance the dimensions of the objects, the code will be a little faster and more compact.

While it's important to be able to deal with general forms of the mathematical entities called vectors and matrices, there's a good argument for the specialized functions as well.

We live, after all, in a three-dimensional universe (unless your field is String Theory). Most of the computations I do (space mechanics, rocket and aircraft dynamics, robotics, dynamic simulations) involve the motions of real, physical bodies in this three-dimensional space. It's the discipline called dynamics, whose math is defined by Newton's laws of motion.

The mathematical entities called vectors and matrices weren't exactly invented specifically to deal with physics problems, but they might as well have been. The simplifications that result from their use in physical problems are so profound that it's hard to over-emphasize the point.

That being the case, I'm always tempted to define special computer functions to process 3-d vectors and matrices. Usually I succumb to the temptation. When I was developing the C++ classes called **Vector** and **Matrix**, I wrestled with the same decision. For the sake of brevity, I chose to omit the special 3-d case. For the column, I was focused more on showing you the principles than on defining a production-quality library of functions.

However, the argument for the special case is even stronger than usual when we're defining C++ classes. That's because my general-case solutions required dynamic memory allocation.

Any time we declare a new object of a certain class, whether it be a named variable or an anonymous temporary, the compiler is going to have to invoke the memory manager. That's a given. But in my general-purpose functions, I also had to dynamically allocate the storage for the object's member data, because we don't know the size of the data until the constructor is invoked. So with every declaration of one of the general-purpose objects, we get a double dose of memory allocation.

If you declare all named objects statically, you can move most of the memory allocations to the initialization phase of the program. We might do this, for example, when writing flight software for a satellite or missile.

But if you make use of those lovely overloaded operators, you can't avoid the creation of temporary objects, and the overhead to construct and destruct them.

Finally, there's the issue of the vector cross product, which only works for 3-vectors. In our n-vector class, I had to include tests to make sure that both vectors were, in fact, dimensioned 3. That's not a problem in the special case. (Optional for extra credit: some of us gurus in the know, know that the cross product isn't really limited to 3-vectors. There's a 4-d cross product as well. If you're “one of us,” drop me a line. Say the magic word, “quaternion,” and I'll send you the Secret Decoder Ring Tone.)

If we know in advance that all the vectors will be dimensioned 3, and the matrices 3×3, the double dose of dynamic allocation goes away.

The situation with respect to matrices is even nicer. If you followed the creation of the **Matrix** class, you'll recall that we struggled with the issue of allocating matrices with arbitrary dimensions. Because C/C++ doesn't support conformant arrays, we can't just declare the array data in the form:

```cpp
double x[m][n];
```

Instead, we had to resort to defining a singly-dimensioned array and generating our own code to index into it. This isn't a problem for the 3×3 case. We simply declare the array:

```cpp
double x[3][3];
```

and let the compiler use its own indexing, presumably optimized out the gazoo.

Recently, I had occasion to do a lot of computations in the 3-d universe. So I succumbed to the temptation and wrote a new C++ class called **ThreeVec**. As the name implies, the class is specialized to the case of 3-d vectors. I'm going to show it to you, but in a rather unconventional way. Instead of walking you through all the design decisions yet again, I'm simply going to post the files on Embedded.com. I think you'll find that the differences between the n-vector and 3-vector cases are straightforward enough that you won't need me leading you by the hand. Please let me know if you need more help.

**Do something …**

For the same project, I also need a class for 3×3 matrices. I'll tell you about it soon, but first I have a confession to make. I have a character flaw (No! Really?).

I'm basically an optimizing sort of fellow. Before I do anything, I tend to look at the options, weigh them carefully, and choose the one that's, in some sense, optimal. I'm the guy who tries to guess which lane I should be in at a traffic light. There's no malice involved, no jamming my way in front of someone else. But given a choice, I'll choose the option that's most likely to get me where I'm going sooner with fewer hassles.

This tendency to optimize can be both a blessing and a curse. A blessing in that the code I generate is usually pretty good. A curse because it takes me longer to produce it. Because I'm looking for the best solution, I can never keep up with the software wizard who simply codes the first thought that comes to him.

In another lifetime, I built and raced midget cars. My kids also raced, and I was their chief mechanic. Naturally, optimizing the route to the front is a good habit for a driver, and I was pretty good at it. But optimization is also good when deciding how best to tune or modify the car. When it came time to make a modification, I'd find myself spending considerable time trying to optimize the design. The design optimization process goes something like this:

```cpp
while (1) {
    if (option(a) < option(b)) return a;
    if (option(b) < option(a)) return b;
}
```

The problem, of course, is that there's no predicate for equality. If I can't find a clear reason to choose one option over the other, I'm paralyzed into inaction, stuck in an infinite loop.

This sometimes happened when I was planning a new modification. I'd sit in the garage, holding the parts I could use, and literally weighing my options. I'd look at the parts in one hand and see advantages in using them, but also disadvantages. I'd look at the other and see a different set of pros and cons. I could find no clear winner.

Fortunately, my decision-making system had a priority interrupt. Not hearing any work being done, my wife would open the door behind me. She'd shout, *“Do something, even if it's wrong!!!”* and slam the door again. This interrupt was usually enough to jog me out of the loop.

**A 3×3 matrix class?**

It's this character flaw that keeps me from being able to show you a completed 3Ã—3 matrix class. Until recently, I've been stuck in one of those while-loops, and this time the ex-wife isn't here to assert the interrupt. I wasn't able to exit the loop until I began writing this column. I'd like to tell you what I have, and see what you think.

Clearly, a 3×3 matrix is not just any old matrix, even any old matrix with one dimension of three. It's a *square* matrix, and square matrices have special properties. For one thing, they have well-defined diagonals. A square matrix might, in fact, *be* diagonal; that is, its off-diagonal elements may all be zero. That's important, because a diagonal matrix has a trivial inverse:

$$D = \begin{bmatrix} d_1 & 0 & 0 \\ 0 & d_2 & 0 \\ 0 & 0 & d_3 \end{bmatrix}
\quad\Longrightarrow\quad
D^{-1} = \begin{bmatrix} 1/d_1 & 0 & 0 \\ 0 & 1/d_2 & 0 \\ 0 & 0 & 1/d_3 \end{bmatrix} \tag{1}$$

A square matrix might also be symmetric. If so, it can easily be transformed to a diagonal matrix. When I'm doing arithmetic with 3×3 matrices, they're almost always associated with coordinate rotations. For example, to convert a vector from a rotating coordinate system to a fixed one, I might write:

$$\mathbf{x}_{\text{fixed}} = T\,\mathbf{x}_{\text{rot}} \tag{2}$$

The matrix T is even more specialized than most. It's a *rotation matrix*, also called an *orthonormal matrix*. It has two very important attributes: its determinant is 1, and its inverse is the same as its transpose:

$$\det T = 1, \qquad T^{-1} = T^T \tag{3}$$

This is very important, because the transpose of a 3×3 matrix is trivial to generate, whereas an inverse takes a lot longer. Also, because the determinant is known, we don't have to worry about the matrix being singular.

Finally, there is more than one way to represent a rotation. The matrix is one choice, but not the only one, or even the best one. Other choices include Euler angles and quaternions. Euler angles are useful because people tend to be able to visualize the rotations better. But they're terrible choices for computations. Quaternions are the best and most compact, but good luck trying to visualize them.

Side Comment: You know you've arrived as a guru when you realize you can visualize a quaternion. Since it's a four-dimensional vector, that takes a little practice.

**Decisions, decisions**

Now that I've given you the background, I can tell you why I was stuck in the while-loop. I know that I want to use 3×3 matrices mostly to effect coordinate rotations. But is that *always* going to be the case? Maybe not. And if it is, should I include constructors to convert a set of 3-vectors to a matrix? Or a set of Euler angles, or a quaternion? If I demand that the matrix be orthonormal, I'd better not try to create one from vectors.

For that matter, if I'm dealing exclusively with rotation matrices, maybe I need a function to make sure they *stay* orthonormal.

Here's why. When we're writing dynamic simulations involving rotations, we have to numerically integrate the elements that make up the state vector. If one of those elements is a rotation matrix (or equivalent structure), numerical roundoff errors can cause the matrix to drift away from its nominally orthonormal state. Those of us who write such simulations often include functions to re-normalize the matrix. It's not an easy thing to do, and doing it in an optimal way is even harder.

So a re-normalizing function is going to be almost a necessity for rotation matrices, but it's going to look really, really strange in a general-purpose matrix package, even one specialized to 3×3 matrices.

Of course, you know the conceptual solution to problems such as this: inheritance. Write a C++ base class for 3×3 matrices. Then define a derived class that specializes the class even further, to rotation matrices. Most of the operations will be the same, but a few functions, such as re-normalization, can be added, and the inverse function can be changed to invoke the simpler transpose function.

There's only one problem with this approach. A rotation in 3-space is a unique sort of thing, and the function it performs is vastly different from the more generic matrix product. Heck, the member data for a "rotation matrix" need not even include a matrix. Rotations can also be described by a quaternion or a set of Euler angles.

And because there are three possible ways to describe the rotation, we're also going to need conversion functions to transform one set of descriptors to another.

FYI, a couple of years ago I wrote a library of Matlab functions capable of converting between angle, matrix, and quaternion representations. In doing so, I even surprised myself, coming up with killer algorithms that resulted in code that was tight, accurate, and bulletproof. I'll be sharing the algorithms with you soon.

Most people who work every day with rotation-related problems choose the quaternion, because it's more efficient to do so (no angles involved, and fewer elements to store). One has to ask: what's the point of having a rotation matrix be a derived class of the 3×3 matrix if it doesn't even hold a matrix anywhere in its structure?

Off and on, I've wrestled with questions like this many times. I can't say that I've ever come up with the definitive solution. More often, I find myself in need of that priority interrupt.

But this time, I've finally got the structure I need. The trick is, there should be *no* class called **RotationMatrix**. Instead, there's a class called **Rotation**. When applied to a vector or another matrix (or **Rotation**), the operation may look like a matrix product, but the appearance is only superficial. Internally, a **Rotation** should be a separate type of object, worthy of its own class. Heck, it may not (and probably won't) even have a matrix as part of its member data. And in the best tradition of object-oriented programming, the way the rotation class is implemented should be transparent to the user. If I decide to change the internal representation from matrix to quaternion, that decision shouldn't matter one iota to users of the class.

When you think about it, the fact that I can define a rotation operation that looks like a matrix product is no different than defining products of integers or real numbers. I can declare:

```cpp
int i, j, k;
double x, y, z;
Vector X, Y;
Matrix A;
Rotation R;
```

and write:

```cpp
k = i*j;
z = x*y;
Y = A*X;
Y = R*X;
```

Yes, the statements look the same and mean much the same. That's the elegance of operator overloading, and it makes life incredibly easy for the programmer. But we have no particular reason for a rotation object to inherit the '*' operator from a matrix, any more than we might have it do so from class **int**. Making **Rotation** a full-blown, separate class is beginning to make a *lot* of sense.

**Wrapping up**

I'm glad we had this little talk. So often in life, I've found that as I try to explain a knotty problem to someone else, I realize that I suddenly know the solution. That seems to be what has happened here. Thanks for listening.

*Jack Crenshaw is a systems engineer and the author of "Math Toolkit for Real-Time Programming." He holds a PhD in physics from Auburn University and is a veteran of NASA's Apollo program, where he worked on the figure-8 trajectory. E-mail him at jcrens@earthlink.net.*