Completing the vector class


Jack adds more functions to the vector class and either admits high treason or finally sees the light, depending on your point of view.

Since we met last, I've added quite a number of member functions to the vector class–more than I can tell you about in this month's column. Instead, I'll just hit the high spots, talk a little philosophy, and post the full listing at

For those just joining us, we've been developing a C++ class suitable for doing math with vectors. The operative words are “vector” and “math.” We're talking about vectors in the math/physics sense, not just abstract arrays of numbers.

To IDE or not to IDE?
First, I want to discuss a more philosophical matter. I may cause some controversy here, but I can't keep silent any longer.

There seem to be two kinds of programmers in the world: those who like to use an integrated development environment (IDE) and those who don't. IDE lovers like to think that their productivity is improved by using one. Many IDE shunners–talented people whose work I admire–claim that they have the edge on productivity. I'm sure they have their reasons, but there's also a certain macho mystique associated with doing things the old-fashioned way.

In the “good old days” when men didn't eat quiche, there were no IDEs. There was only one way to develop code: write the source code using some standalone editor, then submit it to a batch compiler system. In the very old, good old days, the “editor” was me, sitting in front of an IBM 026 card punch. The “librarian” was also me, selecting object-file card decks from my desk drawer and stacking them in order.

Even then, the macho mystique was firmly on scene. Some prided themselves on their ability to get things done without the aid of “crutches” like compilers. One fellow told me, in all seriousness, “Assembly language is for wimps. Real programmers code in absolute binary.” That's a verbatim quote, except for the “wimp” term. He would have used the 1960 equivalent, which I've forgotten.

Being basically lazy, I'm very much an IDE lover. My first exposure came with the original Borland Turbo Pascal. From inside the screen editor, I could compile and run the program with a single keypress. A compile error would bop me back to the editor, with the cursor sitting there blinking at the point of the error. I embraced the IDE concept with vigor and have never looked back.

I know excellent software people who still shun the use of an IDE. This is especially true in the case of embedded systems programming, where IDEs are relatively new. But then, I also know Unix/Linux wizards who are still using the vi editor, convinced that they're on the bleeding edge of technology.

If you prefer non-IDE, command-line tools, you're probably very familiar with the Unix-heritage makefile system, which lets you explain the relationships between all the files in a project. I've had to write makefiles. I didn't like it.

An IDE manages the project for you, building its own makefile behind the curtain. In the end, the question is, do you want to write your own makefile or let the IDE do it for you. For me at least, it's not a hard decision.

Most embedded systems components–hardware, chips, operating systems, and so forth–need some supporting software. Hardware and software vendors supply any support software that's needed. A few years ago, at an Embedded Systems Conference, I noted that virtually all the vendors provided C-based development systems–almost always using the GNU toolset. Only a couple of years later, it seemed that everyone had an IDE, usually based on Microsoft's Visual Studio. To me, the difference was both dramatic and noteworthy. It's my strong belief that the IDE leads to much higher productivity, especially in the embedded systems arena. I'm glad to see the trend in that direction.

At my day job, I use an IDE–Microsoft's venerable Visual C++ 6.0. Until recently, I've been using Borland's C++ Builder to develop software for this column. Recently, I ran into problems with it–problems that made the job harder than it needed to be. Installing Visual C++ 6.0 here at home didn't seem to help much. It was time for an upgrade.

As most of you know by now, both Borland and Microsoft currently offer free versions of their compilers. Borland began the trend by offering only their command-line compiler free; you still had to pay for the IDE. More recently, they've made available the IDEs as well. Microsoft followed suit.

Microsoft's offerings are all based on Visual Studio and are fully functional IDEs. I downloaded the products for C++, C#, and a few others.

What I'm going to tell you next may surprise you. Old-time readers know that I'm no fan of Microsoft. So when I tell you that this new, Visual Studio-based IDE is the best I've ever used, bar none, you can take that to the bank. It has every feature I've ever wanted in an IDE, plus many I hadn't thought of yet.

I just thought you should know.

Now I must make a correction and confession. In my last column, I showed you a number of constructors for class Vector and pointed out that you can have as many as you like, with any number of arguments, as long as the compiler can sort out which one you intended to invoke. That's true enough. Then I said the assignment statement is sort of parallel to the constructor. A constructor creates new objects of the class; an assignment statement copies data to already-existing objects. Also true. But then I took the parallel one step too far, showing you both constructors and assignment statements with multiple arguments.

That, of course, is rubbish. An assignment statement expects only a single argument on the right of the “=” sign, and will complain mightily otherwise. You can write as many functions as you like with names like CopyTo or AssignTo, but operator = expects one and only one parameter. I'd have retracted that statement if I'd thought about it for more than a nanosecond. I guess I didn't. Sorry.

Back to business
Now, back to the vector class. I've made a lot of changes and additions since you saw it last. Two changes follow from the change in development environment. Most are additional member functions to flesh out the class. And one is profound, going right to the heart of the structure of the class. Being something of a sadist, I'm saving that one as a cliffhanger for the next column.

The first change is trivial but cool. As you know, in C/C++, if header files are nested (one includes another), you can get multiple-define errors. The standard cure is the include-guard mechanism:

#ifndef FOOBAR
#define FOOBAR
#endif

It's a kludge, but it works. After the first pass, the switch FOOBAR is defined, so the body of the file is skipped. Of course, you risk problems if your choice of switch names isn't unique. But even on its best day, the include-guard mechanism is still a kludge.

The new Microsoft compiler supports the preprocessor directive:

#pragma once   

which is oh, so much cleaner.

The art of unit testing
The second change involves the main program, which is also the test driver. I hope I don't have to tell you that every C++ function should be unit tested. You write a test driver that calls the Unit Under Test (UUT) with various sets of input parameters and reports the output.

My friend and colleague, Jim Adams, has this to say about unit testing:

    When it comes to software, never ask a question unless you already know the answer. If you don't know the answer, it's no test.

How do you know the answer? In olden times, we'd perform hand checks. Because turnaround time was so slow, we had plenty of time to do them.

Let's be clear: hand checks are not fun. I absolutely hate to do them. Others seem to hate them even more. But they're absolutely essential, just the same.

In today's high-speed world, it seems that some developers feel they can skip the hand-check part. They're wrong.

If there's anything more foolish than not doing a hand check, it's recalculating the same one–or a similar one–over and over. When we get one that works, we should find a way to squirrel it away for future reuse.

With the advent of modern, interactive development environments, it's tempting to just write a test driver that lets you give the UUT input values from the console and report the results there. I've done that many times. So have you. And that's OK, if you can tell at a glance that those results are correct. Otherwise, you've violated the Adams Principle: every question deserves a known answer. To do it right, you must build the test values–as many tests as you think you need–into the test driver. Sometimes, the old-fashioned way is still the best.

What's needed is a way to compare the output of a UUT with its expected outputs. C/C++ systems give us an excellent mechanism: the assert( ) function. Simply compare the output of the UUT with the known answer(s) and assert that they're the same. If they aren't, the assert fails.

This has been my preferred way of testing for quite some time. But I couldn't use it with the Borland compiler, because that IDE doesn't handle asserts properly. The assert raises an exception, but the compiler has no handler for it, so the test just plain crashes, with no indication where or why.

The Microsoft compiler does it right. It catches the exception, showing you which assert failed. From there, it's pretty easy to figure out what went wrong.

I've learned one other lesson from Jim:

    Test drivers should be decidedly non-verbose. You don't need lots of messages that say, “Ok, now I'm starting to test unit X. It passed the first test. It passed the second test . . . .” If the test driver gets to the end without failing an assert, it's correct. In this case, no news is good news.

Because I can now use the assert mechanism, I've completely rewritten the test driver to use the “silent regression test” approach. I should mention that comparing two floating-point numbers is risky because of the potential for roundoff errors. In many cases, it's not a problem, but it's a good idea to have an IsAlmostEqual( ) function handy.

More functions
I've fleshed out the vector class with all the functions I think are needed to make it a production-quality class. I'll describe them briefly here; you'll find the full listings at

I mentioned the constructors in my last column. I have four of them:

Vector();
Vector(const Vector &a);
Vector(const double a[ ], size_t n = 3);
Vector(const double x, const double y, const double z = 0.0);

The first one is, of course, the default constructor. It's the one that the compiler will invoke if it's building new instances of the object–perhaps arrays of them. Because the compiler can't know in advance what size the vector should be, Vector( ) creates a null object. Remember that you must make it real with a call to Vector::Init( ) .

The second constructor is the copy constructor–one of the canonical set of functions you should include for all classes. It's the one that lets you write:

Vector a;
Vector b(a);

The third constructor creates a new vector from an ordinary array. This one doesn't need initializing since both the size and content of the vector are given.

The fourth is specialized for creating 3-vectors, which are so ubiquitous in physics and engineering. The default value of z lets me work with 2-vectors. Instead of having to convert between 2-vectors and 3-vectors–something I've done a lot of in the past–this constructor simply promotes all 2-vectors to 3-vectors. After all, it's not as though we're hurting for RAM space.

To create a new 3-vector from its scalar components, write:

Vector(x, y, z);   

To create a 2-vector, simply write:

Vector(x, y);   

The constructor quietly converts the data into a 3-vector whose z-component just happens to be zero. I wish I'd thought of this trick decades ago.

The math functions
In many ways, the math operations are the easiest part of the vector class. Remember, there are only four operations allowed: addition, subtraction, and two kinds of multiplication. In writing these functions, I've leaned heavily on the lower-level operations built into the file SimpleVec.cpp.

In my last column, I gave you a sort of hierarchy of functions based on efficiency, shown in Table 1.

As you can see, functions that operate on vectors in place, like +=, are more efficient than the infix (two-argument) operators like +. The latter require temporary values; the former don't. Here are the implementations for the addition operators:

// Vector add in place
Vector & Vector::operator +=(const Vector &a)
{
   vAdd(p, a.p, p, sz);
   return *this;
}

// vector addition
Vector operator +(const Vector &a, const Vector &b)
{
   Vector retval(a);
   retval += b;
   return retval;
}

As usual, we can debate whether we want to squander efficiency by calling the SimpleVec function, vAdd. It's only got two executable lines of code, so we can certainly put the code inline if we choose to.

The dot product operator uses the “*” character:

// dot product
double Vector::operator *(const Vector &a)
{
   assert( == sz);
   return vDot(p, a.p, sz);
}

In general, the compiler enforces the rules relating to infix operators. That is, it interprets the statement:

s = a * b;   

as equivalent to:

s = a.operator * (b);   

Alternatively, we can choose to define friend functions in which both arguments are given explicitly. For example, we can define the friend function:

double operator *(const Vector &a, const Vector &b)
{
   assert( ==;
   return vDot(a.p, b.p,;
}

The last arithmetic operator we need is the cross product. For this operator, I've chosen operator ^ . In this case, operator ^= is not useful. The result of a cross product can't be the same as either of its input arguments. The code is:

// cross product
// input vectors may be dimensioned more
// than 3; only the first three elements will
// be used
Vector operator ^(const Vector &a, const Vector &b)
{
   assert( >= 3);
   assert( >= 3);
   double retarray[3];
   vCross(a.p, b.p, retarray, 3);
   Vector retval(retarray);
   return retval;
}

Note the use of the symbols >= in the size tests. You can invoke the cross product operator with vectors of size greater than three. The cross product will still work, using only the first three elements of each argument. Although the value of being able to do this is obscure, there is at least one nontrivial case where it's useful.

That's all the room we have for this month. Next time, I'll discuss the remaining operations such as comparison operators and unit vectors.

Now it's time for the cliffhanger/teaser. Remember that I have no mechanism for defining arrays of vectors and telling the compiler what size they should be. I can't, for example, write:

Vector M[4][5](3);   

The compiler syntax simply doesn't exist. I could have chosen to give the default constructor a default size, as in:

Vector(size_t sz = 3);   

But what if length three is not what I want? I have no mechanism for undoing the size assigned by the compiler, short of deleting the allocated storage of each vector and reallocating it. That's why I introduced the Init( ) function. Remember, it's only needed for the case of the default constructor, not the other three constructors.

I'm not exactly thrilled by the need for function Init() . I'll bet you aren't, either. Just recently, I thought of an approach that solves all the problems and makes the need for Init( ) disappear. What's more, the solution is simple and straightforward and imposes virtually no burden for most applications, nor does it require changing any of the existing member functions.

How does it work? Ah, that's the cliffhanger part. To see it, you're going to have to wait for the next column. See you then.

Jack Crenshaw is a senior systems engineer at General Dynamics and the author of Math Toolkit for Real-Time Programming. He holds a PhD in physics from Auburn University.

Reader Response

I really admire the clarity of your presentation! It's like a textbook example should be! Thanks a lot!

Joseph Manlius
