A Considered Response To “Nuts To OOP!” And The Last Word

by Michael Barr

Programming languages are interchangeable: any program can be implemented in any language. That's a fact of computing. But that fact doesn't make our work as programmers any easier. Selecting the right language for a particular project can mean the difference between a small program and a large one, an on-time delivery and a series of missed deadlines, a bug-free product and a bug-riddled one, a daydream project and a nightmare. Knowing which language to use for which program is half the battle. You've got to pick the best tool for each programming project you are assigned.

For example, if you were asked to develop a parser for a new grammar, what programming language would you use? In theory, you could write such a parser in BASIC, Pascal, C, C++, Java, or any of a hundred other languages. But which one would allow you to write the parser fastest, in the fewest lines of code, and with the fewest undetected bugs? To parse a grammar, I'd probably choose lex and yacc, a pair of tools that can turn simple text-file descriptions of a grammar and its effects directly into the C source code for its parser. This not only helps you get the job done as quickly as possible, but also helps localize future changes. If the structure of the grammar is later altered, you need only change the description of the grammar and regenerate the C program.

As embedded systems programmers, we typically have fewer programming languages in our arsenal. (I have yet to encounter the luxury of a Smalltalk compiler for the 8051.) Yet we're still faced with the frequent task of deciding between assembly, Forth, C, C++ (not to mention C with C++-style comments, C with classes, object-oriented C++, and Embedded C++), and a handful of other languages for the implementation of a particular piece of software. How, then, do we make the right decision?

Too often the selection of a programming language involves neither reason nor fact. So I'm pleased to find that Mr. Niemann has based his criticism of object-oriented programming on actual experience with C++ and provided a concrete example for us to discuss. But perhaps we should view his specific criticisms of OOP and C++ from the perspective of improper (or at least sub-optimal) matching of programming methodology and language to programming project. Lex and yacc may be great choices for developing a parser, but they are horrible for device driver work.

Was object-oriented C++ the right choice for the problem Mr. Niemann's team was trying to solve? It isn't really possible for us to judge that from the limited information we have about their project. All we really know is that it was a piece of software embedded within an industrial controller. Without knowing the details of the hardware or what the software does, it's difficult for us to second-guess. Instead, I'll concentrate the beginning of my discussion on where OOP and C++ work best. Then I'll respond to some of the specific issues raised in Mr. Niemann's article.

Benefits of OOP

There's certainly a lot of hype associated with object-oriented programming, but that alone doesn't imply a lack of substance. In fact, we can infer from the success of OOP in the non-embedded world that there must be some tangible benefits that keep programmers (and language designers) coming back to it. Indeed, object-oriented programming has been around long enough that its benefits and trade-offs have been carefully studied and are now well understood.

Most of the benefits of object-oriented programming arise when the program is large or the code will have a long lifetime. That's because OOP offers easier debugging and code maintenance as its principal benefits. This does not mean that an object-oriented program is easier to read than a procedural one; rather, it means that the coupling between modules is significantly reduced. Changes in one module will not affect other modules unless a change is made to the public interface between them. So the implementation details of each class are separated from one another, and debugging can be concentrated at the class level.

In other words, during development (and later, when changes are made for maintenance purposes) you can treat each class as a black box and test it fully by exercising its public interface. This method is the same type of unit testing that hardware designers perform on their integrated circuits. Once it has been unit-tested, a chip or a class can be used as a building block in a larger system. This component can later be replaced by a functionally equivalent upgrade (in other words, one with the same public interface), without causing bugs in other parts of the system. This type of testing is usually not possible with a procedural program. A good programmer can modularize code in such a way that coupling is reduced; however, if it isn't enforced by the compiler, there can be no guarantee.
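
To see what that enforcement buys you, consider a minimal sketch in C++ (the class and its interface are my own invention, not anything from Mr. Niemann's project):

    // A hypothetical message queue; callers depend only on its public
    // interface, so the class can be unit-tested as a black box.
    class MessageQueue
    {
    public:
        MessageQueue() : head(0), tail(0), count(0) {}

        bool push(int msg)                  // returns false if full
        {
            if (count == MAX_MSGS) return false;
            buffer[tail] = msg;
            tail = (tail + 1) % MAX_MSGS;
            ++count;
            return true;
        }

        bool pop(int &msg)                  // returns false if empty
        {
            if (count == 0) return false;
            msg = buffer[head];
            head = (head + 1) % MAX_MSGS;
            --count;
            return true;
        }

    private:
        // Implementation details are hidden. This fixed array could later
        // be replaced by a linked list (a functionally equivalent upgrade)
        // without affecting any code that uses the class.
        enum { MAX_MSGS = 32 };
        int buffer[MAX_MSGS];
        int head, tail, count;
    };

Because the compiler rejects any attempt to touch buffer, head, or tail from outside the class, the low coupling is guaranteed rather than merely hoped for.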

Because the terms “large program” and “long lifetime” are subjective, I should clarify my recommendations for an OOP payoff. By a large program, I mean mostly that there should be opportunities for inheritance (though lines of code and numbers of programmers may be important considerations as well). If you do a full object-oriented analysis and design—as you should before writing any program in an object-oriented language—and find that the tree of objects you'd expected looks more like a field of saplings, you'd probably be better off taking a simpler, procedural approach to the problem. Object-oriented programming works best when there are economies of scale—when similar or related objects share fields and methods, through inheritance. By allowing you to share code, inheritance will save you time during implementation and, in the long run, make your code easier to maintain.
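
As a hypothetical illustration of those economies (the sensor classes below are my own, invented for this discussion), related drivers can share their fields and methods through a base class:

    // Shared fields and methods live in the base class and are
    // written (and debugged) exactly once.
    class Sensor
    {
    public:
        Sensor(int off, int g) : offset(off), gain(g) {}

        long calibrate(long raw) const { return (raw + offset) * gain; }

        virtual long read() = 0;        // each derived class supplies this

    protected:
        int offset;
        int gain;
    };

    class TempSensor : public Sensor
    {
    public:
        TempSensor() : Sensor(-40, 2) {}
        virtual long read() { return calibrate(rawTemperature()); }
    private:
        long rawTemperature() { return 100; }   // stand-in for hardware access
    };

    class PressureSensor : public Sensor
    {
    public:
        PressureSensor() : Sensor(0, 5) {}
        virtual long read() { return calibrate(rawPressure()); }
    private:
        long rawPressure() { return 250; }      // stand-in for hardware access
    };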

The lifetime of a program is also an important consideration. Will the program require changes or enhancements in the future? Might it be ported to another processor or operating system at some point? If neither of these is likely, do whatever you can to get the program done in the allotted time. Then don't look back. If no one else will ever have to read or modify your code—yes, I know there are potential bug fixes to consider here, too—who cares how you implemented it? This is a very real possibility if you're developing software for a simple embedded controller with only a few thousand lines of code. Object-oriented programs typically require a longer design phase and are frequently less efficient than their procedural equivalents. Why make the design and implementation of a program any more complicated than it need be?

Problems with C++

If you can determine that your project would benefit from implementation in an OO language, C++ may not be the best choice. Unfortunately, C++ suffers greatly from having been grafted onto a procedural language. It would be far better to use a language that forces you to stick with the OO paradigms throughout. A hybrid OO/procedural solution has the disadvantages of both—inefficiency, size, complexity, and so on—with none of the advantages of either. As Ian Joyner states in his “Critique of C++”:

Adoption of C++ does not suddenly transform C programmers into object-oriented programmers. A complete change of thinking is required, and C++ actually makes this difficult. Many of C's problems affect the way that object-orientation is implemented and used in C++. [1]

Bjarne Stroustrup has written of the many problems that were encountered when trying to create his OO language from C [2]. Most of these were the result of trying to maintain backward compatibility with existing programs and standards. Such difficulties in the design and use of the C++ language highlight the advantage of abandoning existing technologies when developing new ones. The developers of Java took such an approach—choosing to create a fully OO language that borrows the best features of C's familiar syntax, rather than extending the entire language—and it has paid off handsomely.

Far better choices than C++ are available for implementing an object-oriented design. But, unfortunately, the most popular of these—Java, Smalltalk, and Eiffel come to mind—are not a part of the typical embedded programmer's arsenal. Of the three, Java is the most talked about in our community, but it is still in the early stages of availability for embedded systems. And it remains unclear whether it will ultimately be accepted.

The purpose of abstractions

An abstraction should hide something. Each level of a program should deal with higher-level concepts than the ones below it. For example, most communications protocols are built from several layers, from hardware (lowest) to software (highest). At the bottom is usually a physical layer, which describes the details of an abstract “pipe” through which packets of information from the upper layers pass, on their way to another system. Among other things, the physical layer is responsible for sending and receiving individual bits. But upper layers of the protocol stack don't deal in bits; they deal in packets. Therefore, the physical layer provides an abstraction that hides bits from the layer above. The upper layer sees only a pipe that supports the sending and receiving of entire packets. A packet is a higher-level concept than a bit, and thus a worthwhile abstraction.
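
Here is one way that layering might look in C++ (the names are illustrative only, not drawn from any particular protocol stack):

    // The abstraction the upper layer sees: packets in, packets out.
    class PhysicalLayer
    {
    public:
        virtual bool sendPacket(const unsigned char *data, int len) = 0;
        virtual int  receivePacket(unsigned char *buffer, int maxLen) = 0;
        virtual ~PhysicalLayer() {}
    };

    // One possible realization; it alone deals in individual bits.
    class RS485Link : public PhysicalLayer
    {
    public:
        virtual bool sendPacket(const unsigned char *data, int len)
        {
            for (int i = 0; i < len; i++)
                for (int bit = 7; bit >= 0; bit--)
                    transmitBit((data[i] >> bit) & 1);  // bits stay down here
            return true;
        }

        virtual int receivePacket(unsigned char * /*buffer*/, int /*maxLen*/)
        {
            // ... assemble incoming bits into bytes, return the packet length
            return 0;
        }

    private:
        void transmitBit(int b) { (void)b; /* toggle the line driver */ }
    };

The upper layer holds only a PhysicalLayer pointer; whether the bits travel over RS-485, Ethernet, or a radio is invisible to it.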

But what has been gained by the abstractions in Mr. Niemann's example program? The purpose of any class that wraps a hardware device should be to abstract the functionality of the device—that is, to hide the details of the hardware interaction. The common features of all serial ports should be extracted into a common interface. The details of interfacing to a particular serial controller should be hidden completely inside that class (or one derived from it). It's silly to waste time developing and debugging a piece of software that allows you to do something you can already do without that software.

At first glance, Mr. Niemann's Register class appears to be a good thing. In fact, my own first thought was that this abstraction was useful because it hides the size of the actual register. But I soon realized that it doesn't really do that. The data you pass to the operator still has to be of the same type as the declared register (or cast to it). So the programmer still has to know how wide the register is, if it is signed or unsigned, and so forth. It still takes just as many lines of source code—and additional processor instructions—to accomplish the task. This abstraction adds inefficiency to the process of modifying a register, with no benefit.
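
We don't have Mr. Niemann's listing in front of us, but a wrapper of roughly this shape (my reconstruction, for illustration only, with a made-up register address) exhibits the problem:

    // Hypothetical reconstruction of a Register-style wrapper.
    template <class T>
    class Register
    {
    public:
        Register(unsigned long address) : reg((volatile T *) address) {}

        // Assignment merely forwards the value to the hardware location.
        Register &operator=(T value) { *reg = value; return *this; }

    private:
        volatile T *reg;
    };

    // The caller must still know this register is 16 bits wide and
    // unsigned, which is precisely what a useful abstraction would hide.
    Register<unsigned short> control(0x40001000);

    void setControl()
    {
        control = 0x00FF;   // no simpler than a direct write through a pointer
    }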

The DEVICE class offers no benefits either. It provides a higher-level interface that hides nothing of the interaction with the underlying hardware. Why create a middle man that does nothing but accept data for a particular register and hand it to that register? That's bad coding, and it's also the true source of Mr. Niemann's debugging headaches.
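
Continuing the reconstruction (and reusing the hypothetical Register template from the sketch above), the middle man might look something like this:

    // A DEVICE-style middle man: it accepts data for a particular
    // register and simply hands it to that register, hiding nothing.
    class DEVICE
    {
    public:
        DEVICE() : control(0x40001000), status(0x40001004) {}

        void writeControl(unsigned short value) { control = value; }
        void writeStatus(unsigned short value)  { status = value; }

    private:
        Register<unsigned short> control;   // the Register template above
        Register<unsigned short> status;
    };

Every write now passes through two layers of code, yet the caller knows exactly as much about the hardware as before.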

Two layers of software were created that accomplished nothing, and hid nothing except bugs. The complications of debugging the flaw in the logic were a direct result of these superfluous abstractions. In fact, had the writes to those device registers simply been written in straight C++ (without a wrapper class), there would have been no need for a return statement and therefore, no bug at all. An assumption was likely made at the beginning of this project that using classes and templates is always a good thing. The truth is that they are sometimes useful and sometimes not. Abstractions are only useful when they hide something.

Unfortunately, programmers who work closely with hardware are tempted to think of the devices in the system—serial controllers, LCD displays, and the like—as “objects” themselves. These programmers tend to write classes that reflect all of the ugly details of the hardware, without hiding anything from the software above. In fact, the most useful classes (and abstractions) represent ideals and generics. If the hardware designers later switch from a Zilog serial controller to an Intel, only one class should require changes; none of those changes should affect the public parts of the class. A well-designed DEVICE class provides a more generic interface than the hardware itself.
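
A hypothetical serial driver shows what such a design looks like; if the controller changes, only a derived class is added or modified:

    // The generic interface the rest of the software programs against.
    class SerialPort
    {
    public:
        virtual void init(long baudRate) = 0;
        virtual void putChar(char c) = 0;
        virtual char getChar() = 0;
        virtual ~SerialPort() {}
    };

    // Only this class knows the Zilog part's registers and quirks.
    class ZilogSerial : public SerialPort
    {
    public:
        virtual void init(long /*baudRate*/) { /* program the Zilog registers */ }
        virtual void putChar(char /*c*/)     { /* write the Zilog data register */ }
        virtual char getChar()               { return 0; /* read it back */ }
    };

    // Switching to the Intel part means adding this class; no code
    // that talks to a SerialPort needs to change.
    class IntelSerial : public SerialPort
    {
    public:
        virtual void init(long /*baudRate*/) { /* program the Intel registers */ }
        virtual void putChar(char /*c*/)     { /* write the Intel data register */ }
        virtual char getChar()               { return 0; /* read it back */ }
    };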

Myths and facts

In one section of his article, Mr. Niemann lists some purported myths and facts. I agree with his first three myths: Objects are not needed to (1) protect data, (2) group data and procedures, or (3) implement large programs. Objects are an option that is provided by some programming languages. Saying that you don't need objects is like saying you don't need any language other than assembly. Of course you don't, but increasing levels of abstraction often lead to programs that are easier to write, easier to verify, and easier to maintain. I don't write much code in assembler these days, nor do I shy away from using objects when they provide a useful abstraction.

In response to the fourth myth, I contend that well-written object-oriented programs are easier to debug and maintain than their procedural counterparts, rather than harder. And I think anyone who has ever made a change to one part of a procedural program only to have another, seemingly unrelated part of the program fail, would agree. Encapsulation and unit testing allow programmers to create self-contained building blocks for larger systems. The internals of these blocks can later be changed, without affecting any other part of the software. Object-oriented programming languages make this possible.

With respect to Mr. Niemann's final purported myth, I have yet to encounter any programmer who actually thinks the switch statement is bad. Inheritance is very useful in certain types of applications. And in those application domains, there is no good substitute for it—a switch statement would lead to an unnecessarily complicated solution. Unfortunately, the simple examples found in books, like the geometrical Shape base class and derived Circle and Square, are too trivial to illustrate the full benefits of inheritance and polymorphism. Such examples are meant only to teach the implementation details. Likewise, “Hello” + “world!” is a poor example of the power of operator overloading.
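
Still, the contrast is worth seeing side by side. Here is that textbook comparison in miniature (a generic sketch, not code from either article):

    // With a switch, every new shape means revisiting this function
    // (and every other place the program switches on the type tag).
    enum ShapeKind { CIRCLE, SQUARE };

    struct ShapeRec
    {
        ShapeKind kind;
        double    radius;    // used when kind == CIRCLE
        double    side;      // used when kind == SQUARE
    };

    double area(const ShapeRec &s)
    {
        switch (s.kind)
        {
        case CIRCLE: return 3.14159 * s.radius * s.radius;
        case SQUARE: return s.side * s.side;
        }
        return 0.0;
    }

    // With polymorphism, a new shape is a new class; existing code
    // is left untouched.
    class Shape
    {
    public:
        virtual double area() const = 0;
        virtual ~Shape() {}
    };

    class Circle : public Shape
    {
    public:
        Circle(double r) : radius(r) {}
        virtual double area() const { return 3.14159 * radius * radius; }
    private:
        double radius;
    };

    class Square : public Shape
    {
    public:
        Square(double s) : side(s) {}
        virtual double area() const { return side * side; }
    private:
        double side;
    };

With two shapes, the switch looks harmless; with twenty shapes and a dozen operations, the polymorphic version is the one that stays maintainable.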

As for his facts, I disagree with all but the first. Object-oriented programming does require more time in the design phase. But it does not, as a general rule, (1) require more time to code a solution (in fact, the larger the program, the more likely it is that it will take less time to implement), (2) result in a more complex solution, (3) result in code that is more error-prone, or (4) increase maintenance costs. Readers of Mr. Niemann's piece should be careful about taking these points away as facts. They are instead hastily drawn generalizations, more of the myths that I am afraid are far too often used to select a programming language for a given programming problem.

References

1. Joyner, Ian. “Critique of C++ and Programming and Language Trends of the 1990s,” www.progsoc.uts.edu.au/~geldridg/cpp/cppcv3.html.

2. Stroustrup, Bjarne. The Design and Evolution of C++. Reading, MA: Addison-Wesley, 1994.

Michael Barr is the technical editor of Embedded Systems Programming. He has been developing embedded software for more than five years and has recently written a book entitled Programming Embedded Systems in C and C++ (O'Reilly & Associates). Michael can be reached via e-mail.

Thomas Niemann responds

First, I want to thank Embedded Systems Programming for publishing my article. Not only did they do that, but they're allowing me to have the final word.

I agree with the underlying concepts of OOP. However, I believe that the actual use of this method leads to code that is difficult to maintain. The central culprit is information hiding. What is being hidden is information I need to find to successfully debug an application. Switching to another language will not remedy this situation.

I disagree with Mr. Barr's assertion that the Register class “accomplished nothing.” The class does hide and protect the address associated with the register. I note that this was exactly the information needed when debugging, and thus the class contributed to the $10,000 price tag.

My views on OOP are well known at work. Recently I rewrote some C++ code that was part of a controller. It consisted of two inherited classes, 40 functions, and 1,500 lines of code. The result is written in C and has no classes, six functions, and 300 lines of code. I find it remarkable that, even when confronted with evidence to the contrary, the push toward OO continues. We've just started a large development effort that will be OO-based. Rather than join that effort, I will keep maintaining existing software written in C.

