Reader comments to "How I write software" - Embedded.com

Editor's note: Here are two of the longer comments we received from readers responding to Jack Crenshaw's column “How I write software.” Many more readers responded, but their responses are no longer available on our new web site. At this time, I'm not sure if and when the older comments will be restored and brought over to the new site.
–S.Rambo
Managing editor
Embedded Systems Design magazine

Letter #1
I always enjoy reading Jack Crenshaw's column because we share the perspective that comes from extensive experience developing embedded software. I've been at it for 35 years, starting out like he did by feeding paper tape through a Model 33 ASR. We both migrated into the field from a science-mathematics-engineering background. I worked in applied physics for the first decade of my career. Computer science and computer engineering did not exist as majors when we graduated.

Despite our similar backgrounds, I respectfully disagree with most of his assertions in “How I write software” (June 2010, p.9). He “jump[s] right into the code”, starting by writing the null program. From there on, it's just a matter of filling in the details, right? Well, I contend that there are a few fallacies in his approach.

His article contains numerous instances of the first person singular pronoun, “I.” Nowhere do I see the first person plural “we.” Evidently he always writes his programs alone. I almost never do. I can count on one hand the number of projects that I've developed alone, with a few fingers left over. Perhaps as a solitary developer, he can start out with “void main(void){}”, but what if the three or five or 10 team members all start there? The project would quickly descend into chaos. What's more, I've seldom had the pleasure of starting a fresh project at all. The vast majority of my assignments over the years could be classified as maintenance–adding features, extending functionality, fixing errors, etc. Some of the programs that I've maintained probably started out with void main(void){}. Those [programs] are the hardest ones to fix; some simply must be discarded because every modification attempted has unexpected and unwanted side-effects.

I agree that rigid rules in the development process are undesirable and should be unnecessary, but a completely undisciplined approach is even worse. On the one hand, Jack Crenshaw states that “Law #43 says design it before you build it.” A little farther on, he says that “using my approach, the requirements evolve with the code.” Which is it–design first or build first?

Every engineering project can be divided into two phases–design and implementation. In every other engineering field, engineers do the designing and someone else does the implementation. Civil engineers don't pour concrete or drive rivets; mechanical engineers don't operate machine tools; chemical engineers don't plumb refineries; electrical engineers don't lay out circuit boards. But software engineers do their own implementation–it's called programming. That's both a blessing and a curse. It's a blessing because we don't have to wait days or weeks or months for our creations to be realized; we are able to see our designs operating almost immediately. It's a curse because there's a great temptation to short-change the design phase and go immediately to the implementation (jump right into the code). Generally when a person starts programming, he or she stops thinking. Good design gives way to trial and error. I'm prone to this myself, because programming is more fun than designing. Strong self-discipline is required to resist the temptation. I am convinced that I achieve the best results in my projects by postponing programming as long as possible.

Jack and I have both seen a lot of programming fads come and go, sort of like clothing fashions. The latest are spiral development, agile and extreme programming. They, too, will fade in time because they cannot replace fundamentally sound engineering design. Like Jack, I also test my program modules as I develop them, but I don't delude myself into thinking that testing substitutes for design. Which Mars probe was it that crashed because two modules assumed different force units? I suspect both modules were thoroughly tested; apparently the high-level design, if it existed, wasn't reviewed. I have often heard a poorly written program defended because “it works.” Does that mean that the expected inputs produce the expected output? What about the unexpected inputs? A back-of-the-envelope calculation quickly leads one to conclude that it's impossible to test all conceivable inputs. For even a simple module, the combinations are astronomical. I used to display a placard in my cubicle (when I had a cubicle) with a quote from Edsger Dijkstra that says “Testing proves the presence, not absence, of bugs.”

About seven years ago two colleagues and I embarked on the development of a new product for our employer. The three of us agreed at the outset that we would write and review a design document for each principal module before beginning coding. This was not an elaborate formal waterfall process, but it did cause us to think through the designs before investing a lot of effort in programming. The result was by far the most successful product that the company has ever developed. Sales have exceeded forecasts by an order of magnitude, and at this writing it remains the company's flagship product. Furthermore, the software architecture that we jointly developed has been used as the foundation for no fewer than five other significant products. Many code modules have been reused with little or no modification. Others have been easily extended to meet the added requirements of later products. We worked together so closely as a team that it was difficult to tell where one person's work ended and another's began. We would never have achieved that success if we had “started with code.”

As I wrote once before to the editor, the magazine originally entitled Embedded Systems Programming is now named Embedded Systems Design. When it is finally renamed Embedded Systems Engineering, it will have arrived.

–Jim Berry
Principal, Epsilon Corporation
La Grange, IL

I suspect Jim and I are much closer in our ideas than he may think. We seem to have similar backgrounds and have walked down some of the same roads. Our differences come mainly from the title of the column: “How I write software.” I meant that literally. With apologies to Red Skelton, I meant “How I (me, Jack Crenshaw, not someone else or a project team) write (program, code, not specify, design, test, or validate) software (not requirements analysis, mission or design specs, or test plans).”

If it seemed that I “jump[ed] right into the code,” it's because that's what the column was about. (This month, I write about testing in a similar style.)

Speaking of style, I do write in the first person singular. Mea culpa. It's my style. It's who I am, it's what I do. I'm getting too old to change now. Does this mean that I've never worked as part of a team? Not a bit of it.

Just to be clear, I wasn't advocating that other members of a coding team start a project as I would. I'm not going to tell my team members how they should start on a new project, but the “Hello, world” approach works for me. It helps me put down my design hat and don my coding hat. Why is that a problem? Even if a team of programmers all decided to follow my personal trick, what's the harm in that? Think about it. How long does it take [to write a small snippet of code]? What if we all did it? I disagree that the project would devolve into chaos just because we performed an initial mental exercise to get the coding juices flowing.

Nor was I advocating a “completely undisciplined” approach. I was just talking about my approach to implementation.

I disagree that a person stops thinking when he or she starts programming. Programmers don't stop thinking but just think differently. Designing and coding use different sets of neurons. The thought processes are quite different. That's why, in other writings, I've recommended that, even on single-person projects where you're the designer, developer, and perhaps even customer for the product, it's a good idea to stand up, walk away from the computer, go to a different room, and think.

Now, I've known lots of programmers who claimed they could design at the keyboard, while they're coding. I've also seen their code.

I do indeed disagree with Jim that spiral development, agile, and extreme programming are “fads” that will fade away. He seems to favor the classical “waterfall diagram” approach to product development, which I think is completely discredited. I do agree that the more modern methods carry more risk of undisciplined hacking, and that they encourage hackers to hack with even more gusto.

But the solution, in my opinion, is not to trash the methods. It's to get rid of (or retrain) the hackers.

Bottom line: a software project requires a lot of skills and a lot of coordination between phases. In the column “How I write software,” I wasn't talking about all of them. I was talking about writing software.

–Jack Crenshaw

Letter #2

Hello, Mr. Crenshaw. I have been reading your column for years, and I credit you with my philosophy of having a toolbox of routines, algorithms, methods, etc. at my disposal for use when needed–like the mechanic who reaches into his toolbox for a wrench when he needs to tighten a bolt.

There have been a lot of fads in the software industry, and I think it is because we are looking for the tool that can do it all, even though all problems are different and one size doesn't fit all. Which brings me to the point of why I am writing: I take exception to the following statement you made in the article.

“The waterfall approach is based on the idea that we can know, at the outset, what the software (or the system) is supposed to do, and how.”

There are projects where we do know, at the outset, what the software, and system, is supposed to do and how. For example, I worked on the development of a Model 4 version of an industrial machine. It was very clear what this machine had to do, the same things as its predecessors, Model 1, Model 2, and Model 3.

So why was there a Model 4? Technology, performance and cost. The technology keeps advancing providing better performance and reduced cost. So there is a big gain to be had by changing motors, drives, controllers, etc.

Granted, we should have been able to reuse a lot of the software, but machines designed in the '80s weren't necessarily the beneficiaries of modern software design principles. In this particular case, three programming languages had been used over the four models: Basic, C, and C++, not counting the three or four different PLC versions, with unique ladder logic, that also spanned the models. And let's not forget about the effect management has on a project, not necessarily for the better.

Your toolbox is different from my toolbox, which is different from every other programmer's toolbox. So just because a tool doesn't fit in your toolbox, or you haven't yet had a need to use it, doesn't mean it won't fit in someone else's. Please don't take it that I am arguing for or against the waterfall approach; my point is that there is probably a little good in everything, and it is up to us to determine when the tool fits the job at hand.

That's my two-cents worth. Keep writing the good articles, I enjoy the diversity and depth of the topics you cover!

Sincerely,
Dennis Tubbs
