On Mediocrity

The U.S. Declaration of Independence says “All [people] are created equal.” That’s very true, but we don’t all stay equal. Or, to paraphrase Animal Farm’s pig Napoleon, all people are equal, but some people are more equal than others.

I’m not talking about politics here, though that might be an interesting topic for another day. I’m talking about variances in people’s abilities to produce a product. More specifically, a high-quality software product.

Over the years, I’ve known a lot of software folk – some good, some bad, some mediocre. I’ve known a few who were scarily good, almost to the point of inspired, Twilight Zone class performance. I’ve seen a few who might have been better as cab drivers.

No, wait. That’s an insult to cab drivers.

Over the years, I’ve seen a lot of software – some good, some bad, some ugly. Most of it mediocre at best.

No, wait. According to Defense Department studies of the 1970’s, most of the software was never delivered, or was delivered but never worked. It’s most of the delivered, working software that was mediocre.

The question at hand today is: Is there a correlation between people and their products? Does quality software come from quality people, or can you get quality software from mediocre people? There are experts today who assert confidently that you can, if only we put enough procedures and methods in place.

I’m not so sure. They’ve been saying that for a good 40 years, but I don’t see much evidence of it.

What is quality software?
Before we talk further about the relationship between software and the people who build it, we’d better define what we mean by quality software.

Many years ago, JD Hildebrand, then managing editor of Computer Language magazine, invited me to give a paper at the magazine’s conference. The paper was supposed to address software quality and how to get it.

It was a timely topic; the term was being widely bandied about at the time. Like user-friendliness, modularity, or flexibility, everyone claimed their software had it, but few bothered to define it or back up their claims.

It was familiar ground for me. A little earlier, my colleague Joe Philipose and I had done a similar paper on the term “user-friendly software” (“Toward a Friendly Environment,” Proc. Second Annual Phoenix Conference, IEEE Computer Society Press, Silver Spring, MD, March 14–16, 1983, pp. 527–533).

We noted that the adjective “friendly” is usually reserved for human interactions. So, we reasoned, user-friendly software should leave us with the same feeling we’d get from a friendly human being. Think of it as the ultimate Turing test.

I took a similar approach in defining software quality. I asked myself, what attributes do we see in other, more tangible products that cause us to see them as having that elusive attribute known as quality? I looked at – among other things – cars, pianos, violins, cameras, and stereo equipment.

Think of the Rolls Royce, the Mercedes Benz, the Steinway piano, the Stradivarius. Nobody would argue that these products exude quality, though I do recall one member of the audience who was vocally offended that I didn’t include her Camry.

Parenthetically, I should note that she had a valid point. The Rolls is a quality product, no question about it. It’s also bloody expensive. Her Camry probably didn’t give back much in terms of feeling luxurious, but it provided quality at a reasonable price. That’s important.

There are those who claim, sometimes stridently, that quality software is software that meets its requirement specification. That’s a demonstrably inadequate definition. The specification for a car might say, among other things, “transport a driver and three passengers over a distance of 300 miles.” But a Ford Model A sedan can do that. A King Midget (Google it) could carry two.

It’s not enough to require that a car be able to negotiate, say, a highway cloverleaf at the posted maximum safe speed. We should also expect that, if you exceed that speed, the wheels don’t fly off. In fact, any definition of quality should include the behavior of a product when pushed well beyond its design limits. Squealing of tires is Ok. Turning turtle is not.

Longevity is an issue with consumer products. The Mercedes has it, and Mercedes from the 1930’s often bring auction prices over $1,000,000. So, clearly, do the Steinway and the Strad.

In the end, I came up with a handful of attributes that, to me, defined a quality product. They included

* Greatly exceed specified performance limits
* Respond rationally when pushed well beyond those limits
* Never, ever fail, even when given improper inputs
* Last.

This is clearly not an exhaustive list, but you get the picture.

Continuing with my paper, the next question was, how does one build a quality product?

As I thought about all the analogies – cars, pianos, etc. – I realized that a lot had to do with the people doing the building, and the way they felt about it. When you look underneath a Steinway, you see these exquisite wooden joints, made up from the highest quality woods. I don’t know this for a fact, but I strongly suspect that you would find the same kinds of exquisite joints, even in places that are completely invisible to the customer.

Why are those joints there, and built with such care? They’re there, I think, because the fellow who crafted the joints wasn’t just an assembly-line worker, working at union scale. He was a craftsman, an artisan. An artisan doesn’t create a beautiful joint to impress the buyer, or even his boss. He creates it because it’s the right thing to do. He takes pride in his work, and gets job satisfaction from doing it right.

How are you going to put THAT in a requirements spec?

The software crisis
Unless you’re over 50, you probably don’t remember the Software Crisis that rocked the U.S. Defense Department (DoD) in the 70’s. It was based on two trends.

First, someone looked into the trend of using software in DoD weapons systems, and was alarmed at how fast it was increasing. By 1980, they said, software acquisitions would be 90% of DoD’s entire procurement budget.

Someone else looked into the quality of the software in existing DoD programs, and was appalled. More than 50% of software procured by the DoD was never delivered at all. Another large chunk was delivered, but didn’t work. Yet another chunk worked, but was so difficult to use that it was abandoned.

In the end, only about 3% of DoD-procured software was actually delivered, worked, and was put to routine use.

The other side of the Software Crisis coin was the pool of available talent. With more software, you need more people to build it. And the universities weren’t turning out nearly enough to fill the demand.

Worse yet, as I noted earlier, not all programmers are created equal. Statistical distributions being what they are, the chances are good that most of the new programmers would not be superstars. Most would be mediocre. The greater demand for programmers would likely mean hiring more of the mediocre folk.

So here was the Software Crisis in a nutshell: DoD was going to need more and more software, with much higher quality than it had got so far. To produce this software, it was going to need more programmers – more than the universities could produce. And since they were going to have to drain the labor pool to get those people, the quality of the people was likely to go down, not up.

There seemed to be only one solution to the Software Crisis: you have to get quality software from mediocre people.

Is that even possible? Aye, that’s the question.

The Methodology
Many groups in academia and industry looked into this problem, and did studies aimed at improving both software productivity and quality. The solution, they declared, was the methodology. More specifically, THEIR methodology.

The term “methodology” is a $10 word for “method.” It makes you sound more authoritative when you say it. The idea is simply to define a set of practices that can, if followed, give you a better chance of achieving that elusive thing called quality, even when the people are mediocre at best. As with an assembly line, make the process of building software so regimented, so tightly controlled, that any idiot can follow it and be productive.

Many groups got research grants to develop and refine their particular methodologies. Some of the pilot projects showed really significant improvements in software productivity and reduced error rate – as much as a 10:1 improvement.

Unfortunately, these gains weren’t always realized when the methodologies were used by others. It seems that the people participating in the pilot projects – usually graduate students – were enthusiastic rooters for the methodology, so they tended to work harder, and be more creative, than the average mediocre programmer.

Even so, most organizations adopted one methodology or another, and used them in their software programs. We’ve been using such methodologies to this very day.

One software guru put it this way: Imagine you’re starting at a new company, and your boss comes by and plops a 500-page document on your desk. He says, “This is our company methodology. Read it and study it; it’s what we’ll be using on this project.”

Recalling that the purpose of a software methodology is to get quality software out of mediocre people, what is the manager’s message? Isn’t it that he thinks you’re one of the mediocre ones?

Now, don’t get me wrong. I am not arguing against methodologies. I am by no means advocating that we go back to the bad old days of ad hoc development, programmers as rugged individualists, and completely disorganized processes. I’m just asking the question: Can even the best, most carefully crafted and religiously followed methodology produce quality software from mediocre people?

I’m asserting that it can’t. I’m further asserting that the more stringently we enforce the methodology, the more likely we are to get a mediocre product.

Is that Ok? Is “good enough” good enough? Maybe it is. But it’s still mediocrity.

Quality Assurance
Many organizations have independent departments responsible for Software Quality Assurance (SQA). I’ve worked with a few of them. I’ve chaired quite a few design reviews, and I’ve managed an SQA department. An independent SQA department is justified in order to relieve the software developers from conflicting interests.

Most software developers want their software to be better than mediocre, but they’re also faced with tight schedules, loosely defined specs, and critical managers. In the end, they may be focused more on keeping the boss off their backs than on ensuring quality. An independent QA group can be better divorced from production schedules.

I’ve had mixed results from the QA folk. In one company (Honeywell), the QA group was absolutely wonderful. They saved my buns more than once. For one thing, they were experts at reading and understanding the various specifications, and the things we had to do to meet the contractual requirements on schedule.

Even more important, they were good software engineers in their own right. They actually desk-checked every line of our software, and pointed out many errors that the team members had missed. The team members didn’t always appreciate what they viewed as “snooping” into their software, but I sure appreciated it.

Not all SQA groups are so helpful. In all too many cases, the head QA guy is more bureaucrat than engineer, and often doesn’t even know programming. He’s more focused on checking off boxes in a questionnaire. Did you have the review on schedule? Did the following questions get asked? Have all the action items been completed? If he can say “yes” to all such questions, he can check the little boxes and declare the software to have quality.

What it really has, at best, is mediocrity. More about this in my next column.

(Jack Crenshaw is a systems engineer and the author of Math Toolkit for Real-Time Programming. He holds a PhD in physics from Auburn University. E-mail him at jcrens@earthlink.net.)
