Arm yourself with one tool – one tool only – and you can make huge improvements in both the quality and delivery time of your next embedded project.
That tool is an absolute commitment to make some small but basic changes to the way you develop code.
Given the will to change, here's what you should do today:
1. Buy and use a Version Control System.
2. Institute a Firmware Standards Manual.
3. Start a program of Code Inspections.
4. Create a quiet environment conducive to thinking.
More on each of these later on. Any attempt to institute just one or two of these four ingredients will fail. All four couple synergistically to transform crappy code into something you'll be proud of. Once you're up to speed on steps 1–4, add the following:
5. Measure your bug rates.
6. Measure code production rates.
7. Constantly study software engineering.
Does this prescription sound too difficult? I've worked with companies that have implemented steps 1 to 4 in a single day. Of course they tuned the process over a course of months. That, though, is the very meaning of the word “process”: something that constantly evolves over time. But the benefits accrue as soon as you start the process. Let's look at each step in a bit more detail.
Step 1: Buy and Use a VCS.
Even a one-person shop needs a formal VCS (Version Control System). It is truly magical to be able to rebuild any version of a set of firmware, even one many years old. The VCS provides a sure way to answer those questions that pepper every bug discussion, like “when did this bug pop up?”
The VCS is a database hosted on a server. It's the repository of all of the company's code, make files, and the other bits and pieces that make up a project. There's no reason not to include hardware files as well—schematics, artwork, and the like.
A VCS insulates your code from the developers. It keeps people from fiddling with the source; it gives you a way to track each and every change. It controls the number of people working on modules and provides mechanisms to create a single correct module from one that has been (in error) simultaneously modified by two or more people.
Sure, you can sneak around the VCS, but like cheating on your taxes there's eventually a day of reckoning. Maybe you'll get a few minutes of time savings up front, inevitably followed by hours or days of extra time paying for the shortcut.
Never bypass the VCS. Check modules in and out as needed. Don't hoard checked-out modules “in case you need them.” Use the system as intended, daily, so there's no VCS cleanup needed at the project's end.
The VCS is also a key part of the file backup plan. In my experience it's foolish to rely on the good intentions of people to back up religiously. Some are passionately devoted; others are concerned but inconsistent. All too often the data is worth more than all of the equipment in a building, even more than the building itself. Sloppy backups spell eventual disaster.
I admit to being anal-retentive about backups. A fire that destroys all of the equipment would be an incredible headache, but a guaranteed business-buster is the one that smokes the data. Yet, preaching about data duplication and implementing draconian rules is singularly ineffective.
A VCS saves all project files on a single server, in the VCS database. Develop a backup plan that saves the VCS files each and every night. With the VCS there's but one machine whose data is life and death for the company, so the backup problem is localized and tractable. Automate the process as much as possible.
Checkpoint your tools. An often overlooked characteristic of embedded systems is their astonishing lifetime. It's not unusual to ship a product for a decade or more. This implies that you've got to be prepared to support old versions of every product.
As time goes on, though, the tool vendors obsolete their compilers, linkers, debuggers, and the like. When you suddenly have to change a product originally built with version 2.0 of the compiler—and now only version 5.3 is available—what are you going to do?
The new version brings new risks and dangers. At the very least it will afflict your product with a host of unknowns. Are there new bugs? A new code generator means the real-time performance of the product will surely differ. Perhaps the compiled code is bigger, and no longer fits in ROM.
It's better to simply use the original compiler and linker throughout the product's entire lifecycle, so preserve the tools. At the end of a project check all of the tools into the VCS. It's cheap insurance.
When I suggested this to a group of engineers at a disk drive company, they cheered! Now that big drives cost virtually nothing there's no reason not to go heavy on the mass storage and save everything.
Step 2: Institute a Firmware Standards Manual.
You can't write good software without a consistent set of code guidelines. Yet the vast majority of companies have no standards, no written and enforced baseline rules. A commonly cited reason is the lack of such standards in the public domain. (Appendix A in “The Art of Designing Embedded Systems, Second Edition” includes one possible standard proposal.)
Step 3: Use Code Inspections.
Testing is important, but used alone will lead to products infested with bugs. Testing usually exercises about half the code. The solution is a disciplined program of code inspections.
Everyone loves open source software, mostly because of the low bug rate. Remember the open source mantra: “with enough eyes all bugs are shallow.”
That's what inspections are all about.
Step 4: Create a Quiet Work Environment.
For my money the most important work on software productivity in the last 20 years is DeMarco and Lister's Peopleware (1987, Dorset House Publishing, NY). Read this slender volume, then read it again, and then get your boss to read it.
For a decade the authors conducted coding wars at a number of different companies, pitting teams against each other on a standard set of software problems. The results showed that, using any measure of performance (speed, defects, etc.), the average of those in the 1st quartile outperformed the average in the 4th quartile by a factor of 2.6.
Surprisingly, none of the factors you'd expect to matter correlated to the best and worst performers. Even experience mattered little, as long as the programmers had been working for at least 6 months.
They did find a very strong correlation between the office environment and team performance. Needless interruptions yielded poor performance. The best teams had private (read “quiet”) offices and phones with “off” switches. Their study suggests that quiet time saves vast amounts of money.
Think about this. The almost minor tweak of getting some quiet time can, according to their data, multiply your productivity by 260%! That's an astonishing result. For the same salary your boss pays you now, he'd get almost three of you.
The winners – those performing almost three times as well as the losers – shared the environmental factors described above: quiet, private workspaces, and control over their interruptions.
Too many of us work in a sea of cubicles, despite the clear data showing how ineffective they are. It's bad enough that there's no door and no privacy. Worse is when we're subjected to the phone calls of all of our neighbors.
We hear the whispered agony as the poor sod in the cube next door wrestles with divorce. We try to focus on our work but being human, the pathos of the drama grabs our attention till we're straining to hear the latest development. Is this an efficient use of an expensive person's time?
Various studies show that after an interruption it takes, on average, around 15 minutes to resume a “state of flow” – where you're once again deeply immersed in the problem at hand. Thus, if you are interrupted by colleagues or the phone three or four times an hour, you cannot get any creative work done! This implies that it's impossible to do support and development concurrently.
Yet the cube police will rarely listen to data and reason. They've invested in the cubes, and they've made a decision, by God! The cubicles are here to stay!
This is a case where we can only wage a defensive action. Educate your boss but resign yourself to failure. In the meantime, take some action to minimize the downside of the environment. Here are a few ideas:
1) Wear headphones and listen to music to drown out the divorce saga next door.
2) Turn the phone off. If it has no “off” switch, unplug the damn thing. In desperate situations attack the wire with a pair of wire cutters. Remember that a phone is a bell that anyone in the world can ring to bring you running. Conquer this madness for your most productive hours.
3) Know your most productive hours. I work best before lunch; that's when I schedule all of my creative work, all of the hard stuff. I leave the afternoons free for low-IQ activities like meetings, phone calls, and paperwork.
4) Disable the email. It's worse than the phone. Your 200 closest friends who send the joke of the day are surely a delight, but if you respond to the email reader's “bing” you're little more than one of NASA's monkeys pressing a button to get a banana.
5) Put a curtain across the opening to simulate a poor man's door. Since the height of most cubes is rather low, use a Velcro fastener or a clip to secure the curtain across the opening. Be sure others understand that when it's closed you are not willing to hear from anyone unless it's an emergency.
It stands to reason we need to focus to think, and that we need to think to create decent embedded products. Find a way to get some privacy, and protect that privacy above all.
When I use the Peopleware argument with managers they always complain that private offices cost too much. Let's look at the numbers. DeMarco and Lister found that the best performers had an average of 78 square feet of private office space. Let's be generous and use 100. In the Washington DC area in 1998 nice—very nice—full service office space runs around $30/square foot/year.
Cost of the office: 100 sq ft × $30/sq ft/year = $3000/year
Loaded cost of one engineer: $60,000 salary × 2 (overhead) = $120,000/year
The office thus represents $3000/$120,000 = 2.5% of the cost of the worker.
Thus, if the cost of the cubicle is zero, a mere 2.5% increase in productivity pays for the office! Yet DeMarco and Lister claim a 260% improvement. Disagree with their numbers? Even if they are off by an order of magnitude, the private office still pays for itself 10 times over. You don't have to be a rocket scientist to understand the true cost/benefit of private offices versus cubicles.
Step 5: Measure Your Bug Rates.
Code inspections are an important step in bug reduction. But bugs—some bugs—will still be there. We'll never entirely eliminate them from firmware engineering.
Understand, though, that bugs are a natural part of software development. He who makes no mistakes surely writes no code. Bugs—or defects in the parlance of the software engineering community—are to be expected. It's OK to make mistakes, as long as we're prepared to catch and correct these errors.
Though I'm not big on measuring things, bugs are such a source of trouble in embedded systems that we simply have to log data about them. There are three big reasons for bug measurements:
1) We find and fix them too quickly. We need to slow down and think more before implementing a fix. Logging the bug slows us down a trifle.
2) A small percentage of the code will be junk. Measuring bugs helps us identify these functions so we can take appropriate action.
3) Defects are a sure measure of customer-perceived quality. Once a product ships we've got to log defects to understand how well our firmware processes satisfy the customer—the ultimate measure of success.
But first a few words about “measurements.” It's easy to take data. With computer assistance we can measure just about anything and attempt to correlate that data to forces as random as the wind.
Deming noted that using measurements as motivators is doomed to failure. He realized that there are two general classes of motivating factors: the first he called “intrinsic.”
This includes things like professionalism, feeling like part of a team, and wanting to do a good job. “Extrinsic” motivators are those applied to a person or team, such as arbitrary measurements, capricious decisions, and threats. Extrinsic motivators drive out intrinsic factors, turning workers into uncaring automatons. This may or may not work in a factory environment, but is deadly for knowledge workers.
So measurements are an ineffective tool for motivation. Good measures promote understanding : to transcend the details and reveal hidden but profound truths. These are the sorts of measures we should pursue relentlessly.
But we're all very busy and must be wary of getting diverted by the measurement process. Successful measures have the following three characteristics:
1. They're easy to do.
2. Each gives insight into the product and/or processes.
3. The measure supports effective change-making. If we take data and do nothing with it, we're wasting our time.
For every measure think in terms of first collecting the data, then interpreting it to make sense of the raw numbers. Then figure on presenting the data to yourself, your boss, or your colleagues. Finally, be prepared to act on the new understanding.
Stop, look, listen. In the bad old days of mainframes, computers were enshrined in technical tabernacles, serviced by a priesthood of specially vetted operators. Average users never saw much beyond the punch card readers.
In those days of yore an edit-execute cycle started with punching perhaps thousands of cards, hauling them to the computer center (being careful not to drop the card boxes; on more than one occasion I saw grad students break down and weep as they tried to figure out how to order the cards splashed across the floor), and then waiting a day or more to see how the run went.
Obviously, with a cycle this long no one could afford to use the machine to catch stupid mistakes. We learned to “play computer” (sadly, a lost art) to deeply examine the code before the machine ever had a go at it.
How things have changed! Found a bug in your code? No sweat—a quick edit, compile, and re-download take no more than a few seconds. Developers now look like hummingbirds doing a frenzied edit-compile-download dance.
It's wonderful that advancing technology has freed us from the dreary days of waiting for our jobs to run. Watching developers work, though, I see we've created an insidious invitation to bypass thinking.
How often have you found a problem in the code, and thought “uh, if I change this maybe the bug will go away?” To me that's a sure sign of disaster. If the change fails to fix the problem, you're in good shape. The peril is when a poorly thought out modification does indeed “cure” the defect. Is it really cured? Did you just mask it?
Unless you've thought things through, any change to the code is an invitation to disaster. Our fabulous tools enable this dysfunctional pattern of behavior. To break the cycle we have to slow down a bit.
EEs traditionally keep engineering notebooks, bound volumes of numbered pages, ostensibly for patent protection reasons but more often useful for logging notes, ideas, and fixes. Firmware folks should do no less.
When you run into a problem, stop for a few seconds. Write it down. Examine your options and list those as well. Log your proposed solution (Figure 6.2 below).
Figure 6.2: A personal bug log
Keeping such a journal helps force us to think things through more clearly. It's also a chance to reflect for a moment, and, if possible, come up with a way to avoid that sort of problem in the future.
Identify bad code. Barry Boehm found that typically 80% of the defects in a program are in 20% of the modules. IBM's numbers showed 57% of the bugs are in 7% of modules. Weinberg's numbers are even more compelling: 80% of the defects are in 2% of the modules.
In other words, most of the bugs will be in a few modules or functions . These academic studies confirm our common sense. How many times have you tried to beat a function into submission, fixing bug after bug after bug, convinced that this one is (hopefully) the last?
We've all also had that awful function that just simply stinks. It's ugly. The one that makes you slightly nauseous every time you open it. A decent code inspection will detect most of these poorly crafted beasts, but if one slips through we have to take some action.
Make identifying bad code a priority. Then trash those modules and start over. It sure would be nice to have the chance to write every program twice: the first time to gain a deep understanding of the problem; the second to do it right. Reality's ugly hand means that's not an option.
But, the bad code, the code where we spend far too much time debugging, needs to be excised and redone. The data suggests we're talking about recoding only around 5% of the functions—not a bad price to pay in the pursuit of quality.
Boehm's studies show that these problem modules cost, on average, four times as much as any other module. So, if we identify these modules (by tracking bug rates) we can rewrite them twice and still come out ahead.
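Once bugs are logged per module, spotting the bad few is mechanical. Here's a minimal sketch; the 5% cutoff echoes the figure above, but both the threshold and the module naming are assumptions for illustration:

```python
from collections import Counter

def rewrite_candidates(bug_modules: list[str], fraction: float = 0.05) -> list[str]:
    """Rank modules by logged bug count; the worst few are rewrite candidates.

    bug_modules: one entry per logged defect, naming the module it was found in
    fraction:    roughly the worst 5% of modules, per the data cited above
    """
    counts = Counter(bug_modules)
    # Always flag at least one module, even for small code bases.
    n = max(1, round(len(counts) * fraction))
    return [module for module, _ in counts.most_common(n)]
```

Feed it the module column of the bug log and the perennial troublemakers float to the top, turning the “this function just stinks” gut feeling into data you can show a manager.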
Step 6: Measure Your Code Production Rates.
Schedules collapse for a lot of reasons. In the 50 years people have been programming electronic computers we've learned one fact above all: without a clear project specification any schedule estimate is nothing more than a stab in the dark.
Yet every day dozens of projects start with little more definition than “well, build a new instrument kind of like the last one, with more features, cheaper, and smaller.” Any estimate made to a vague spec is totally without value.
The corollary is that given the clear spec, we need time—sometimes lots of time—to develop an accurate schedule. It isn't easy to translate a spec into a design, and then to realistically size the project. You simply cannot do justice to an estimate in 2 days, yet that's often all we get.
Further, managers must accept schedule estimates made by their people. Sure, there's plenty of room for negotiation: reduce features, add resources, or permit more bugs (gasp). Yet most developers tell me their schedule estimates are capriciously changed by management to reflect a desired end date, with no corresponding adjustments made to the project's scope.
The result is almost comical to watch, in a perverse way. Developers drown themselves in project management software, mousing milestone triangles back and forth to meet an arbitrary date cast in stone by management. The final printout may look encouraging but generally gets the total lack of respect it deserves from the people doing the actual work. The schedule is then nothing more than dishonesty codified as policy.
There's an insidious sort of dishonest estimation too many of us engage in. It's easy to blame the boss for schedule debacles, yet often we bear plenty of responsibility. We get lazy, and don't invest the same amount of thought, time, and energy into scheduling that we give to debugging.
“Yeah, that section's kind of like something I did once before” is, at best, just a start of estimation. You cannot derive time, cost, or size from such a vague statement yet too many of us do. “Gee, that looks pretty easy—say a week” is a variant on this theme.
Doing less than a thoughtful, thorough job of estimation is a form of self-deceit, that rapidly turns into an institutionalized lie. “We'll ship December 1” we chant, while the estimators know just how flimsy the framework of that belief is.
Marketing prepares glossy brochures, technical pubs writes the manual, production orders parts. December 1 rolls around, and, surprise! January, February, and March go by in a blur. Eventually the product goes out the door, leaving everyone exhausted and angry. Too much of this stems from a lousy job done in the first week of the project when we didn't carefully estimate its complexity.
It's time to stop the madness!
Few developers seem to understand that knowing code size—even if it were 100% accurate—is only half of the data absolutely required to produce any kind of schedule. It's amazing that somehow we manage to solve the following equation:
development time = (program size in lines of code) × (time per line of code)
when time-per-line-of-code is totally unknown.
If you estimate modules in terms of lines of code (LOC), then you must know—exactly—the cost per LOC. Ditto for function points or any other unit of measure. Guesses are not useful.
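As a sanity check on the arithmetic, here's the scheduling equation as code. The numbers in the example are purely illustrative; the point is that the function has nothing to work with unless the rate comes from your own measured history:

```python
def estimate_days(size_loc: int, measured_loc_per_day: float) -> float:
    """Solve the scheduling equation; valid only if the rate is *measured*."""
    if measured_loc_per_day <= 0:
        raise ValueError("the rate must come from real, measured history")
    return size_loc / measured_loc_per_day

# Illustrative numbers only: a 6000-LOC project at a measured 40 LOC/day.
days = estimate_days(6000, 40)   # 150 working days
```

Notice how sensitive the answer is: halve the rate and the schedule doubles, which is exactly why a guessed rate makes the whole estimate a guess.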
When I sing this song to developers the response is always “yeah, sure, but I don't have LOC data. What do I do about the project I'm on today?” There's only one answer: sorry, pal—you're outta luck. IBM's LOC/month number is useless to you, as is one from the FAA, DOD, or any other organization. In the commercial world we all hold our code to different standards, which greatly skews productivity in any particular measure.
You simply must measure how fast you generate embedded code. Every single day, for the rest of your life. It's like being on a diet: even when everything's perfect and you've shed those 20 extra pounds, you'll forever be monitoring your weight to stay in the desired range.
Start collecting the data today, do it forever, and over time you'll find a model of your productivity that will greatly improve your estimation accuracy. Don't do it, and every estimate you make will be, in effect, a lie, a wild, meaningless guess.
Step 7: Constantly Study Software Engineering.
The last step is the most important. Study constantly. In the 50 years since ENIAC we've learned a lot about the right and wrong ways to build software; almost all of the lessons are directly applicable to firmware development.
How does an elderly, near-retirement doctor practice medicine? In the same way he did before World War II, before penicillin? Hardly. Doctors spend a lifetime learning. They understand that lunch time is always spent with a stack of journals.
Like doctors, we too practice in a dynamic, changing environment. Unless we master better ways of producing code we'll be the metaphorical equivalent of the 16th century medicine man, trepanning instead of practicing modern brain surgery.
Learn new techniques. Experiment with them. Any idiot can write code; the geniuses are those who find better ways of writing code.
(One of the more intriguing approaches to creating a discipline of software engineering is the Personal Software Process, a method created by Watts Humphrey. An original architect of the CMMI, Humphrey realized that developers need a method they can use now, without waiting for the CMMI revolution to take hold at their company. His vision is not easy, but the benefits are profound. Check out his A Discipline for Software Engineering, Watts S. Humphrey, 1995, Addison-Wesley.)
With a bit of age it's interesting to look back, and to see how most of us form personalities very early in life, personalities with strengths and weaknesses that largely stay intact over the course of decades.
The embedded community is composed of mostly smart, well-educated people, many of whom believe in some sort of personal improvement. But, are we successful? How many of us live up to our New Year's resolutions?
Browse any bookstore. The shelves groan under self-help books. How many people actually get helped, or at least helped to the point of being done with a particular problem? Go to the diet section—I think there are more diets being sold than the sum total of national excess pounds. People buy these books with the best of intentions, yet every year America gets a little heavier.
Our desires and plans for self-improvement—at home or at the office—are among the more noble human characteristics. The reality is that we fail—a lot. It seems the most common way to compensate is a promise made to ourselves to “try harder” or to “do better.” It's rarely effective.
Change works best when we change the way we do things. Forget the vague promises—invent a new way of accomplishing your goal. Planning on reducing your drinking? Getting regular exercise? Develop a process that insures you're meeting your goal.
The same goes for improving your abilities as a developer. Forget the vague promises to “read more books” or whatever. Invent a solution that has a better chance of succeeding. Even better—steal a solution that works from someone else.
Cynicism abounds in this field. We're all self-professed experts of development, despite the obvious evidence of too many failed projects.
I talk to a lot of companies who are convinced that change is impossible, that the methods I espouse are not effective (despite the data that shows the contrary), or that management will never let them take the steps needed to effect change.
That's the idea behind the “Seven Steps.” Do it covertly, if need be; keep management in the dark if you're convinced of their unwillingness to use a defined software process to create better embedded projects faster.
If management is enlightened enough to understand that the firmware crisis requires change—and lots of it!—then educate them as you educate yourself.
Perhaps an analogy is in order. The industrial revolution was spawned by a lot of forces, but one of the most important was the concentration of capital. The industrialists spent vast sums on foundries, steel mills, and other means of production.
Though it was possible to handcraft cars, dumping megabucks into assembly lines and equipment yielded lower prices, and eventually paid off the investment in spades.
The same holds true for intellectual capital. Invest in the systems and processes that will create massive dividends over time. If we're unwilling to do so, we'll be left behind while others, more adaptable, put a few bucks up front and win the software wars.
A final thought: If you're a process cynic, if you disbelieve all I've said here, ask yourself one question: do I consistently deliver products on time and on budget? If the answer is no, then what are you doing about it?
This article was printed with permission from Newnes, a division of Elsevier, Copyright 2008, from “The Art of Designing Embedded Systems, Second Edition” by Jack Ganssle. For more information about this title and other similar books, go to www.elsevierdirect.com.
With 30 years in this field Jack was one of the first embedded developers. He writes a monthly column in Embedded Systems Design about integration issues, and is the author of two embedded books: The Art of Designing Embedded Systems and The Art of Programming Embedded Systems. Jack conducts one-day training seminars that show developers how to develop better firmware, faster.