The boss needs a schedule for the new project. By Friday. Somehow engineering has to design and decompose a system containing perhaps thousands of functions in three days.
That's impossible, of course, so we're lucky to even rough out any design before taking stabs at estimating hours. “Uh, that display handler looks pretty big; better figure three months on it alone.”
Fifty years of software engineering has taught us that people can't estimate a function larger than around 50 lines of code with any accuracy. The bigger the chunk, the worse our guess. Unless you can estimate by analogy (by comparing the new system to one that's quite similar, one for which you've kept accurate productivity records), accurate scheduling means first doing a detailed design, down to the function level.
Fact is, only about three companies in the universe actually do such a detailed design before creating a schedule. And two of them are on Beta Gamma 4.
Though we complain and demand more time to create a meaningful estimate, in most cases the boss helpfully provides the end date. “We need it by the show in June.”
“Yes, massa,” we humbly submit, and trudge off to our cubicles to fire up Microsoft Project. Following our unerring instinct for job preservation, the entire department jiggles triangles around, all the time leaving the most important one, the end date, solidly welded to June 1. What are these people doing? Why, they're creating a schedule that looks somewhat believable, at least in the early phases of the project, even though not one of them has the slightest faith in any of the estimates.
Like the teenager who hides Friday's report card to avoid ruining the weekend, they're postponing the day of reckoning, hoping for a miracle. Mary appears at Lourdes more often than software projects experience a wondrous recovery.
Even when the boss doesn't assign a capricious date our estimates are woefully off since software is intrinsically tough to predict. Some wise developers, scarred from repeated battles in the firmware wars, multiply their best guess by 2. It's a cynical approach that demeans our professionalism. Yet it's often unerringly accurate.
More accurate than I had imagined. Last month the GAO testified before Congress about the state of two big fighter aircraft programs. Both the F/A-22 and Joint Strike Fighter (JSF) are behind schedule and appallingly over budget. In 1986 projections put the F/A-22's development costs at $12.6 billion. Today the GAO projects $28.7 billion. The schedule has grown from 9.4 years to an estimated 19.2 today.
If, instead of an army of accountants stifled by unending Congressional inquiries, a grizzled old embedded systems developer had taken the original projections and simply doubled them per the rule of thumb, he would have been off by only 12% in dollars and 2% in time.
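The arithmetic behind that claim is easy to check. Here's a quick sketch, using only the GAO figures quoted above (the function name is my own, for illustration):

```python
# Sanity-check the "double your estimate" rule of thumb against the
# GAO figures quoted above for the F/A-22 program.

def doubling_error(original, actual):
    """Relative error of (2 x original estimate) versus the actual figure."""
    doubled = 2 * original
    return abs(actual - doubled) / actual

# Development cost: $12.6 billion estimated in 1986 vs $28.7 billion today.
cost_err = doubling_error(12.6, 28.7)
# Schedule: 9.4 years estimated vs 19.2 years today.
time_err = doubling_error(9.4, 19.2)

print(f"cost error: {cost_err:.0%}")  # prints "cost error: 12%"
print(f"time error: {time_err:.0%}")  # prints "time error: 2%"
```

Doubling $12.6 billion gives $25.2 billion against an actual $28.7 billion; doubling 9.4 years gives 18.8 against 19.2.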
With 1.3 million lines of code buried in the fighter it makes sense that an embedded developer should estimate the F/A-22 instead of CEOs and financiers. After all, that code is responsible for a big chunk of the slippages. The avionics were originally required to run for 20 hours between crashes. Such a lofty goal proved elusive so, adopting the delightful acronym MTBAA (Mean Time Between Avionics Anomalies), the DoD now specifies an MTBAA of 5 hours.
Problem is, as of today the code runs only 2.7 hours between crashes. Windows 3.1 looks good in comparison.
In 1996 estimates placed development costs of the JSF at $24.8 billion. Today the guess is $44.7 billion. Dare we say that a number near $50 billion, twice the 1996 projection, will be accurate?
The Agile community, drawing on Gilb's Evolutionary methods, recognizes the futility of scheduling. They iterate development, while constantly updating the delivery estimate. Original numbers are not much better than guesses, but those estimates refine as the product evolves.
It seems laughable, but the final schedule is rock solid on the day the product is ready. Which is exactly the same situation in big systems procurement, like the F/A-22 and JSF.
Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at . His website is .
Actually, I find most man-hour estimates are accurate; the problem is the number of calendar hours to get that time in. Add to that the fact that it takes time to get your head back into a project after a major distraction (like that 2-hour progress meeting where you listened to how the #6 internal tooth lockwasher on the framostat subassembly is only available in stainless steel, increasing unit cost .3 cents, and why aren't you done yet?) along with the minor distractions (Fred or Ema's birthday cake ceremony after lunch), and it's easy to see how doubling the man-hour estimates isn't quite enough. If we then factor in the specification changes that happen (well, we are so late anyway, why not add this feature?) you learn to pile on an extra 20% on top of the already doubled schedule time. But let's face it, companies do not want accurate schedules. They want code faster and cheaper. Unfortunately, they will never do what is required, namely: locking in the specs, locking out the distractions, and leaving us alone.
– Brad Stevens
“… multiply their best guess by 2. It's a cynical approach that demeans our professionalism.”
I disagree. I multiply my estimates by 4, and I am usually very close. The problems I face every day are as follows …
“Oh, we changed the interface protocol …”
“Oh, we moved the image buffer and changed the format from 32 to 16 bits”
“Oh, the new test module uses a different data sequence.”
If people knew what they wanted from the beginning, could document it, and then stick to it, software would be done on time.
It is like saying I want a new car; how much will it cost? Once I specify my options, and a dealer says “your car is ready”, should I be surprised when I say “now I want A/C and power locks, and change the color to …”, and the dealer says “that will take another 2 weeks”?
– Edward Handrich
The real problem with developing accurate estimates of time and cost is that the project would probably never have been approved if they were known at the outset. Once a project is underway, sinking more time and money into the project always seems better than discarding what has already been expended. Unfortunately, this has become a way of life for most of us, even if we know better.
– Michael Weisner
There are ways of improving schedule estimates, but they require measurements of past projects and simultaneous use of multiple estimation methods. Most organizations don't collect effort vs. size data on past projects, so they have no way of predicting effort on future projects. Barry Boehm's books on COCOMO and COCOMO II describe formulas for predicting effort and schedule, but they must be calibrated to the individual organization. Our problem isn't that responsible project management is impossible; it is that we are not trained as project managers and we don't use the tools and techniques that are available. Since we have no data to back up our estimates, we submit to management bullying and agree to ridiculous schedules that are set by marketing and sales.
– Steve Hanka
Those end dates are usually not “capricious” at all, especially if you're making consumer products. If the demand is highest during the Christmas season, there is a real cost in failing to get it on store shelves by November!
The main trick is to train the marketing folks to tell you what they want in time for you to actually build it. It also helps to have a well-stocked code library…
– Mark Lavelle
I was about to quibble with the following statement, “Fifty years of software engineering has taught us that people can't estimate a function larger than around 50 lines of code with any accuracy,” but then I read “Unless you can estimate by analogy – by comparing the new system to one that's quite similar…”
A few years back, I was head of a large but experienced team. We were asked how long it would take to do a large and complex project (~3M LOC). The core of the team had worked together on similar projects. We stuck our finger in the air and said, “We think this will take 2 1/2 to 3 years before we can ship the first one.” (We also did a more formal estimate using COCOMO. Result: 2 1/2 to 3 years.) This, unfortunately, was the wrong answer. Management declared, “Thou shalt deliver in 1 1/2 years.” “Yes, massa”, we said. 2 1/2 years later, after requirements creep, interface changes, reorganizations, experts brought in to “fix” the problem, and replanning meetings too numerous to count, we successfully shipped the first reduced-content version in (drum roll, please) 2 1/2 years. The full-content version shipped 6 months after that.
(One wonders whether the team could have improved on the date if they had been left alone to execute, but this is an experiment that will never be tried. Prevailing management wisdom says, “If we give in to engineering's 'gloom and doom' estimate, it will become a self-fulfilling prophecy.” The same managers will, in the next breath, say that “predictability is very important.”)
In short, I have found that if you have an experienced team that knows its stuff _and_ if (this is a big if) you can keep management from “influencing” the result, it is possible to give estimates that are remarkably prophetic. Sadly, this combination only exists on Beta Gamma 4.
Epilogue: Where is this team today? They are split up and working for other companies after being replaced by an inexperienced team in Canada that had lower labor rates. Estimating performance of the current team? Take their estimates and multiply by 2.
– James Thayer
When I was estimating time to code completion at Rover Group, I took my initial estimate, doubled it, and moved it up an order of measurement (minutes become hours, hours become days, days become weeks, weeks become months). So an initial estimate of 2 hours became 4 days; 1 week became 2 months. I was usually on the high side, but management found it refreshing that I could consistently beat the schedule, if only by a small margin. The reason for needing this adjustment was mostly that writing the code was just the beginning. The review and test would always take longer than the code, usually by at least an order of magnitude. My initial estimate was for writing the code, and the adjustment covered the subsequent activities.
– Paul Tiplady
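Tiplady's heuristic (double the number, then promote its unit one level) is mechanical enough to sketch as code. The unit ladder below is an assumption drawn from his examples:

```python
# A sketch of the "double it and move the unit up one level" heuristic
# described in the comment above. The unit ladder is assumed from the
# examples given: minutes -> hours -> days -> weeks -> months.

UNITS = ["minutes", "hours", "days", "weeks", "months"]

def adjust_estimate(value, unit):
    """Double the raw estimate and promote its unit one level."""
    i = UNITS.index(unit)
    if i + 1 >= len(UNITS):
        raise ValueError("no unit above months in this ladder")
    return 2 * value, UNITS[i + 1]

print(adjust_estimate(2, "hours"))  # prints "(4, 'days')"
print(adjust_estimate(1, "weeks"))  # prints "(2, 'months')"
```

This reproduces both of his examples: 2 hours becomes 4 days, and 1 week becomes 2 months.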