Kent Beck makes a startling claim on page 23 of eXtreme Programming Explained. He says that it is, or at least can be, very cheap to change software, even in big systems that have been deployed for years. According to him this is the “central technical premise of eXtreme Programming.” Yet this flies in the face of decades of research showing that changes made late in the life cycle cost orders of magnitude more than those made early on.
Beck and other agile proponents believe that XP's practices intrinsically lead to cheap changes.
Sure, some changes are indeed very cheap. Need to invert the sense of an input bit? That's a ten-minute job if a driver insulates the peripheral from the rest of the code. Many other changes, such as altering the text of an error message, are equally trivial.
But the embedded world is the realm of limited resources. When we're down to the last few bytes of ROM even modifying that error message might take weeks as we fiddle with, well, everything to free up a bit of memory.
Performance-bound applications lead to similar headaches. Need just a few more CPU cycles to slightly enhance a feature? Development times can soar. The rule of thumb is a 90% loaded system doubles development time (over one at 70% or less). Figure on tripling the schedule in a system that burns 95% of the processor's time.
Sometimes the tiniest change can have huge repercussions. Suppose an input is running into Nyquist limits. A quick edit of a timing parameter doubles the sampling rate, except that it may bog down other performance-bound parts of the program. If the A/D can't handle the increased speed, a hardware respin might be in the works. And where will you store the extra data? How much more time will the analysis code take to process the supersized buffer?
Twenty years ago an outsider recommended using an RTOS on a system I was building, but of course I knew better. As the project grew, interrupts, timers, and a plethora of OS-like functions sprouted, till it became painfully clear that only an RTOS would unsnarl the convoluted mess. The cost to rip out my mistakes and shoehorn in an operating system ate all of our profits on that job. A cheap change? Hardly.
Sure, it's usually easy to edit a function. That's like telling the architect to add a window to a room. Ask him for a mansion on a 10 x 10 lot, though, and expect soaring costs and schedules.
A great design is one that's malleable. Reasonable changes drop in without massive restructuring. But when the code grows organically, without a design (or, as Beck puts it, “the larger the scale, the more you must rely on emergence”), modifications become ever more dangerous.
XPers mitigate risks by a laser focus on constant tests, run automatically to validate each change. I wish more of us would adopt their philosophy of checking everything, all the time, and of writing tests as we build the system's code. Most traditional test techniques exercise only half the code, a rather horrifying statistic when one considers the size of today's programs. A pretty good team will have around a 5% error rate. In a 100,000-line program that's 5,000 bugs. Normal testing strategies ensure half are shipped with the product. I do believe that XP's approach can ameliorate that significantly.
But testing alone doesn't lead to great products. Neither does constant refactoring and unending tinkering with the code. It's fun to edit, recompile and test, which is one reason XP is so seductive. It appeals to the puerile programmer in all of us.
There is a lot to like about XP. But I shudder whenever someone chants “just change it, run some tests, and see what happens.”
(For an interesting take on agile methods, see “People Factors in Software Management: Lessons From Comparing Agile and Plan-Driven Methods” by Barry Boehm and Richard Turner.)
Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at . His website is .
It's easy to find problems with a methodology, or anything else for that matter. The real challenge is finding a solution. Jack, you've been developing embedded software for quite a while. What is your methodology? Why does it (or doesn't it) work?
– Steve Love
Jack replies: Steve, great question. I haven't the time to go into it in gory detail here, but will consider writing it up for a future article. The answer is “it depends.” If I'm doing a bit of Unix scripting or building CGI-BIN code I'll mostly hack at it. Fact is, in those situations either I won't know what I really want till I see something, or I just have to play with those baffling regexes for a while to figure them out.
But for real projects, where the outcome is a product, I use a combination of Feature Driven Development (see “A Practical Guide to Feature-Driven Development” by Palmer and Felsing, a fantastic book) and Tom Gilb's Evolutionary development model. On top of these I layer rigorous use of firmware standards and work product inspections (requirements, design, code, even the silly manual gets inspected). And I've pirated XP's approach to testing, or at least as much of it as is practical for embedded systems.
Well, look at the examples you listed: “we're down to the last few bytes of ROM,” “an input is running into Nyquist limits,” “need just a few more CPU cycles to slightly enhance a feature,” needing to retrofit an RTOS, etc. All those examples are not very common, even in the embedded world, and they are rarer still outside of it.
I don't think XP discourages good design. Plan-driven methods and XP both encourage it; they just emphasize different things. Plus, as you said a long time back, we need to have margins in our designs.
The examples you listed in your article don't have much margin, so those changes would be painful and expensive.
The big bugaboo I've always had in my embedded projects is testing, because the stuff I build generally has half of the program's I/O terminating on sensors or motors. With so much hardware in the loop it's often very difficult to feed in good test vectors.
If you have suggested ways of approaching this issue, I'd love to hear it.
– Tim Wescott
Automated PC-based test programs (e.g., JUnit) simplify code development through test suites.
With embedded development (C/asm) I develop multiple test programs, but the maintenance becomes too expensive.
I think the difficulty of automatically and quickly running full test suites on a resource-limited system is the largest hurdle to embedded XP.
How do others keep test suites current without letting the testing overwhelm the design process?
I understand your points, but I think you're illustrating your ignorance of XP. The colloquial tone in which XP is described often leads to such misunderstandings, and your closing quote is a typical example.
This feedback form doesn't encourage a full discussion, but I'd like to make a couple small points.
1. XP doesn't get you off the hook WRT thinking ahead. If you wait until you have a big mess before refactoring, it's a lot of work. Better to refactor when it's a small mess. That requires that you notice the small mess, but it's also encouraged by having those tests backing you up as you refactor.
And there should be a design–it's just that you shouldn't implement parts of that design that you don't yet need. Design is part of thinking–you should do it all the way through the project.
2. You have to test the stuff that's meaningful–not just the stuff that's easy to test. It's another part of that thinking stuff.
You and I have both seen a lot of engineers who don't do enough thinking. No set of practices will overcome lack of thinking. XP will, however, enable some tremendous advantages when employed with thought.
– George Dinwiddie