
Keep It Clean

Old-fashioned air-cooled Volkswagens required an oil change and valve adjustment every three thousand miles. I change my sailboat’s diesel engine’s oil and all of the oil and fuel filters every 100 hours of operating time. Every season – if I think of it – I change the oil in that lawnmower I loathe so much.

Jiffy Lube has built an international company around providing this sort of preventative maintenance for cars. No one questions the need to regularly invest a bit of time and money to keep engines running well.

It’s a shame we’re not so farsighted in the firmware business.

Code is like the oil in an engine. When it gets dirty – nasty, convoluted, patched into an unmaintainable mess – it gums up the application. Eventually, things get so bad the system seizes – it can no longer be improved since the cost of dealing with the chaos is so high.

Many agile methods push continuous refactoring, or rewriting crummy code. Some suggest we rewrite all code that can be improved. Like world peace, it’s a great concept that isn’t really possible in the grimy trenches of getting a product out the door. Perfection, as desirable as it is, will never override all other considerations.

The second law of thermodynamics says that disorder increases in closed systems. Entropy increases. Programs are no more exempt from this depressing truth than the expanding universe. Successive maintenance cycles increase the software’s fragility, making each additional change that much more difficult.

Software is like fish. It rots. Over time, as hastily written patch after panicked feature change accumulates in the code, the quality of the code erodes. Maintenance costs increase.

As Ron Jeffries has pointed out, maintenance without refactoring increases the code’s entropy by adding a “mess” factor (m) to each release. That is, we’re practicing really great software engineering all throughout a maintenance upgrade… and then a bit of business reality means we slam in the last few changes as a spaghetti mess.

The cost to produce each release looks something like (1+m)(1+m)(1+m)…, or (1+m)^n, where n is the number of releases. Maintenance costs grow exponentially as we grapple with more and more hacks and sloppy shortcuts. This explains that bit of programmer wisdom that infuriates management: “the program is too much of a mess to maintain”.

But many advocate starting release N+1 by first refactoring the mess left behind in version N’s mad scramble to ship. Refactoring incurs its own cost, r. But it eliminates the mess factor, so n releases cost something like (1+r) + (1+r) + …, or n(1+r), which grows only linearly.
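
To make the contrast concrete, here’s a small C sketch (the 25% mess factor and 10% refactoring overhead are illustrative numbers, not measured data) that prints each release’s relative cost under both models:

    #include <stdio.h>
    #include <math.h>

    /* Illustrative parameters: a 25% "mess" penalty compounding per
     * release versus a flat 10% refactoring overhead. Real values
     * vary by project. */
    #define MESS_FACTOR   0.25
    #define REFACTOR_COST 0.10
    #define RELEASES      10

    int main(void)
    {
        printf("release  no refactoring  with refactoring\n");
        for (int n = 1; n <= RELEASES; n++) {
            double messy = pow(1.0 + MESS_FACTOR, n); /* (1+m)^n: exponential */
            double clean = 1.0 + REFACTOR_COST;       /* (1+r): constant per release */
            printf("%7d  %14.2f  %16.2f\n", n, messy, clean);
        }
        return 0;
    }

By release ten the messy curve costs roughly nine times what the first release did, while the refactored cost per release never budges.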

Luke Hohmann calls this “post release entropy reduction.” It’s critical we pay off the technical debt incurred in being abusive to the software. Maintenance is more than cramming in new features; it’s also reducing accrued entropy.

It’s sort of like changing the oil on a regular basis to keep things humming smoothly along.

Jack G. Ganssle is a lecturer and consultant on embedded development issues. Join him for his “Better Firmware Faster” seminar in Dallas and Denver in April.


Unfortunately, the same people who asked for the features that make the code look messy are always knocking on our door asking when the next one is going to be done. When we say we have to take time to “change the oil,” we usually get an answer like “if that oil was good enough for grandpa, it’s good enough for me.”

– tom mazowiesky


I was really glad to see you raise this point. We see it all of the time. There are two more observations that I would like to add. First, the most successful programs are the ones that get the most changes. They end up being in the worst shape. An unsuccessful project leaves its code frozen in a relatively good state. Ironic, but true. Second, there is always strong management opposition to refactoring. When an engineer proposes refactoring, he/she has to confess that “working” code will be altered in the process and that it is therefore possible to introduce bugs in previously vetted areas. This makes management crazy and usually kills the refactoring proposal.

– Jim Gibbons


so…

How many times have you replaced the main sheet or the rigging on your boat?

The best results have been achieved when one makes a real effort at 'future-proofing' the code/system/framework/architecture.

This does not mean loading the system with so many features that it caters to every single lurking possibility.

What this does entail is a different kind of thinking. The key lies in the fact that one needs to start thinking about the 'customer' of the code. In general, this could be the original author (many months or years later) or other engineers.

A couple of key issues that arise with this form of thinking are:

1. Usability: the components must be built in such a way that the developers of the future (your customers) are willing to use them.

2. Habits: components must be designed in such a way that said future customers can use their individual styles and practices, (with your crafted components), in a way that 'feels' comfortable for them.

Doing the above alleviates many 'refactoring' headaches, and allows people to quickly implement systems for new products via a simple extension method.
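
As one possible illustration of this style (the UART driver and all names here are hypothetical, not from Wada's own work), a component can hide its internals and let future 'customers' hook in their own behavior through callbacks rather than by editing the component itself:

    /* A hypothetical "future-proof" component: extension happens
     * through a registered callback, so future users can apply their
     * own styles without touching the component's internals. */
    #include <stdint.h>
    #include <stdio.h>

    typedef void (*rx_handler_t)(uint8_t byte, void *ctx);

    typedef struct {
        rx_handler_t on_rx;   /* customer-supplied extension point */
        void        *rx_ctx;  /* whatever context the customer wants */
    } uart_t;

    void uart_set_rx_handler(uart_t *u, rx_handler_t fn, void *ctx)
    {
        u->on_rx = fn;
        u->rx_ctx = ctx;
    }

    /* Called from the (omitted) receive interrupt service routine. */
    void uart_deliver_byte(uart_t *u, uint8_t byte)
    {
        if (u->on_rx)
            u->on_rx(byte, u->rx_ctx);
    }

    /* A future customer plugs in a handler, in their own style: */
    static void my_logger(uint8_t byte, void *ctx)
    {
        (void)ctx;
        printf("rx: 0x%02x\n", byte);
    }

    int main(void)
    {
        uart_t u = { 0 };
        uart_set_rx_handler(&u, my_logger, NULL);
        uart_deliver_byte(&u, 0x41); /* simulate a received byte */
        return 0;
    }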

– Ken Wada


Of course you know why the code is never refactored. It's because management wants to make an incremental change at absolutely minimal cost. To minimize cost means not touching the documentation (requirements, design, interface). So the S/W Eng does the minimum they can. They have virtually no control. I have in my hands right now such a project to 'upgrade'. But on a second point, your article is preaching to the choir. Could you re-direct it to a management-level trade journal?

– Phil Gillaspy


Is there some way to pound this important point into management's thick head? I am stuck maintaining code that dates back to the mid-'90s. It is clear from this code base that the original code was just fine. However, over the past ten years paper bag after paper bag has been stapled onto it, creating an unmaintainable mess that is suffering an awful performance hit. In one case a last-minute “feature” led to an unexpected bug; this forced a panic-mode patch that in turn caused part of the “feature” to stop working, which in turn caused another panic patch. We know this because it is documented in the history file. The lesson here is cost. This nonsense cost almost a year and God knows how many dollars. It seems that management prefers death by a thousand cuts to fixing the damn problem and making it go away.

– Mike Wood


Re your recent article “A call for modern compilers” in Embedded Systems Design, I offer not a solution but a brief description of how I've been dealing with “source code” for at least 15 years. I had not thought of a description of it until I read your article.

Like you, I've always found that “pure text” source code is downright nasty to work with and particularly non-communicative. Your article's description of the problems goes right to these points. And, like you, I've had to (had the pleasure to) work with multiple processors (high-level, embedded, DSPs, etc.), compilers, assemblers, and IDEs simultaneously on single projects as well as parallel projects. Swapping twixt mind-numbing editors was difficult, and those with some interesting feature only pointed out the shortcomings of the others.

Other good programmers have found specific non-aligned (third-party) editors that they use to produce all of their source code, and naturally got me to try them. Every single one of these editors was, at best, only a compendium of the “best” features of other editors, compilers, and IDEs. The most interesting, a modern version of vi, was itself “programmable” so that specific text strings (keywords, etc.) would be rendered in a different color, or font, or italicized, etc. But bluntly, still mind-numbing to me.

So perhaps 15 years ago, when Microsoft Word finally became easier to use than Scripsit, I “solved” my source code issues by, in effect, adding another layer to the whole process. I.e.:

I create the “true source” code in MS Word, where I provide a consistent overall header area that includes any/all pertinent information with all of the needed, and useful, background, purpose, change, uses, affects, support, etc., info regarding each source segment. Then in the body, along with the code, I can use whatever appropriate font, color, underscore, yadi, yadi, yada, comments to fully (really fully) make clear what is going on, why it's being done that way, and make sure “gotchas” are noted for myself or whoever has to maintain the code. For example, think of the notes required to make TI fixed-point DSP code compile and actually work.
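
In plain C, the kind of header area he describes might look something like this (a sketch; the module, fields, dates, and initials are invented for illustration, not his actual template):

    /*****************************************************************
     * Module:     motor_ctl.c               (hypothetical example)
     * Purpose:    Closed-loop speed control for the drive motor.
     * Background: Replaces the open-loop version shipped in rev 2.1.
     * Uses:       Called from the 1 kHz control ISR; depends on adc.c.
     * Affects:    PWM compare registers TIM1_CCR1..TIM1_CCR3.
     * Gotchas:    Gains are Q15 fixed-point; overflow is NOT checked,
     *             so keep |error| below 0x4000.
     * Changes:    2006-02-10  DK  Initial version.
     *****************************************************************/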

In this way I have consistent, clear, fully documented True Source level “code”. Of course the problem is that no compiler can use this “true source” as input. So I have to save this True Source code as the living document, but also have to save a working “text only” version that will (not quite) work with the compiler of the moment. Since Word's “text only” output isn't really text only, I am forced to open and resave the text-only version with something, like MS WordPad, that will strip off the hidden residue from Word.

This “text only” file is never, ever edited, saved or used for any purpose other than compiler input. While to some programmers this is the real source code I treat it as the unfortunate required intermediate step required because of dumb compilers. The last unfortunate requirement of this system is that any tweaks / updates applied within the IDE must be entered into the True Source document and another cycle performed to ensure that the True Source really is.

Overall the above method has worked for years and even the very earliest code is readable to a brand new programmer and is actually a teaching tool to bring them quickly up to speed as to what the specific product is about, how it works, and how the code works – even if the relevant compiler and IDE have grossly changed along the way. Anyone can maintain any code with the only requirement being that they know enough about the specific target processor to be productive.

Your article brought up additional issues I had not thought about. Now I will.

– Dwight Kitchin


Good article, but the agile methodologies that encourage refactoring stress that it goes hand in hand with a full automatic test suite, so that as ragged (but working) code is beautified, its functionality can still be assured.
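
A minimal sketch of that safety net in C (the checksum routine and assert-based harness are hypothetical, just to show the shape of a regression suite run before and after beautifying):

    #include <assert.h>
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* The function being refactored; the tests pin down its behavior. */
    static uint8_t checksum(const uint8_t *buf, size_t len)
    {
        uint8_t sum = 0;
        while (len--)
            sum += *buf++;
        return (uint8_t)(~sum + 1); /* two's-complement checksum */
    }

    int main(void)
    {
        const uint8_t pkt[] = { 0x01, 0x02, 0x03 };

        /* These assertions must pass both before and after the
         * refactoring; if the rewrite changes behavior, the suite
         * catches it immediately. */
        assert(checksum(pkt, sizeof pkt) == 0xFA);
        assert(checksum(pkt, 0) == 0x00);

        puts("all tests passed");
        return 0;
    }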

– Paul Hills
