- “It depends on what the meaning of the word 'is' is.”
- “It depends on what the meaning of the word 'is' is when you say 'it's done'.”
–Wise managers to innumerable engineering teams
- “Where is it?” Wolfgang Amadeus Mozart's patron asks. Mozart points to his head. “Here, it's up here,” he explains. “Now it's all just scribbling. Scribbling and bibbling and bibbling and scribbling.”
–scene from the movie Amadeus.
The smartest and most productive developer who ever worked for me had his own definition of “it is done.” Once he understood every nuance of the problem, once he'd structured the solution in his head, the code was “done.” After that it was simply taking dictation. Like Mozart, he composed the entire intricate structure of the code in his head and then merely transcribed his thoughts into the computer. Also like Mozart, or at least the Mozart we know from the wonderful movie Amadeus, his transcription was nearly error-free.
But like the movie-Mozart who spent many long nights feverishly writing his scores while hiding from angry customers, that developer, too, always grossly underestimated the effort needed to go from a brain full of design to a working implementation.
Nothing is more infuriating to a manager than hearing that a system or component is “done,” and then later finding out that “done” does not mean completely working and tested. A wise manager realizes “done” means different things to different developers, so develops a questioning strategy to elicit the system's true status.
“Done” is a particularly problematic concept with traditional software engineering approaches. In most of these the bulk of the testing, and certainly the integration phase, is deferred till late in the project. When management wants to lop off a few weeks from the schedule, the end-phase–the testing–gets whacked. So no one really knows if the system works correctly.
Every agile methodology, though, stresses testing über alles. No change, no matter how small, has any significance till the test suite runs. Any change, no matter how big, is considered “safe” once the test suite validates it. “Done” means the code is written and tested. That profoundly alters the landscape of software development. It holds every modification, of every function, method, subsystem, or complete system, to the highest of standards: it works. “Done” indeed means done, without any Clintonian obfuscation.
Unfortunately, the automatic testing required by agile methods is tough in the embedded world. A user might have to press buttons or observe a display. Systems that interact with the real world might actuate something, yet have no feedback to enable an automatic check to ensure the right thing happened. Some developers build test harnesses to simulate the I/O. Frameworks like Catsrunner (www.agilerules.com/projects/catsrunner/index.phtml) help. Some commercial companies, like Virtutech, offer solutions. But testing remains a thorny problem for firmware. I think the next great breakthrough in software engineering will be some product or approach that makes automated testing more tractable.
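As a sketch of the harness idea, here is a minimal host-side example in Python; the `FakeGpio` class and `blink` routine are invented for illustration and are not from Catsrunner or any real framework. The point is only this: replace the real I/O layer with a fake that records what the firmware logic asked it to do, so a test can check the outcome without anyone pressing buttons or watching a display.

```python
class FakeGpio:
    """Stands in for a real GPIO driver; records every write."""
    def __init__(self):
        self.history = []

    def write(self, pin, level):
        self.history.append((pin, level))

def blink(gpio, pin, times):
    """Device logic under test: toggle a pin 'times' times."""
    for _ in range(times):
        gpio.write(pin, 1)
        gpio.write(pin, 0)

# Automated check of the actuation sequence.
gpio = FakeGpio()
blink(gpio, pin=13, times=3)
assert gpio.history == [(13, 1), (13, 0)] * 3
print("blink test passed")
```

The same trick works for serial ports, ADCs, or displays: anything behind a driver interface can be swapped for a recorder, and the recorded history becomes the thing the test suite asserts on.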
Then we'll know that done means done.
Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at . His website is .
Again a very pragmatic problem!
One of my teammates in multimedia development comes around and tells me that JPEG decoding is done and working. But when I ask for the test report, all there is to see is a single image that it was tested with! How in the world can I release a product tested with a single test case?
This is the situation that frustrates me, my manager and my customer most in every project.
Now, we are demanding a test report from each team and getting it reviewed by a peer for missing test cases. We are putting in the effort to automate all testing. We have picked a subset of “acceptance” test cases that MUST be fully automated (some scenarios with simulations are always inevitable, but at least we can be more confident about working code and breaking code).
My guess is that at least 60 to 65% of testing can be automated in most systems I worked on.
Integration is another problem, since most module owners seem to be in a hurry to release their code. So many times “DONE” has meant that the coding is done and it can be integrated (without a bit of unit testing!!)
Getting test reports done seems tough, since engineers usually have an aversion to making such reports 🙁
I am now looking for tools to make this process interesting for engineers… Is there a better way?
– Saravanan T S
Testability is certainly the trick, isn't it? It doesn't take much of a project before the range of possible input behaviors exceeds all hope of comprehensive test cases.
A year ago I wrote a module for Windows DirectShow to extract some information from a DV camcorder's raw data stream as it went by. This was just proof-of-concept work, it worked with the camcorder I had, and I knew the task was possible. (One definition of “done.”) This year I started product development on a project that needed this. I went back and reworked the module to admit the wider range of possible input data I cared about according to the IEC 61834 DV specs, so that it worked with all of the camcorders I could lay my hands on. (A much better definition of “done.”)
Just two days later somebody tried it on still another camcorder and it didn't work: the camcorder was providing data which did not quite match the spec, so it failed my validation code. Now: what isn't “done” here? I could argue that this camcorder wasn't done because it didn't match the spec. But that's not helpful: my customer might have one of these. Naturally I changed my validation code to allow for the observed violation: that is, I expanded my test suite to include a previously invalid input. Is my code “done” now? Sure… until we find the next unexpected input.
Software systems rarely operate in a vacuum. There are usually other pieces beyond our control with which we have to work, whether it be data flows or analog inputs or user expectations. Testing may be done when our code handles everything we expect… but expectations are so often trumped by harsh reality.
– Mark Bereit
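The pattern Mark describes, strict validation against a spec plus an explicit allowance for deviations observed in the field, can be sketched roughly as follows. The value ranges and names here are invented for illustration, not taken from IEC 61834; the idea is that a “previously invalid input” becomes a deliberate, documented entry in the test suite rather than a silent relaxation.

```python
# Field values seen in real devices but absent from the spec.
KNOWN_DEVIATIONS = {0x00}

def validate_header(pack_type):
    """Accept spec-legal values 0x60-0x6F, plus known deviations."""
    if 0x60 <= pack_type <= 0x6F:
        return True
    if pack_type in KNOWN_DEVIATIONS:
        return True          # tolerated: a shipping device emits this
    return False

assert validate_header(0x62)      # spec-legal input
assert validate_header(0x00)      # previously "invalid", now tolerated
assert not validate_header(0xFF)  # still rejected
print("validator ok")
```

Keeping the deviations in a named table, instead of widening the legal range, preserves a record of which “done” was revised and why.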
In my experience, the definition of “done” is established by the manager. Managers are constantly looking at the next project. Therefore, for them, “done” means that they've assigned the work. This puts a huge burden on the engineer to constantly remind the manager that a great deal of work lies between being assigned work and being “done”.
– Michael Turner
Funny analogy! Clinton's issue was the difference between “is” and “was”, and to carry this on, a developer will often say “it's done” before it's been tested, which often reveals bugs and shortcomings resulting in it not being “done” anymore. It gets to a state of “was done” (until someone broke it by testing it!).
Another thing, these days we can't get away with saying any software/firmware job is done until all the documentation is complete and signed-off, including reviews and minutes. The actual coding is rather insignificant in this process.
– John Davey
I have an old favorite theory that if a piece of software is not being maintained, it is not being used. Therefore, under this premise, “done” means that the code is obsolete.
– Dennis Ruffer
I once inherited a “done” project from a SW guy who wanted to move on to a new program. He had convinced management that all that was left was some documentation and release effort. Imagine the surprise of the program manager when I told him that “done” simply meant the code compiled–but had not been tested–and on my first day I had uncovered a Pandora's box of run-time errors…
– paul calvert
With apologies to the Oracle's child prodigy in the original Matrix, “There is no done”.
Similar to manufacturing's Learning Curve, continuous improvements to the hardware, software, & firmware occur until product EOL. That's why I won't buy a first-year car, or a gadget that can't be flashed.
So, the real problem is how to decide when it's good enough. “Third Level If” (TLI) seems like a reasonable compromise between scheduling, finance, liability, warranty risks, etc. TLI means the defect requires at least three things to occur in a particular order for the defect to manifest: e.g., if you twist this dial CW, and if the tide is rising, and if the database has not been updated, then your canoe tilts 15 degrees. Obviously you need more levels for life-critical applications, like keeping a cruise liner from tilting.
Tracking the defects & categorizing the IF levels helps make the Good Enough decision.
– J Slama
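J Slama's “IF level” bookkeeping could be tracked as sketched below; the data layout and the release gate are hypothetical, just one way to make the Good Enough decision mechanical: tag each open defect with the number of independent conditions it needs, and release only when every open defect sits at or above the chosen level.

```python
# Each defect records the conditions that must all line up for it to show.
defects = [
    {"id": 101, "conditions": ["dial CW", "tide rising", "db stale"]},
    {"id": 102, "conditions": ["power glitch"]},
]

def min_if_level(open_defects):
    """The lowest IF level among open defects (None if the list is empty)."""
    if not open_defects:
        return None
    return min(len(d["conditions"]) for d in open_defects)

def good_enough(open_defects, threshold=3):
    """TLI gate: release only if every open defect needs >= threshold conditions."""
    level = min_if_level(open_defects)
    return level is None or level >= threshold

assert not good_enough(defects)   # defect 102 needs just one condition
assert good_enough([defects[0]])  # the 3-condition defect passes TLI
print("TLI gate ok")
```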
You are never “Done” with any project. You better start looking for another job if that happens. Every product/service that needs to be built will need new features, will have to solve new problems, will generate more work.
When an engineer says “It's done”, most of the time it just means that a milestone has been reached. If a manager does not take the time to understand what that “done” means, we need to better educate managers to help them understand this. A manager needs to create a framework where they can measure somebody's work and deliverables and add a known correction factor if something goes wrong.
You are never done!!
– AB CD
Since you talked about agile development, another “agile” methodology that has picked up some steam lately is “Continuous Integration.” See:
Basically, code is integrated and test scripts are run continuously. This has become common practice in the Java world. Can we, embedded developers, do it?
– Mohamad Yusri
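One minimal sketch of such a continuous-integration cycle for an embedded project follows; the build commands are placeholders, and the runner is injectable so the gating logic itself can be checked without real hardware or a real toolchain:

```python
import subprocess

def integrate(run=subprocess.run):
    """One CI cycle: build, then test; any failure breaks the build."""
    steps = [
        ["make", "firmware"],   # placeholder cross-build command
        ["make", "test"],       # placeholder test-suite command
    ]
    for cmd in steps:
        if run(cmd).returncode != 0:
            return False        # red build: the change is not "done"
    return True

# In a unit check we stub the runner instead of shelling out.
class Ok:   returncode = 0
class Fail: returncode = 1

assert integrate(run=lambda cmd: Ok()) is True
assert integrate(run=lambda cmd: Fail()) is False
print("ci sketch ok")
```

For firmware the test step would typically invoke a simulator or an instrumented target board rather than run the suite natively, but the gate is the same: nothing is integrated until the suite is green.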
Depends on who did it. I do not like testing and debugging. Therefore I write code that needs little testing and debugging. To some this may sound like heresy. Unlike Mozart, my scribbling and bibbling are concurrent with arranging and rearranging the problem in my head. But when the clean simple solution reveals itself, the project is done. Testing and debugging will not take long. Absent an understandable solution, testing and debugging will never end. Though occasionally the real world intervenes in the form of hardware and software that does not behave as documented. Then there is testing and debugging to figure out how it really works as opposed to how it claims to work.
– David Lititz
We use an in-house developed testing framework. It is a Python framework that can simulate all I/O in the system. We connect this to a Windows build of the software, with drivers replaced by connections to the Python framework. There are still issues with timing, since Windows threads are used. It would be a lot better if we could run the target software in a virtual machine.
– Pr Larsson
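One way around the wall-clock timing problem Pr Larsson mentions is to drive the simulation from a virtual clock instead of OS threads, so every run is deterministic. Here is a hedged sketch; all class and method names are invented, not from any particular framework:

```python
import heapq

class VirtualClock:
    """Schedules callbacks in simulated time; no real threads involved."""
    def __init__(self):
        self.now = 0.0
        self._events = []   # heap of (fire_time, seq, callback)
        self._seq = 0       # tie-breaker so callbacks never get compared

    def call_at(self, when, callback):
        heapq.heappush(self._events, (when, self._seq, callback))
        self._seq += 1

    def run_until(self, t_end):
        """Fire all events up to t_end, advancing simulated time."""
        while self._events and self._events[0][0] <= t_end:
            when, _, cb = heapq.heappop(self._events)
            self.now = when
            cb()
        self.now = t_end

clock = VirtualClock()
fired = []
clock.call_at(5.0, lambda: fired.append("sensor sample"))
clock.call_at(2.0, lambda: fired.append("timer tick"))
clock.run_until(10.0)
assert fired == ["timer tick", "sensor sample"]
print("virtual clock demo passed")
```

Because time only advances when the test says so, race conditions with the host's thread scheduler disappear, and a ten-minute timeout scenario runs in microseconds.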