An engineer recently sent me a collection of emails between him and the support desk of a large test equipment provider. He had identified a bug in his scope’s firmware and wanted a fix, or at least a workaround, though the latter seemed unlikely.
The dialog proceeded as you’d expect: he clearly described the problem, and the support tech told him to check the instrument’s settings. That iterated a couple of times with the engineer asking – pleading, really – for the technician to at the very least attempt to duplicate the error condition. Eventually the admission came: there was indeed a bug that made one bit of functionality rather useless. The fix was to toss the very expensive unit and buy the new model. At the advanced age of four years the scope was obsolete and essentially unsupported.
I suspect the company lost a customer.
But this story is part of a larger issue, one engendered by software. How long should a tool be supported? Forever? Or, just until next year’s model hits the streets?
In some cases companies have taken a pretty reasonable approach. Microsoft (dare I say something nice about them?) supported XP for many years, and gave a clear and long warning when the OS would enter the twilight of life. And, that’s a relatively inexpensive product (other than for enterprises, which may buy thousands of copies). Further, consumers’ expectation of longevity for these sorts of products is relatively short. Other consumer products seem to have useful lives measured in microseconds, so long-term support is probably not important. Mobile phones are practically fashion items, discarded as soon as another version appears. (Though my wife’s iPhone 3G is about four years old, and she has no interest in an upgrade. Happily, on a dinghy ride this week it took a wave, so there’s a Siri in her future.)
But a scope or similar tool, which might cost tens of thousands, could reasonably be expected to perform for a couple of decades. I had a Tek 545 thirty years after it was discontinued and it continued to perform well for low-speed applications. But that device, which was composed of about 100 vacuum tubes and not a single bit of digital, was simple enough that feature perfection was expected and not terribly hard to achieve. A modern scope that boots a desktop OS contains millions of imperfect lines of code. Features are very complex and interact in ways that are very difficult to test exhaustively. Probably most of these units are shipped with at least a few quirks.
So how long should the vendor be on the hook for fixes? Forever, and “till the check clears” are both unreasonable ends of the spectrum.
I think that with the long expected lives of these sorts of devices, coupled with their chilling complexity and high costs, a vendor could gain substantial competitive advantage by offering bug fixes for very long periods of time. Such a policy can be expensive, which means wise managers will work even harder to ensure that version 1.0 functions properly.
Which will make their customers even happier.
Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at . His website is .