Tool upgrades - Embedded.com

Tool upgrades

Last week’s rant about tools that don’t conform to an OS’s standard elicited a number of interesting replies. Ken Wada wondered how a company should deal with new versions of tools, especially compilers.

I haven’t a clue.

You’re halfway through a project, things are going well, when with great fanfare the vendor releases version 5.0, loaded with bells and whistles that you probably don’t need. But the risk of working with an obsolete or unsupported product tempts you to perform the upgrade.

What’s the right course of action?

Suppose the new release is version 4.01 to correct a couple of bugs reported by other users. The problems haven’t affected your project as yet. To upgrade or not to upgrade… that is the question.

Or the project has been released and is in maintenance. Defect rates are low; the customers are thrilled with the product. But the compiler has become obsolete and unsupported. Is it time to take the attendant risks of doing an upgrade?

We know several things for sure. A change to a compiler’s code generator will generally cause it to emit different binary. The compiled code might be tighter, faster and all around better. But it’s different. And that’s a problem.

When the production line uses checksums to label ROMs or control software versions, a change in binary at the very least means some sort of documentation change to support the production line. Often much more serious ramifications result.

Different generated code may affect the system’s real time performance. A lot of applications use pre-emptive multitasking, which has both benefits and risks. One downside is that there’s no way to guarantee a deterministic response to inputs or interaction between tasks. Different code may alter the system’s determinism, perhaps to the extent of breaking the code.

A new compiler may generate slightly slower or more bloated code. The runtime package will likely change in some unpredictable manner. I’m just back from visiting a company whose product consumes more than 99% of all CPU cycles, and I know of several others where 99% utilization is required since any excess capacity means hardware costs are too high. A compiler upgrade may push the ROM usage too high or performance too low.

Some safety-critical systems must be requalified when the tools change, a hideously expensive process.

Most development teams I know blindly upgrade, offering a prayer or three that nothing substantive will change. And they usually have no problem.

Others refuse to change midstream, using the older product unless trapped by a problem that lacks a workaround. They often archive the old tools for future maintenance.

Though I’m addressing compilers directly, the same peril exists with any package that’s bundled with your code, like protocol stacks, math libraries and the like.

What’s your take? How do you manage changes in tools and included packages?

Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at . His website is .


Tool upgrades are less risky when using Test Driven Development. With TDD the programmer writes automated tests that demonstrate how the code is supposed to work, and TDD results in very high test coverage. For embedded development, the TDD programmer is also likely to run more than one compiler: one for execution on the development system and one for the target. If either compiler changes, we have a test bed to see if the features (and bugs) that we depend on still work the same. We recompile and retest. We now have data to guide our decision: do we use the new or old version? How long should we budget for fixing the compatibility problems?

This also applies to using third-party code. Another practice used when doing TDD is the notion of a learning test. Instead of just reading the documentation and then integrating the third-party code into the application, we first write some automated unit tests to see how the third-party code works. These tests exercise the code the way we expect to use it. Once we have finished the experiments we keep the tests around so we can run them against the next release.

Once we know how the third-party code works we can then see about integrating it with our applications. Often the learning tests result in some useful code being developed; in that case we usually end up encapsulating the third-party code in another module. So writing the tests did not cost much, if anything. They eliminated debug time and made us focus on what is needed from the library. When the third-party vendor releases the next version, the tests pay for themselves again by pointing right to execution differences.

One of our clients switched processors in the middle of an embedded project. The tests helped reduce the risk of that change, and the project stayed on track.

In a prior version of gcc the unit test harness complained of a memory leak. I narrowed the leak down to a static that was being lazily initialized in the stringstream class from the standard library. The most current version of gcc does not exhibit the leak. My unit tests kept me posted on subtle changes.

The tests are no silver bullet, but they can help a lot in reducing the risk of accepting new releases. If you want to know more about TDD for embedded, take a look at this paper: http://www.objectmentor.com/resources/articles/EmbeddedTddCycle-v1.0.pdf

– James W Grenning


Unless the bug fix to the compiler is absolutely required, we always release with whatever version we've tested with, which is not always the latest version. Of course, if you read the EULA that comes with the software, this is a violation of the EULA, and probably should put us all in prison for something. There is something very ungood in a world where I can buy Visual C# for $100 and do almost anything, but XYZ's 8051 compiler is $2,000 and if its engineers or marketeers have a brain fart, I have to recompile everything or return the disks. Are you listening, Keil, IAR, and the rest of you?

– Brad Stevens
Director of Engineering
Accurate Controls


We wouldn't dream of changing the compiler mid-stream through a project–that's suicidal. What's the point anyway, unless there's a bug in it that's stopping you from continuing? How often do you phone up your compiler supplier for technical support? I've never done it in 15 years of embedded development.

The only time we've upgraded a compiler was when a recent project was on course for disaster–it was already using 56k of ROM in a 60k processor, and there were more features to add. That change was inevitable, but the software validation test had not yet been run, so it wasn't disastrous.

We check all the compilers, tools, libraries, manuals, in fact EVERYTHING required to rebuild the product into the Version Control System, so maintenance is no problem.

However, we're now moving over to an agile methodology which encourages full automatic testing of the code, so the impact of a compiler change would be vastly minimised in this environment.

– Paul Hills
Software Manager
Landis+Gyr Ltd
United Kingdom


We do a thorough DVT with regression test on our product to verify new compilers. Even then, certain products (or product families) will continue to use “outdated” compilers because the cost of the full DVT/regression is prohibitive.

Our tools fall under the same Software CM processes as our product.

There might be better ways to handle it, but this one is the way I deal with it. It's really tough sometimes because I do a lot of beta testing for compiler and library vendors.

– Andy Kunz
Sr. Firmware Engineer
Transistor Devices
East Hanover, NJ


I don't upgrade unless I am starting a new project or I have to (bug with no workaround). I archive old tools with the project and I use the original tools for maintenance. If we do a new version of an old product (not just an upgrade or fix) then we will also upgrade to the current production tools–not necessarily the latest.

– Stan Katz
Hopkinton, MA


Selecting a tool vendor that is reputable and has decent technical support (i.e. engineers manning the phones, not salespeople) is essential, and the most important factor in my opinion. Having detailed notes on new releases can help you evaluate whether or not to take the update. Lastly, if you're in a situation where the tool vendor will no longer support the release you are using, you can try to negotiate access to source code. I would only recommend this as a last resort.

– Rick Walsh
Principal Engineer
Harris
Mount Laurel, NJ

