
The Grid Comes to Embedded

I spent two days at the Green Hills Technical Summit in Santa Barbara, CA last week, and learned a lot. For example, Oprah reportedly has a $40m house in Santa Barbara, which, looking at the town's unbelievable opulence, probably bought her a nice shack.

On the embedded front, Green Hills introduced version 5 of their “Multi” development environment. Of all of the new release's features, the one that most fascinated me is distributed builds, though their addition of a static analyzer (“DoubleCheck”) is also compelling.

The company realized that compilations eat valuable programmer time. Press “build” and then, well, wait. Usually the build takes just enough time to frustrate but not enough to let you go off and do other, useful work. So the office is full of developers staring at the ceiling, waiting, and burning expensive payroll dollars.

That hardly seems productive.

Yet the computers in any engineering lab are idle most of the time. A 3 GHz CPU, quarter terabyte of disk and a gig or two of RAM are, well, doing little more than downloading the latest OS patches, and waiting for the user to press a key.

Multi's distributed build identifies machines on the network that are more or less idle and parcels compilation tasks to these processors. Each computer completely compiles a single module and returns the object file to the developer's machine. This is rather like SETI@home, LHC@home, and other grid networks where users offer up their computer's free time for the benefit of others. Of course, Green Hills' version does require that each machine has a Multi license.
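For readers who want to experiment with the idea without a commercial toolchain, the free distcc tool (mentioned in a comment below) works along the same lines. A minimal sketch, assuming gcc, GNU make, and hypothetical helper machines named buildbox1 and buildbox2 on a hypothetical local subnet:

    # On each idle machine, start the distcc helper daemon
    # (restrict it to your own network):
    distccd --daemon --allow 192.168.1.0/24

    # On the developer's machine, list the helpers and let make
    # fan the individual compiles out through distcc:
    export DISTCC_HOSTS='localhost buildbox1 buildbox2'
    make -j8 CC="distcc gcc"

As with Multi's scheme, each helper compiles whole translation units and ships the object files back; preprocessing and the final link stay on the local machine.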

The company claims typical reductions in compilation times of 30 to 80%. Yet when I watched a side-by-side Linux recompilation on single- and four-node clusters it looked like the latter was about 3 times faster than using a single machine.
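A back-of-the-envelope Amdahl's-law check squares those two observations, if you assume (my assumption, not a measured figure) that roughly 90% of a large build parallelizes and the rest, dependency scanning and the final link, stays serial:

    \text{speedup} = \frac{1}{(1-p) + p/n} = \frac{1}{0.1 + 0.9/4} \approx 3.1 \quad (n = 4,\ p = 0.9)

The same serial fraction caps the benefit at a factor of ten no matter how many helpers join, which is consistent with diminishing returns beyond a handful of nodes.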

But how often does one recompile Linux?

During normal development we typically do some testing, uncover a problem, and then change one or two modules. There's probably little advantage to distributed compiles. But in maintenance things change. According to research performed by Eugene Beuchele of Encirq, on average 43% of the code changes from version 1.0 to 2.0. A lot of modules are affected, so speeding up the compilation could have some real benefits.

Waiting for a tool wastes time. We've pretty much maxed out benefits from cranking up workstation clock rates, so it makes sense to spread the compute burden over many CPUs. Intel's multicore strategy appeals but so does the idea of hijacking idle machines.

Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges.


This isn't anything new. In terms of distributed compilation, distcc has been around for quite some time and is free. You are limited to gcc, but nowadays that isn't so much a limitation as a feature.

Website can be found here: http://distcc.samba.org/

– Brian Padalino


Note one risk of distributed compilation: you might get nondeterministic behavior. When you compile on a single machine, the order of compilation, and thus the time stamps on the files, is typically the same each time, so the files will be linked in the same order. But in a distributed compile the time stamps will be more variable, leading to a different link order.

And for most linkers out there, this can have subtle but significant effects. A friend of mine was bitten by this. About one build in fifty, his program showed different timing behavior because of small differences in compilation and linking introduced by the distributed build. And this caused a lock-up at start on a multiprocessor machine. Not a nice one to debug…
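One vendor-neutral mitigation is to make the link command line itself deterministic, so the objects reach the linker in the same order no matter which helper machine finished first. A minimal sketch in GNU make, with hypothetical file names; this does not make the compiler deterministic, it only removes link-order variation:

    # Sort the wildcard expansion so the object list, and therefore the
    # linker command line, is identical on every build:
    OBJS := $(sort $(wildcard build/*.o))

    firmware.elf: $(OBJS)
    	$(CC) $(LDFLAGS) -o $@ $(OBJS)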

I hope Green Hills has done this “right”, but that is very difficult to get right in practice. A completely deterministic compiler and linker across multiple host types is quite hard to achieve.

– Jakob Engblom


Hmm, so for the money I would have spent on the extra Multi licences, could I have just spent that on upgrading my PC to one with a faster CPU and more memory, and that would speed up everything, not just compiles?

– Niall Murphy


Nice idea. However, I believe most people will stick to their independent machines, with networked revision control. I can see this technology benefiting large projects, the ones with hundreds of modules. Usually, in projects and teams such as these, you will have a small team that maintains version control, releases, bug tracking, quality and, of course, the build.

If you have resources like this (and money, of course), then one can definitely afford the extra costs incurred by distributed builds (extra licenses, extra machines, etc.). I do not see marketing or sales being too happy with the engineers 'over there' “stealing their CPU MIPS”!

– Ken Wada


Jack, Jack, Jack… As EVERYONE knows, Oprah's house is in Montecito, not Santa Barbara. Folks in Montecito look at Santa Barbara as the poor folks up the road.

– Mark Edards


Jack, you neglected the positive side effect of waiting a minute or two for the static analyzer, compile and link. The moment of reflection is often useful. On more than one occasion I've realized I needed to investigate an overlooked assumption as I was waiting the short time for the compile to finish.

And if you need to compile more quickly than every few minutes maybe you are making too many changes with too little reflection on what you are actually doing. A few dozen compiles can easily save you ten minutes with a pen and paper.

– Robert Adsett


Jack, a great topic and some interesting responses, as well. As Robert, above, wrote about the positive effects of reflection, here is one from my mind & keyboard as I am backing up my source code. However, I am not sure the reflection itself is a positive one; it is an experiential one.

What if there were only one compile, version 1.0, and then straight to production (this applies to silicon synthesis as well)? What would we need to change, if we could, from the current modus operandi?

The oversimplified model we deploy is: theory -> test -> evaluation of results. While money is still in the bank, repeat until success is achieved.

How about injecting some honesty with full disclosure? Would that help us describe and understand our predicament?

Take a brain surgeon and a commercial airline pilot for the following comparison (one of Jack's past ideas). What would you think of a doctor who welcomed you to his office with the words, “You know, I am still learning. I have not done this type of surgery without some snafu; some of my previous patients cannot talk, walk or swallow (bad compiles). Hopefully, you will be fine.” Or, while you are getting comfortable in your airplane seat, the announcement comes: “Ladies and gentlemen, this is your captain speaking. I am still learning to fly and have not landed a plane without some glitch (bad compiles); let's hope I get it right this time. Hopefully, this trip will be a safe one.”

If we accept the fact that everything we do is learning, then it is [much] easier to accept the numerous iterations of the compiler's output. After denial comes acceptance, so they say. But what if your code is bug-free and it is the silicon your code runs on that is not: a prefetch buffer that doesn't prefetch, a link register whose contents are off by two, or a JTAG ID that isn't what it should be?

A bad compile.

Statistically, however, the majority of the things we do, do work. Otherwise we would not be here.

On the day we compile with quantum computers networked to others in different galaxies, the result will appear faster than the racing thought in one's head about the forgotten option switch '-fupc-threads-N'.

“D'oh!” (http://en.wikipedia.org/wiki/D%27oh%21)

Meantime, I am quite busy thinking about what to compile. How about you?

Wishing you and your readers many productive compiles in the New Year of 2007!

Sincerely and with a smile

– Roger Lynx
