If you don't have enough to worry about, Eric Shufro sent a link to a Scientific American article about security threats embedded in DoD software written by overseas contractors. When a major defense system comprises millions of lines of code, how can one ensure that a bad guy hasn't slipped in a little bit of nastiness or a back door? I suspect a system like the missile defense shield is especially problematic as it's so hard to test.
The article doesn't mention the use of code inspections to look for vulnerabilities. But inspections after delivery, designed just to look for security issues, are terribly expensive and not 100% effective.
Others worry that PCs produced overseas may carry bugs. Not software defects, but hardware and/or software that monitors data streams to look for sensitive information. That may be a bit hysterical, since government computers for classified use aren't connected to the Internet. A bug might ferret out some interesting nuggets, but has no way to send them back to the other side.
Eric asked an interesting related question: virus writers love injecting their bits of slime onto our machines to, mostly, hijack the computers to spew spam. But what if an evil person or government decided to infect our development tools? It wouldn't be too hard to replace stdlib or other library files. Now a runtime routine is sick, perhaps helping their evil overlords send spam or maybe something much more sinister. If the code is written to switch modes at some later time, the problem might not be found in testing.
To do this, a virus would have to execute some code on the PC to start changing libraries. Presumably the antivirus forces would quickly identify the new worm and issue updates to their software to find and quarantine these things. An attack on Visual Studio or another standard PC development tool would be quickly found and removed, as so many people use these products.
But things might be less sanguine in the embedded space. An attack on some 68HC12 compiler, used by a relatively small number of developers, could lurk for a very long time. And be very, very hard to find if it's purposely intermittent.
Some safety-critical standards require verified tools. Update the compiler and someone must re-verify it. However, if a virus surreptitiously tampers with a verified bit of software, will that attack slip through unnoticed?
Perhaps it's time we CRC the tools in the build cycle.
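As a minimal sketch of what that might look like, here is a Python check that could run as a pre-build step: it computes the CRC32 of each tool binary and compares it against a manifest of known-good values recorded when the toolchain was installed. The file names, manifest format, and function names are all illustrative assumptions, not part of any real build system.

```python
# Hypothetical pre-build integrity check: CRC each tool binary and
# compare against a manifest captured when the toolchain was trusted.
import zlib

def crc32_of(path):
    """Return the CRC32 of a file, read in chunks to handle large binaries."""
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF

def verify_tools(manifest):
    """manifest maps tool path -> expected CRC32.
    Returns the list of paths whose checksum no longer matches."""
    return [path for path, expected in manifest.items()
            if crc32_of(path) != expected]
```

A build script would abort if `verify_tools` returns a non-empty list. A CRC only catches accidental or unsophisticated tampering; an attacker who controls the machine could patch the checker too, so a cryptographic hash verified from separate, trusted media would be a stronger variant of the same idea.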
What do you think? Should we layer some level of defense around our development tools?
Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at . His website is .
OK… I think we are being a wee tad paranoid here. However, I do believe your ideas on embedded security, in general, merit some serious consideration and debate.
The most likely scenario, … breaches in security, and undermining of systems:
Us embedded Developers!
More than a few times we resort to creating trapdoors, loopholes, and backdoors. Not to mention unsecured TFTP, Telnet and other such stuff. We do this all in the name of visibility, diagnostics and determining 'the problem' when units are 'acting up' in final test, or 'worse' out in the field!
– Ken Wada
Sr Embedded Systems Consultant
Aurium Technologies Inc
San Jose, CA
Simply checking the tools you use to build the project into your version control system will get around any attack that arrives after the tools were bought. When the software is released, everything is checked out of the VCS first.
We always check in the tool-chain and anything else required to rebuild the project, mainly to ensure we can always rebuild it even if the CDs have been lost years ago.
– Paul Hills
In his Turing Award lecture, Ken Thompson described exactly such an attack on Unix, using a very clever hack: a compromised compiler that reinserts its own back door whenever it compiles itself, leaving no trace in the source code.
A transcript of this speech is available online here: “Reflections on Trusting Trust”, www.acm.org/classics/sep95/
– Corentin Plouet