Open Source Software for High Reliability Applications: Is it safe?

Increasingly in this modern world, we rely on systems where an error could cause financial disaster, organisational chaos, or in the worst case death. Software now plays a crucial role in these systems: some of the horrible start-up problems at Heathrow Terminal 5 have been attributed to computer 'glitches', for example, while modern commercial airliners depend on complex computer systems to operate safely.

If we go to the necessary trouble and expense, we are actually pretty good at creating near error-free software, provided the specification is very clear. Not one life has been lost on a commercial airliner due to a software error. That's not a bad record.

However, we certainly do have cases of people being killed by software errors, most notably patients killed by excessive doses of radiation from a medical device, and a Japanese worker killed by a berserk robot. Both cases could probably have been prevented by imposing more stringent controls on the relevant software.

Indeed, from the point of view of preventing bugs, we have pretty good technology if we care to use it. In some cases, we can use mathematical 'formal' methods to demonstrate that the code is error-free. Such an approach is being used for iFACTS, the new air traffic-control system for the UK.
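
To give a flavour of what such formal methods look like, here is a minimal sketch, roughly in the notation of SPARK proof annotations (SPARK is the Ada-based language used for iFACTS). The procedure and its conditions are invented for illustration, not taken from iFACTS:

   -- Illustrative only: a specification annotated with a precondition
   -- and postcondition that proof tools can check statically.
   procedure Increment (X : in out Integer);
   --# pre  X < Integer'Last;   -- the increment can never overflow
   --# post X = X~ + 1;         -- X~ denotes the value of X on entry

   procedure Increment (X : in out Integer) is
   begin
      X := X + 1;
   end Increment;

A proof tool can then demonstrate, before the program ever runs, that no execution violates these conditions.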

So perhaps we don't have too much to worry about, and this article may end up being little more than a plea for education, so that the techniques for generating error-free software (for example, the various safety standards used for avionics software) would be more widely adopted.

However, the world around us has changed since September 11th, 2001, and the subsequent attacks on London and Madrid. Now it is not sufficient to assure ourselves that software is free of bugs; we also have to be sure that it is free from the possibility of cyber-attacks.

Any software that is critical is a potential target for attack. This includes the software used to control nuclear reactors, power distribution grids, chemical factories, air traffic control… the list goes on and on.

Safe and secure?
Protecting software against such attacks is much harder than making it error-free. Consider, for example, the important tool of testing. No amount of testing can convince us that software is secure against attack modes that have yet to be devised.

To think otherwise would be to take the attitude that since no one had attacked the World Trade Center for decades, it must have been safe from future attacks. So how do we guarantee the security of software?

In an episode of the American television series 'Alias', Marshall, the CIA super-hacker, is on a plane, clattering away on the keyboard of his laptop during takeoff preparations. When Sydney tells him he has to put his laptop away, he explains that he has hacked into the flight control system to make sure the pilot has properly completed the takeoff checklist.

Just how do we make sure that such a scenario remains an amusing Hollywood fantasy and not a terrifying reality? In this article, we will argue that one important ingredient is to adopt the phrase from the movie Sneakers, 'No More Secrets', and systematically eliminate the dependency on secrecy for critical systems and devices.

The disturbing fact is that the increasing use of embedded computers, controlling all sorts of devices, is moving us in the opposite direction. Traditionally, a device like a vacuum cleaner could be examined by third parties and thoroughly evaluated.

Organisations like Which? in the UK devote their energies to examining such devices. They test them thoroughly, but importantly they also examine and dismantle the devices to detect engineering defects, such as unsafe wiring. If they find a device unsafe, it is rated as unacceptable and the public is protected against it.

But as soon as embedded computer systems are involved (and they are indeed appearing in even lowly devices like vacuum cleaners) we have no such transparency. Cars, for example, are now full of computers, and without access to the software details there is no way to tell whether these cars are 'Unsafe at Any Speed'.

Why is this software kept secret? The easy answer is that nearly all software is kept secret as a matter of course. Rather surprisingly, in both Europe and the USA you can keep software secret and copyright it at the same time; surprising, because the fundamental idea of copyright is to protect published works.

Companies naturally gravitate to maximum secrecy for their products. The arguments for protecting proprietary investment and Intellectual Property Rights seem convincing. The trouble is that the resulting secrecy all too often hides shoddy design and serious errors that render the software prone to attack.

Can we afford such secrecy? I would argue that in this day and age, the answer must be no. First of all, there is no such thing as a secret; there are only things that are known by just a few people. If the only people with access to the knowledge are a small number of people at the company producing the software, and there are some bad guys willing to spend whatever it takes to discover these secrets, do we feel safe?

At a recent hackers' convention, there was a competition to break a Windows, Mac, or Linux operating system using a new, hitherto unknown technique. The Mac was the first to be successfully attacked, in under two minutes.

Freely licensed open source software
In recent years, a significant trend has been far greater production and use of FLOSS (Freely Licensed Open Source Software). Such software has two important characteristics: first, it is freely licensed, so anyone can copy it, modify it, and redistribute it. Second, the sources are openly available, which means the software can be thoroughly examined and any problems that are found can be openly discussed and fixed.

What we need is to establish the tradition that the use of FLOSS is at least desirable, and perhaps even mandatory, for all critical software. Sure, this makes things a little easier for the bad guys, but they were willing to do whatever it takes to break the secrecy anyway. Importantly, it makes it possible for the worldwide community of good guys to help ensure that the software is in good shape from a security point of view.

At the very least, we can assure ourselves that the software is produced in an appropriate best-available-technology manner. If we opened up a television set and saw a huge tangle of improperly insulated wires, we would deem the manufacturing defective. The embedded software in many machines is in a much worse state than this tangle of wires, but is hidden from view.

There are two aspects to the use of FLOSS in connection with security-critical software. First, we gain considerably by using FLOSS tools in the construction of such software. One way software can be subverted is through a compiler that has itself been tampered with in a nefarious manner. Suppose our compiler is set up so that it looks for a statement like:

   if Password = Stored_Value then

and silently converts it to

   if Password = Stored_Value or else Password = "Robert Dewar" then

Then we are in big trouble, and we can't even detect the problem by close examination of the application sources, since there is no trace of it there.
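
To make the effect concrete, here is a small Ada sketch (the function name and the check are invented for illustration, echoing the fragment above). The first version is what the programmer writes; the comment shows what a subverted compiler could silently generate instead:

   -- What the programmer wrote, and what every reviewer of the
   -- application sources will see:
   function Is_Authorised (Password, Stored_Value : String) return Boolean is
   begin
      return Password = Stored_Value;
   end Is_Authorised;

   -- What a tampered compiler could quietly compile in its place;
   -- the back door exists only in the generated object code:
   --
   --    return Password = Stored_Value
   --      or else Password = "Robert Dewar";

No inspection of the application's source text can reveal such a back door; only scrutiny of the compiler itself can.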

Ken Thompson, co-creator of Unix and a key figure in the development of the C language, warned of exactly this kind of subversion in his famous Turing Award lecture 'Reflections on Trusting Trust'. It is far easier to subvert proprietary software in this manner than FLOSS.

After all, early versions of Microsoft's Excel program contained a fully featured flight simulator hidden from view. If you can hide a flight simulator, you can easily hide a little security 'glitch' like the one described above. This would be far harder to do with a compiler whose sources are very widely examined.

The second aspect is to make the application code itself FLOSS, allowing the wider community to examine it. Now Which? magazine could employ experts to look at the software inside the vacuum cleaner as part of their careful evaluation, and reject devices with unsafe software.

The arguments above seem compelling from the point of view of the public, so what's the objection? The problem is that companies are dedicated to the idea that they must protect proprietary software.

An extreme example of this is Sequoia Voting Systems, which refuses to let anyone examine the software inside its machines on the grounds that it is proprietary, threatening lawsuits against anyone trying to carry out such an examination.

Here we have a case where one company is putting its proprietary rights ahead of essential confidence in our democratic systems. The situation with voting machines is perhaps even more critical in Europe, where in some cases, e.g. for the European Parliament elections, complex voting schemes are used, and we depend entirely on computer systems to implement them accurately and without possibility of subversion.

What's to be done? We do indeed have to ensure that the proprietary rights of companies are protected well enough to preserve the incentive to produce the innovation we desire, but this cannot be done at the expense of endangering the public through insecure software.

In most cases, the real inventions are at the hardware level, where traditional protections, such as patents (which require full disclosure), operate effectively. Perhaps innovative forms of copyright protection can provide adequate protection for software, though in most cases I suspect that such protection is not really needed.

Suppose Boeing were forced to disclose the software controlling its new 787 'Dreamliner'. Would this suddenly give Airbus a huge advantage? Most likely not, as you can't just lift the 787 avionics and drop them into an Airbus A350.

Yes, Airbus could probably learn useful things by studying the software, just as they learn useful things by studying the hardware and design of Boeing's planes. If everyone were forced to disclose their software, then this kind of cross-learning would actually benefit competition and innovation.

We can't just hum along on our current path here. The world is an increasingly dangerous place, and the growing use of secret software that is poorly designed and vulnerable adds to that danger. We have to find ways of addressing this danger, and more openness is a key requirement in this endeavour.

Robert Dewar is President and CEO of AdaCore
