Quick. Which of these events really happened:
a) Computer worm crashes safety system in Ohio nuclear plant.
b) Virus halts train service in 23 states.
c) Young recluse cracks computers that control California dams.
d) Hacker uses laptop to release 260,000 gallons of raw sewage.
The answer, sad to say, is all of the above. These attacks, and thousands like them, demonstrate that building a secure perimeter around our computer systems is no longer enough. Firewalls, intrusion detection software, and anti-virus programs are all important, but no matter how robust a perimeter they may create, malicious hackers can and will break through.
What we really need is a new approach to designing the systems we want to protect, an approach that can make those systems inherently tamper resistant and capable of surviving assaults. Otherwise, we are simply erecting concrete barriers around a house of cards.
The need for such an approach has been made all the more urgent by a major shift in cyber crime. Yesterday, hackers cracked systems for thrills and notoriety; today, they do it for profit. It's become a full-time job, staffed by dedicated professionals. If a hacker stands to make money by accessing your data — or by threatening to launch a denial-of-service attack on your system if you don't pay an extortion fee — then you're a target.
Worse, these professionals are targeting not only corporate IT servers, but also control and supervisory systems — systems that keep factories running, power flowing, and trains from derailing. An attack on a corporate server might be costly, but an attack on a life-critical embedded control system can be catastrophic. Consequently, such systems are considered a prime target for cyber extortionists.
Truth be told, the principles of creating a design that is inherently survivable and tamper resistant aren't all that new. In fact, many of them were established as far back as the 1970s, when researchers such as Saltzer and Schroeder published seminal papers on the topic.
The surprise is how much — and how long — the software industry has ignored them. This omission goes a long way toward explaining why our servers and desktops are so vulnerable to malicious exploits. It also explains why many embedded systems are equally at risk.
Consider the key principle of least privilege, which states that a software component should have only the privileges it needs to perform a given task, and nothing more. If a component needs to, say, read data, but has no need to modify that data, then it shouldn't be granted write privileges, either explicitly or implicitly. Otherwise, that component could serve as a leverage point for a malicious exploit or a software bug.
As it turns out, the majority of operating systems today are in serious violation of this principle. For instance, in a monolithic kernel such as Windows or Linux, device drivers, file systems, and protocol stacks all run in the kernel's memory address space, at the highest privilege level. Each of these services can, in effect, do anything it wants.
Consequently, a single programming error or piece of malicious code in any of these components can compromise the reliability and security of the entire system. Imagine a building where a crack in a single brick can bring down the entire structure, and you've got the idea.
In response, many embedded system designers are adopting a more modular OS architecture, where drivers, protocol stacks, and other system services run outside of the kernel as user-space processes.
This “microkernel” approach not only allows developers to enforce the principle of least privilege on system services, but can also result in a tamper-resistant kernel that hackers cannot bend or modify.
This approach can also satisfy other requirements of a secure, survivable system, such as fault tolerance (the system will operate correctly even if a driver faults) and rollback (the system will undo the effects of an unwanted operation while preserving its integrity).
When the microkernel is extended with secure partitioning, applications gain guaranteed access to computing resources, in virtually any scenario. The need for such guarantees is especially urgent in the embedded market. Keeping pace with evolving technologies requires the ability to download and run new software throughout an embedded product's lifecycle — in-car telematics and infotainment systems being an example.
In some cases, this new software may be untrusted, an added risk. To address such concerns, a system must guarantee that existing software tasks always have the resources (e.g. CPU cycles) they need, even if an untrusted application or a denial-of-service attack attempts to monopolize the CPU. Properly implemented, resource partitioning can enforce those guarantees, without any need for software recoding or extra hardware.
None of the scenarios I mentioned earlier caused serious harm — with the possible (and pungent) exception of the sewage incident. They do demonstrate, however, the phenomenal trust we place in complex, software-controlled systems, and how vulnerable we become if those systems are compromised. As software designers, developers, and managers, our task, then, is to create systems that are inherently trustworthy.
But trustworthiness isn't simply an add-on layer. It has to be built from the ground up. Start with a software architecture that embraces fundamental principles of security — such as separation of privilege, fail-safe defaults, complete mediation, and economy of mechanism — and you've got a major head start. Fail to do so, and you fight a costly, uphill battle. For proof, consider the endless parade of patches needed to secure our desktops.
When it comes to building secure, survivable systems, what you start with determines what you end up with. Fortunately, the underlying principles we need to embrace aren't unproven or obscure, but simply good, well-accepted programming practices. The groundwork has already been laid; let the next generation of innovative — and secure — systems begin.
Dan Dodge is CEO, QNX Software Systems.