
A proactive strategy for eliminating embedded system software vulnerabilities: Part 2

There are many differences between some of today's embedded systems and those of days gone by, and some of the underlying changes have significant unexpected consequences. Perhaps the greatest change is that, increasingly, modern embedded systems are based not on closed proprietary platforms but on commodity OS platforms. As a consequence, such embedded systems share all the security vulnerabilities of general-purpose systems built on the same OS.

In this second part of a three-part series, we focus on an alternative to today's approach of developing embedded systems as open systems. In the first part we explored the reasons for the move to the open systems approach, together with some potentially very undesirable consequences. In a nutshell, if you develop your embedded system product on a commodity OS like Windows or Linux, your product will share the security and reliability risks of that commodity OS, as well as some of the vexing support issues that go along with the OS vendor's frequent patches.

For some vendors, these consequences can be a big surprise, especially compared with the not-so-long-ago days when embedded OS obscurity, closed systems, and isolated, friendly network environments were valid assumptions. Not any longer!

A Better Alternative
A better alternative would be to develop on an open system platform but deploy as a closed system (see Figure 1 below). The general idea is to reap the benefits of open systems for development, but avoid the consequences of open systems in deployment.

For a typical general-purpose system, such an open-to-closed-system approach might not work; by definition these systems must remain open to support a changing or growing set of application and infrastructure software products. However, "develop open, deploy closed" could work quite well for embedded systems whose mission and function is intended to remain static after deployment.

In such cases, manufacturers take advantage of all the benefits of developing and deploying embedded systems on an open platform, while ensuring that security vulnerabilities and destabilizing factors are preemptively dealt with in the factory.

Figure 1 – Open-to-closed transition from development to deployment

How would such an approach work to "close" the system at deployment? The overall requirement is a capability to build an embedded system so that even though it is based on a general-purpose open platform, and even though security or reliability vulnerabilities may be detected after it ships, those vulnerabilities cannot be exploited to harm the embedded system product in the field.

While no fully satisfactory solution has been available so far, there are some long-standing ideas that have shown promising research results and have captured recent press attention. These alternative approaches are based on "system diversity" techniques for making each embedded system unique, and they can be delivered as a software solution with little or no maintenance and little or no runtime overhead.

The best-of-both-worlds scenario will look something like this: the manufacturer develops an embedded system on an open, standards-based general-purpose platform. Just before shipping the embedded system, the manufacturer "freezes" the system so that it cannot be modified and no new code can run. In the field, the embedded system performs as intended despite newly discovered platform vulnerabilities.

The general idea behind this is as follows: today, when a "bad guy" discovers the specifics of a vulnerability on a platform, not only does he know exactly how to exploit it, but his exploit will work on every embedded system built on that platform.

In contrast, system diversification adds an additional and fully automatic step to the manufacturing process, through which the software of the manufactured system is perturbed in a special way such that (a) the system continues to function exactly as before, but (b) all known and as-yet-unknown vulnerabilities are "reshuffled" so thoroughly that locating even a single vulnerability amounts to a prohibitively impractical random guessing attack in a very large search space.

Furthermore, each such perturbation is unique to the individual system to which it is applied; hence even if an attacker is extremely lucky and manages once to locate a vulnerability on his "practice" system, the attack will be utterly useless against any of the other diversified (but otherwise identical) embedded systems. This is truly a best-of-both-worlds scenario for embedded system developers, as it provides the convenience of developing on a general-purpose (but essentially insecure) open platform and the ability to manufacture the systems such that they are secure against known and unknown exploits.

First, an impractical example
So how might one go about actually diversifying an embedded system prior to deployment in the field? For illustration purposes, here is a concrete though ultimately impractical approach. Let's say we have finished building an embedded system on an open platform.

Now imagine changing all the executable code resident on the system to use a different instruction set than the native hardware instruction set (but with the same semantics), and executing the code on a virtual machine (which itself uses the native instruction set, of course).

With this setup, diversifying each system amounts to creating a new virtual machine with a new instruction set and translating all software accordingly. As promised, the system will function exactly as before, but it is now immune to viruses, worms, buffer overflow attacks, rootkits, and other attacks that rely on knowledge of the native instruction set. What about instruction-set-agnostic attacks? The answer is that any attack is coded in some "instruction set" and therefore can be dealt with analogously.
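To make the idea concrete, here is a toy sketch in C of what such per-unit instruction-set diversification could look like in principle. The tiny bytecode VM, its opcodes, and the per-unit opcode shuffle are all invented for illustration; a real diversifying toolchain would operate on actual machine code and a far larger instruction set.

/* toy_diverse_vm.c - illustration only: a minimal "diversified" bytecode VM.
 * Each manufactured unit gets its own random permutation of opcode encodings.
 * The same program, re-encoded with that unit's permutation, behaves
 * identically, but injected code built for the stock encoding is gibberish. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT, OP_COUNT };   /* logical opcodes */

/* Build a random mapping: logical opcode -> wire encoding, plus its inverse. */
static void make_opcode_map(unsigned char enc[OP_COUNT], unsigned char dec[256]) {
    for (int i = 0; i < OP_COUNT; i++) enc[i] = (unsigned char)i;
    for (int i = OP_COUNT - 1; i > 0; i--) {              /* Fisher-Yates shuffle */
        int j = rand() % (i + 1);
        unsigned char t = enc[i]; enc[i] = enc[j]; enc[j] = t;
    }
    for (int i = 0; i < OP_COUNT; i++) dec[enc[i]] = (unsigned char)i;
}

/* "Factory" step: re-encode a program from logical opcodes to this unit's encoding. */
static void translate(const unsigned char *in, unsigned char *out, int n,
                      const unsigned char enc[OP_COUNT]) {
    for (int i = 0; i < n; i++) {
        out[i] = enc[in[i]];
        if (in[i] == OP_PUSH) { out[i + 1] = in[i + 1]; i++; }  /* copy operand byte */
    }
}

/* The unit's interpreter decodes with its private table before dispatching. */
static void run(const unsigned char *code, int n, const unsigned char dec[256]) {
    int stack[16], sp = 0;
    for (int pc = 0; pc < n; pc++) {
        switch (dec[code[pc]]) {
        case OP_PUSH:  stack[sp++] = code[++pc];            break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp];    break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]);       break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    /* Logical program: print 2 + 3. */
    const unsigned char program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };

    srand((unsigned)time(NULL));                 /* stand-in for per-unit randomness */
    unsigned char enc[OP_COUNT], dec[256] = { 0 };
    make_opcode_map(enc, dec);

    unsigned char unit_code[sizeof program];
    translate(program, unit_code, (int)sizeof program, enc);
    run(unit_code, (int)sizeof unit_code, dec);  /* behaves identically on every unit */
    return 0;
}

Every unit prints the same result, yet the bytes sitting in unit_code differ from unit to unit, which is exactly the property the diversification argument relies on.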

While the above simplistic approach illustrates the point of system diversity, it is impractical for several reasons, such as losing the OS vendor's support after modifying the OS code, the performance penalty of running everything through a virtual machine, the impracticality of patching the system in the field, and so on. So let's instead look at some practical techniques we can adopt in order to get some system diversity benefits.

Implement a “white list” mechanism
A white list is similar to a "guest list" at a private party. It enumerates all the executable code that is authorized to execute on your embedded system, and will block execution of all other code. The creation of the white list itself is a manufacturing issue, and the list can be prepared by the embedded system vendor.

The enforcement mechanism comprises some run-time code which can be placed into the embedded system, in some cases without modifying the OS, though the details depend on which open platform is being used. Many OS platforms include open APIs for third parties to add new drivers or modules that "hook" or register "callbacks" for specific OS kernel-level events.

For example, system backup and restore solutions employ file-system hooks to do part of their work. For a white list, the implementation concept is simple: use the OS hook to intercept OS events that can result in launching a program, so that your enforcer can "check the guest list."

This function is exactly what anti-spyware tools do today, except that they work from a black list (and hence can miss some programs that really should not run), rather than a white list that completely specifies all the code allowed to run on the embedded system.
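To show the "check the guest list" decision itself, here is a minimal user-space sketch in C. The allow-list entries and paths are hypothetical, and the check is written as a plain function; in a real product it would run inside whatever launch-intercepting hook the platform provides, and the list would normally key on cryptographic hashes of the binaries rather than their paths.

/* whitelist_check.c - conceptual sketch of a white-list "guest list" check.
 * In a real system this decision would run inside an OS-provided hook that
 * fires before a program is launched; here it is shown as a plain function.
 * The entries and paths below are hypothetical examples. */
#include <stdio.h>
#include <string.h>
#include <limits.h>
#include <stdlib.h>

/* Prepared in the factory: every executable the device is allowed to run.
 * A production list would hold cryptographic hashes of the binaries, not
 * just paths, so a tampered file at an approved path is still rejected. */
static const char *const g_whitelist[] = {
    "/opt/device/bin/controller",
    "/opt/device/bin/updater",
    "/usr/sbin/watchdogd",
};

/* Return 1 if the program may run, 0 otherwise ("fail closed"). */
static int exec_is_allowed(const char *requested_path) {
    char resolved[PATH_MAX];

    /* Canonicalize so "../" tricks and symlinks can't dodge the list. */
    if (realpath(requested_path, resolved) == NULL)
        return 0;

    for (size_t i = 0; i < sizeof g_whitelist / sizeof g_whitelist[0]; i++)
        if (strcmp(resolved, g_whitelist[i]) == 0)
            return 1;
    return 0;
}

int main(int argc, char **argv) {
    const char *candidate = (argc > 1) ? argv[1] : "/tmp/dropped_malware";
    printf("%s -> %s\n", candidate,
           exec_is_allowed(candidate) ? "ALLOW" : "DENY");
    return 0;
}

The important design point is that the default answer is "deny": anything not on the factory-prepared list simply does not run.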

Conceptually, this sounds pretty good. Each embedded system product is in effect closed, unable to run new code. Are we done yet? In most cases, no. We can ensure that only specifically authorized code is launched, but alas, in many cases that code contains security vulnerabilities that can be used as launch pads for malicious code. To finish closing the system, this launch-pad risk must be addressed.

Control over executable memory regions
At any time, any code residing on a fielded embedded system can be found to have exploitable vulnerabilities, regardless of whether the code is vendor-written or platform/OS code. A large class of such exploits involves tampering with running code, such as the well-known "buffer overflow" attacks which place new code into memory for execution. Many such exploits can be thwarted simply by disallowing execution of memory regions which are not intended to store code.

While implementations vary, it is easier to implement this protection if the embedded system comprises hardware with direct support for memory execute protection, such as the provision of "no execute" (NX) bits. Otherwise, software emulation of such functionality will have to be implemented, which may or may not require OS code modification.
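As a minimal illustration of the principle on a POSIX-style platform, the C sketch below maps a data buffer without execute permission. On hardware with NX support and an OS that honors it, any attempt to jump into that buffer faults instead of running injected bytes; the buffer and its contents here are purely illustrative.

/* nx_demo.c - sketch: keep data regions non-executable (POSIX mmap/mprotect).
 * On hardware with execute protection (NX/XN bits) and an OS that honors it,
 * jumping into this buffer would fault instead of running injected bytes. */
#include <stdio.h>
#include <stddef.h>
#include <sys/mman.h>

int main(void) {
    /* A writable data buffer, deliberately mapped WITHOUT PROT_EXEC. */
    size_t len = 4096;
    unsigned char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    buf[0] = 0xC3;   /* attacker-controlled bytes land here as data only */

    /* If some component later needs this page executable, that should be a
     * deliberate, audited decision, never the default for data buffers:
     *     mprotect(buf, len, PROT_READ | PROT_EXEC);
     */
    printf("buffer mapped read/write, not executable, at %p\n", (void *)buf);

    munmap(buf, len);
    return 0;
}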

Again, this sounds pretty good, especially in cases where the OS and/or hardware provide direct support. Are we done yet? Not quite. So far, we can ensure that only authorized code can be launched, and once launched it is not capable of serving as a base for new code injected at runtime via security vulnerabilities. However, even such existing pre-loaded code can be repurposed fairly easily, so one more step is needed.

Memory layout randomization
Another class of exploits that tamper with running code includes attacks that do not necessarily place new code into memory, but instead reuse existing system code in an unexpected and potentially damaging way. A prominent category, known as "return to libc" attacks, tampers with process memory to redirect execution in an unintended way.

Taking another step towards comprehensive system diversity, it is possible to protect against many such exploits by randomizing the layout of the process address space in a way that is unpredictable and impractical to guess. While some commodity processors and operating systems provide native support for such randomization, many don't. Therefore, depending on your embedded system platform and how aggressive a protection level you desire, there may be some work required in building the proper OS extensions that enable this (assuming the OS is extensible in this way).
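A quick way to see whether your platform already randomizes process layout is to print a few addresses and run the program repeatedly, as in the short C sketch below. It only observes whatever randomization the platform provides; it does not add any.

/* aslr_probe.c - observe whether the platform randomizes process layout.
 * Run it several times: on a system with address-space layout randomization
 * the printed addresses change between runs; on one without it, they repeat. */
#include <stdio.h>
#include <stdlib.h>

static void probe(void) { /* exists only to give us a code address to sample */ }

int main(void) {
    int on_stack = 0;                    /* a stack location */
    void *on_heap = malloc(16);          /* a heap location */

    /* Casting a function pointer to void* is a common, if not strictly
     * portable, idiom on typical flat-address platforms. */
    printf("code : %p\n", (void *)&probe);
    printf("stack: %p\n", (void *)&on_stack);
    printf("heap : %p\n", on_heap);

    free(on_heap);
    return 0;
}

If the addresses repeat on every run, the platform is not randomizing layout for you, and the OS-extension work described above becomes the relevant path.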

From Factory to Field
So far, we have presented a number of system programming techniques that can be used to implement the "open to closed" approach, where each embedded system product becomes distinct at system build time, with the same open OS base but not subject to some of the OS security vulnerabilities. Are we done yet? Only in some cases. If you're making a product that is never serviced in the field and never needs to change in any way after leaving the factory, then you might be set with the techniques we've discussed.

But for most modern embedded systems, there can often be product requirements for occasional field upgrades to add new capabilities desired by customers. That's really the key point for the practical use of diversity: closing a system in the factory, yet allowing it to be upgraded in the field. That's what we will examine in Part 3 of this article series.

Having outlined some concrete examples of techniques, we will follow in Part 3 with a look at some specific types of embedded systems and gauge the effort involved in implementing each of the outlined techniques, particularly for managing change.

E. John Sebes is the CTO of Solidcore Systems.
