A Framework for Considering Security in Embedded Systems

The need for security in many embedded systems is not always readily apparent, and too many embedded systems designers pay too little attention to the subject, despite the increased wired and wireless connectivity of such designs and the potential hazards that connectivity represents.

In this article, I will suggest some reasons you should pay more attention to this issue and provide you with a simple framework for getting started in addressing security in your designs, even when you don't think you need to.

I'm sure you already know that embedded systems form the dominant basis of computing today. I like to think of them as insects. Most people feel that the human animal dominates the Earth, when in fact it is the ubiquitous insect: there are nearly 200 million of them for every one of us, and our very survival as a species depends on them. Aside from the ones that are pests, however, insects go about much of their lives unnoticed. In the same way, though the average person remains largely unaware of the existence of embedded systems, we all rely completely on these small, deceptively simple, and increasingly wired and wirelessly connected insectoid devices to run our most critical services.

All this increased connectivity is a major source of security weaknesses. Embedded systems used to be blind ants, but these days the ants are required to communicate with the beetles, and do it all inside a beehive. With my insect analogy now thoroughly over-extended, what I am referring to is the fact that today's systems are more interconnected. A sensor with a physical readout on it was sufficient ten years ago; today's sensors may be part of an ad hoc 802.11 wireless network (Figure 1, below), or they may report their readings back to a central management console over TCP/IP. All these new lines of external communication represent new attack vectors for hackers. The more ways in, the more exploitable your device may be.

So what am I saying? That in addition to knowing about rate monotonic analysis and priority inversion, we now need to be IT experts as well? Actually, that wouldn't help in most cases. Security issues often stem from system-level design decisions, and have nothing at all to do with whether someone configured a firewall appropriately or whether the device was designed to withstand attacks from the Internet.

It doesn't even always have to do with software. Where I live, atop nearly every intersection with a traffic light there is a little sensor that detects transmissions from “Mobile Infrared Transmitters.” MIRTs are used by emergency vehicles to preempt the normal traffic-actuated lights and sensors in an attempt to remove any possible traffic backups. “Hobbyists” have created their own MIRTs, which is not only illegal (as far as I can tell) but could obviously create major traffic messes in congested areas. The bottom line is that the sensors have no way to tell a “valid” MIRT in an ambulance from a bogus transmitter in some random person's car.

Of course, there are plenty of cases of “consumer level” hacking against individual devices as well. A major camera vendor once made two versions of their product with similar hardware, but the less expensive version lacked some of the features of the more expensive model. Hackers realized they could “update” the firmware on the cheaper model, effectively getting capabilities they did not pay for.

There are some indications that the problem is getting worse. A report from the United States General Accounting Office (GAO) suggests an “escalation of the risks of cyber attacks against control systems.” This is supported by other reports from the British Columbia Institute of Technology, as well as the general increase in information security vulnerabilities reported by CERT and other IT-centric entities.

This actually makes complete sense, and understanding why will help you avoid some security-related pitfalls. It all breaks down into the fact that bigger, more complex systems have more security weaknesses, and that designers and customers of embedded systems have been making some bad assumptions (see Embedded Soapbox).

Important facts about embedded security
Before I get into the framework that I suggest using to address all these issues, I need to give you two important facts that are absolutely critical from here on out:

1) Security is not all about encryption. It's also about policy, procedure, and implementation. Case in point, encryption based on a secret key is only as good as the policy that controls access to the key.
2) Secure code alone does not a secure system make. You must consider security at each phase of the process, from requirements to design to testing, and even support.

If I can convince you to do nothing else, print out these two items and post them on the bathroom doors (everyone eventually goes there). They are my security mantras. They are also fundamental to working within my suggested framework.

A Framework for Evaluating Security
What to do about all the issues that I have brought up so far would take me much more than one article to cover, but here is a basic framework within which to start considering the security of your device:

1) Environment: Determine the assumptions, threats, and required policies for the environment you are designing the device to operate in.
2) Objectives: Determine your device's security objectives. Consider the data (assets) or operation it will protect and which threats from step 1 require countermeasures.
3) Requirements: Determine your functional security requirements.

Each component is used in determining the elements of the next, as shown in Figure 2, below. Working in this hierarchy will prevent unnecessary security requirements, which occur more often than you would think. For example, a device may have a requirement to encrypt all event messages. Many designers just toss encryption in as a substitute for actual security. If you define your environment to be a closed network, or perhaps if your device is running in a car with eight-byte messages on a CAN bus, you may be placing unnecessary demands on your device. Unless you can trace a requirement back to something about your environment, you are just adding processing or data transmission overhead.

This framework is essentially a super-condensed form of the Common Criteria. The CC is an international effort to standardize a way to evaluate the security posture of any product, though it is most often applied to IT systems like firewalls or desktop computer software. It is not an evaluation per se, but rather a method for evaluating. I won't go into the details or the specific criteria language, but you can find out more at the CC Website.

Suffice it to say that keeping even a small subset of the CC in mind, even if you do not intend to submit for an evaluation under the criteria, can result in a more secure, more stable, and safer product. For those of you already working under security requirements, you can apply this framework around more specific requirements such as FIPS, or environments such as SCADA or the use of a TPM.

Putting the Framework to a test
The best way to walk through what I suggest is by example. Let's imagine a fictitious robotic temperature sensor that must open a cooling valve on one of three vents. A basic diagram of the device appears in Figure 3, below.

For the purposes of this example, we'll use the general model of the development life cycle: requirements, design, implementation, then testing. The need to apply the framework at certain points in the cycle is sometimes very subtle, so I'll highlight some of those points as we go.

Consider the Environment. Perfect security is nearly impossible, but an appropriate level of trust and security assurance for a given environment is completely achievable. When you work on a yellow, bumble-bee-shaped calculator meant to be sold in the impulse-buy section next to the candy bars at the grocery store, you instinctively don't have the same level of concern about security as you do when thinking about our robot sensor. That instinctive gut feel exists because you are rightfully thinking about the environments the two would be used in. So it all comes down to assumptions about the environment.

From the diagram in Figure 3, we can see that our device is connected to a TCP/IP network in order to send messages to a logging terminal. I think we can assume that the sensor will be used in some kind of industrial environment (as opposed to a kitchen appliance or a car). That means it's going to be tough to determine the cost associated with a security incident, because we don't know anything about where and exactly why the sensor is deployed. We can assume, however, that several people, some of whom may be hackers, will have physical access to the device and the network, and that they will be motivated to hack it. If nothing else, our sensor could be used as a “zombie” to send spurious network traffic and disrupt other devices. We need to consider what impact the environment has on us, as well as what impact we have on our environment.

During the requirements phase, take the time to list all of these assumptions. At each phase, take some time to revisit the list and make sure you haven't made any new assumptions. For example, we assumed hostile hackers would have access to the network. If a developer working on the TCP/IP stack fails to think about Denial of Service (DoS) attacks, malformed packets, and the myriad TCP/IP attacks documented all over the Internet, then the developer has made a bad assumption, namely that the IT department would take care of all this, which contradicts our original assumptions.

Speaking of developers, they need to remember these assumptions, and even include them as comments with the source code, at the module level. There is little arguing the virtues of software reuse, but reuse is also a major introduction point for security vulnerabilities. The developer for our sensor may grab some TCP stack or SNMP code that was perfect for its original product, but that has subtleties like requiring a memory manager to zeroize buffers at allocation time. Not doing so could cause sensitive data to leak out over the network, as in the earlier user login and password example. Software reuse in a secure environment is a tricky, delicate thing that merits more consideration than I can give it here. Please keep this in mind.
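Here is one way a developer might close that particular gap when the reused stack assumes zeroized allocations but the target platform's allocator does not provide them. This is a minimal sketch in C under that assumption; secure_alloc and secure_free are illustrative names, not part of any particular stack or RTOS.

    /* Hypothetical wrapper allocator: zeroize memory when it is handed out
     * and wipe it again before it is freed, so a reused heap block never
     * leaks stale data (keys, credentials, prior packet contents) into an
     * outgoing buffer. */
    #include <stdlib.h>
    #include <string.h>

    void *secure_alloc(size_t len)
    {
        void *p = malloc(len);
        if (p != NULL)
            memset(p, 0, len);        /* clear whatever the previous owner left behind */
        return p;
    }

    void secure_free(void *p, size_t len)
    {
        if (p == NULL)
            return;
        volatile unsigned char *v = p; /* volatile discourages the compiler from removing the wipe */
        while (len--)
            *v++ = 0;
        free(p);
    }

A module-level comment as simple as “this stack expects zeroized allocations” is what turns an implicit assumption into something the next developer can actually verify.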

Fill in the details
When we are all done with our list from this step in the framework, we should have all the following factors filled out with as much detail as possible:

1) Intended usage – e.g. as part of an industrial assembly line, connected to a closed network
2) Possible consequences of attack – e.g. line stops, explosive decompression and injury
3) Access policies – e.g. no public access physically, password on device to be managed by IT
4) Possible attackers – e.g. disgruntled employees, industrial hackers
5) Threat vectors – e.g. hacking the firmware through debug port, network attack
6) Assets that require protection – e.g. secret keys, operational data
7) Motivations for attack – e.g. industrial espionage or sabotage

What are your security objectives? The objectives are derived from the list we made when considering our environment in Figure 3. Although our objectives may be more specific and result in a longer list than the one from the previous step, each objective should trace back to something about our environment. Otherwise, why is it an objective? Go through all of the items from the “Environment” step and list the associated security objective. At this point, don't worry about exactly how you are going to accomplish any of this; just stay focused on what it is you need to accomplish. By way of example, we listed “hacking the firmware through the debug port” as a threat. So now it's decision time for our associated objective. We can decide either:

A) The organization operating the sensor is responsible for ensuring that no unauthorized persons have physical access to the sensor before, during, and after installation; or
B) The sensor will be designed in such a way as to minimize the possibility of unauthorized modification of the firmware.

“A” seems like kind of a cop-out, and if it feels like you're sweeping something under the rug, it’s because you are. “B” will be a more difficult design, to be sure. But it is the only option that matches your security environment.

Another threat we listed was “network attack.” We need to break this down into more specific objectives, such as “the sensor will provide separation between spurious network traffic and primary operation.” This innocent-looking little sentence says that no matter what someone does on the network, the sensor will keep opening and closing those vents. At first take this may seem like a standard hard real-time objective, but in fact what it means is that when an attacker is sending you malformed network packets, such as a TCP packet asking your device to respond to itself (a “Land” attack), you won't let that affect operations. Your objective is to prevent an error in your environment (accidental or malicious) from becoming an error in your device's operations. Since we assumed that the closed network in our environment might have hostile devices on it, such as a PC infected with a virus, we have to be able to handle this kind of network traffic, even if it does not obey the specifications for any protocols we are using.
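To make that objective concrete at the implementation level, the receive path can reject obviously hostile packets before they touch any protocol state. The fragment below is a hedged sketch assuming a simplified packet structure; the real field names and hook point depend on whichever TCP/IP stack the sensor actually uses.

    /* A "Land" packet sets the source address and port equal to the
     * destination, asking the device to talk to itself. Dropping it in the
     * receive path keeps the attack from ever reaching the protocol state
     * machine, let alone the valve-control logic. */
    #include <stdbool.h>
    #include <stdint.h>

    struct tcp_pkt {
        uint32_t src_ip, dst_ip;      /* simplified header view for illustration */
        uint16_t src_port, dst_port;
    };

    static bool is_land_attack(const struct tcp_pkt *p)
    {
        return p->src_ip == p->dst_ip && p->src_port == p->dst_port;
    }

    static bool accept_packet(const struct tcp_pkt *p)
    {
        if (is_land_attack(p))
            return false;             /* drop silently; never let it affect operations */
        return true;                  /* further sanity checks (lengths, flags) would follow */
    }

Checks like this are cheap, and each one maps directly back to the separation objective rather than being security sprinkled on for its own sake.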

Functional requirements of your secure application. Now we move into requirements mode. Just as objectives are derived from the environment, requirements are derived from the objectives. For each of the objectives from the previous step, list an associated requirement, but keep in mind the “environment” component as well.

For example, one of our objectives for the sensor was firmware protection. Our environment included a network and hostile insiders. Our objective was to keep network-based hosts or insiders from altering our firmware. If we had assumed a closed network where internal security procedures kept even inside attacks away from our device, a simple CRC might have been enough to ensure the device was running the intended code. Since we assumed a higher threat level, however, we need to look at trusted-boot features available in processors from vendors such as Freescale and Intel. These features help verify that the device boots up running the correct code. If other requirements, such as power consumption or physical constraints, preclude the use of these kinds of processors, we may need to revisit our assumptions. This is why the arrow between the security framework and the requirements in Figure 6 goes both ways. If you cannot change your assumptions about the environment, however, the security requirement must take precedence over any other.
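If a trusted-boot-capable processor is off the table, the same objective can still be partially met in software. The sketch below is an assumption-laden illustration, not a substitute for hardware-rooted trust: it presumes a sha256() routine is available and that the reference digest is provisioned somewhere an attacker cannot trivially rewrite.

    /* Boot-time image verification sketch. A plain CRC catches accidental
     * corruption but not deliberate modification; a cryptographic digest,
     * ideally one whose reference value is signed or stored in protected
     * memory, raises the bar considerably. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define DIGEST_LEN 32

    extern const uint8_t  firmware_image[];
    extern const uint32_t firmware_image_len;
    extern const uint8_t  expected_digest[DIGEST_LEN];  /* provisioned at manufacture (assumed) */

    void sha256(const uint8_t *data, uint32_t len, uint8_t out[DIGEST_LEN]);  /* assumed available */

    bool firmware_is_trusted(void)
    {
        uint8_t digest[DIGEST_LEN];

        sha256(firmware_image, firmware_image_len, digest);
        return memcmp(digest, expected_digest, DIGEST_LEN) == 0;  /* refuse to run on mismatch */
    }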

Besides the “nobility” of security requirements, another important difference between functional security requirements and traditional requirements is that not all good top-level security requirements are fully testable. What? Non-testable requirements? Many functional security requirements are inverse requirements (“the device shall not allow X”), and inverse requirements are fundamentally non-testable.

Consider our objective of being resilient to spurious network traffic. This could lead to the requirement that “the device must not be susceptible to the following attacks: (insert long list of known network attacks like Land, sequence number vulnerabilities, etc.).” Seems simple enough, and there are many tools we can test our implementation against (e.g. Nessus from www.nessus.org, which runs a battery of individual tests). But what about all the combinations of these attacks in different orders? A DoS attack may temporarily use up resources, for example, and leave the device susceptible to an attack it could previously withstand. If there are 27 known attacks, we have 27! orderings, or roughly 10^28 test cases. Clearly, we cannot test them all.

One way to deal with this is to decompose the security requirements into as many testable elements as we can. For example, we can further decompose our TCP/IP attack requirement to incorporate a specific order to the Nessus tests. We still can't test against all combinations of attacks, and we can’t test for weaknesses that we don't know about yet. To do so, we would have to generate all possible network traffic in all possible states of our device at all times, which, in an infinite universe, is an infinite set of test cases. We can, however, test for a “reasonable” subset of all possible attacks, where reasonable means achievable and appropriate to our environment.
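One way to pin down that “reasonable subset” is to make the orderings themselves part of the requirement: every attack run individually, plus a fixed, reproducible set of randomized orderings. The driver below is a hypothetical sketch; run_attack() and device_still_operational() stand in for hooks into your own test rig and are not a real tool's API.

    /* Run each known attack alone, then NUM_ORDERINGS pseudo-random
     * permutations of the full list. A fixed seed keeps the run
     * reproducible, which matters when a failure has to be reproduced. */
    #include <stdio.h>
    #include <stdlib.h>

    #define NUM_ATTACKS   27
    #define NUM_ORDERINGS 100     /* far short of 27!, but achievable and repeatable */

    extern int run_attack(int attack_id);         /* assumed test-rig hook */
    extern int device_still_operational(void);    /* assumed health check  */

    static void shuffle(int *ids, int n)
    {
        for (int i = n - 1; i > 0; i--) {
            int j = rand() % (i + 1);
            int tmp = ids[i]; ids[i] = ids[j]; ids[j] = tmp;
        }
    }

    int main(void)
    {
        int ids[NUM_ATTACKS];

        for (int i = 0; i < NUM_ATTACKS; i++) {   /* each attack in isolation first */
            ids[i] = i;
            run_attack(i);
            if (!device_still_operational()) {
                printf("FAIL: attack %d alone\n", i);
                return 1;
            }
        }

        srand(12345);                             /* fixed seed: reproducible orderings */
        for (int round = 0; round < NUM_ORDERINGS; round++) {
            shuffle(ids, NUM_ATTACKS);
            for (int i = 0; i < NUM_ATTACKS; i++) {
                run_attack(ids[i]);
                if (!device_still_operational()) {
                    printf("FAIL: round %d after attack %d\n", round, ids[i]);
                    return 1;
                }
            }
        }
        printf("PASS: %d attacks alone plus %d randomized orderings\n",
               NUM_ATTACKS, NUM_ORDERINGS);
        return 0;
    }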

Some closing comments
Writing security requirements isn't easy, but following the three steps in the framework can help. If you do this right, you will find that it impacts every aspect of your process, including business processes (see Figure 4, below).

Remember to keep in mind some of the gotchas and subtleties I mentioned along the way, as this will save you iterations through your process. Consider your environment, but bear in mind that attackers will try to alter aspects of that environment, so spend some considerable time finding all the variables that could affect your device. When you write your objectives, be sure to match every environmental threat with a countermeasure, and remember that countermeasures are not always firmware-based; some of them will be policy- or procedure-based. If your objective includes protected access and your requirement is for a password, be sure to design both policies that deal with users not protecting their passwords and functional requirements that help enforce those policies (by the way, the BBC recently reported a survey in which over 70% of people would reveal their computer password in exchange for a bar of chocolate). Lastly, try not to spend too much time on the philosophy of non-testable requirements. Simply do your best to come up with test cases that are suitable for your environment and that will get you reasonable coverage.

I hope I’ve got you thinking about security for your device, and I would love to hear how it works out for you. Please send me your comments and your experiences.

Eric Uner is currently researching next-generation mobile and embedded security architectures for Motorola Labs in Schaumburg, Ill., focusing on increasing the trust level of such devices. He is also the author of another embedded security article on this site: “Calculating the Exploitability of Your Embedded Software.”
