A Framework for Considering Security in Embedded Systems

Eric Uner

September 13, 2005

The need for security in many embedded systems is not always readily apparent, and too many embedded systems designers are paying too little attention to the subject, despite the increased wired and wireless connectivity of such designs and the potential hazards that connectivity represents.

In this article, I will suggest some reasons you should pay more attention to this issue and provide you with a simple framework for getting started in addressing security in your designs, even when you don't think you need to.

I'm sure you already know that embedded systems form the dominant basis of computing today. I like to think of them as insects. Most people feel that the human animal dominates the Earth, when in fact it is the ubiquitous insect. There are nearly 200 million of them for every one of us, and our very survival as a species depends on them. Aside from the ones that are pests, however, insects go about much of their lives unnoticed. In the same way, though the average person remains largely unaware of the existence of embedded systems, we all rely completely on these usually small, deceptively simple, and increasingly wired and wirelessly connected insectoid devices to run our most critical services.

All this increased connectivity is a major source of security weaknesses. Embedded systems used to be blind ants, but these days the ants are required to communicate with the beetles, and do it all inside a beehive. With my earlier insect analogy successfully over-extended, I am referring to the fact that today's systems are more interconnected. Whereas a sensor with a physical readout on it was sufficient ten years ago, today's sensors may be part of an ad hoc 802.11 wireless network (Figure 1 below), or they may report their readings back to a central management console over TCP/IP. All these new lines of external communication represent new attack vectors for hackers. The more ways in, the more exploitable your device may be.

So what am I saying? That in addition to knowing about rate monotonic analysis and priority inversion, now we need to be IT experts as well? Actually, that wouldn't help in most cases. Security issues often stem from system-level design issues, and have nothing at all to do with whether or not someone configured a firewall appropriately, or whether or not the device was designed to withstand attacks from the Internet.

It doesn't even always have to do with software. Where I live, atop nearly every intersection with a traffic light there is a little sensor to detect transmissions from "Mobile Infrared Transmitters." MIRTs are used by emergency vehicles to preempt the normal traffic-actuated lights and sensors in an attempt to remove any possible traffic backups. "Hobbyists" have created their own MIRTs, which is not only illegal (as far as I can tell) but obviously could create major traffic messes in congested areas. The bottom line is that the sensors have no way to tell a "valid" MIRT in an ambulance from a bogus transmitter in some random person's car.
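As a rough illustration of what a countermeasure could look like, here is a minimal sketch in C of a sensor-side check that only accepts preemption requests carrying a valid message authentication code. It assumes the transmitter and the intersection controller share a secret key and that an HMAC-SHA-256 routine is supplied by the platform (the hmac_sha256 prototype below is a placeholder, not a real library call); the message layout and field names are invented for illustration.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical preemption request: vehicle ID, timestamp, and a tag
 * computed by the transmitter over the first two fields. */
struct preempt_request {
    uint32_t vehicle_id;
    uint32_t timestamp;   /* seconds since epoch, for freshness */
    uint8_t  mac[32];     /* HMAC-SHA-256 tag */
};

/* Assumed to be provided by a crypto library; not implemented here. */
extern void hmac_sha256(const uint8_t *key, size_t key_len,
                        const uint8_t *msg, size_t msg_len,
                        uint8_t out[32]);

/* Constant-time comparison to avoid leaking tag bytes via timing. */
static int tags_equal(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= (uint8_t)(a[i] ^ b[i]);
    return diff == 0;
}

/* Accept the request only if the timestamp is fresh and the tag verifies. */
int preempt_request_valid(const struct preempt_request *req,
                          const uint8_t *key, size_t key_len, uint32_t now)
{
    uint8_t expected[32];

    if ((uint32_t)(now - req->timestamp) > 30u)  /* reject stale or replayed requests */
        return 0;

    hmac_sha256(key, key_len,
                (const uint8_t *)req, offsetof(struct preempt_request, mac),
                expected);
    return tags_equal(expected, req->mac, sizeof expected);
}

A real deployment would also need key distribution and revocation across the emergency fleet, which is exactly the kind of policy question that no amount of on-device code can answer by itself.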

Of course, there are plenty of cases of "consumer level" hacking against individual devices as well. A major camera vendor once made two versions of their product with similar hardware, but the less expensive version lacked some of the features of the more expensive model. Hackers realized they could "update" the firmware on the cheaper model, effectively getting capabilities they did not pay for.
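The usual countermeasure is a bootloader-side check that refuses any image that is not signed and not built for this exact model. The sketch below, again in C, shows the general shape; the header layout, constants, and the verify_signature routine are assumptions for illustration, standing in for whatever image format and signature scheme a given product actually uses.

#include <stdint.h>
#include <stddef.h>

#define FW_MAGIC       0x46574D31u  /* illustrative image marker */
#define THIS_MODEL_ID  0x0101u      /* hardware model this unit ships as */

/* Illustrative firmware image header. */
struct fw_header {
    uint32_t magic;
    uint16_t model_id;      /* model the image is built for */
    uint16_t version;
    uint32_t payload_len;
    uint8_t  signature[64]; /* signature over the payload (a real design
                               would cover the header fields as well) */
};

/* Assumed to be supplied by the platform's crypto support; not shown. */
extern int verify_signature(const uint8_t *payload, size_t len,
                            const uint8_t sig[64]);

/* Accept an update only if it is well-formed, signed, and built for this
 * exact model, so the cheaper model cannot load the premium image. */
int firmware_acceptable(const struct fw_header *hdr,
                        const uint8_t *payload, size_t payload_len)
{
    if (hdr->magic != FW_MAGIC)
        return 0;
    if (hdr->model_id != THIS_MODEL_ID)
        return 0;
    if (hdr->payload_len != payload_len)
        return 0;
    return verify_signature(payload, payload_len, hdr->signature);
}

Of course, the model check only helps if the release process never signs the premium image for the cheaper model's ID, which is as much a policy decision as a firmware one.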

There are some indications that the problem is getting worse. A report from the United States General Accounting Office (GAO) suggests an "escalation of the risks of cyber attacks against control systems." This is supported by other reports from the British Columbia Institute of Technology, as well as the general increase in information security vulnerabilities reported by CERT and other IT-centric entities.

This actually makes complete sense, and understanding why will help you avoid some security-related pitfalls. It all breaks down into the fact that bigger, more complex systems have more security weaknesses, and that designers and customers of embedded systems have been making some bad assumptions (see Embedded Soapbox).

Important facts about embedded security
Before I get into the framework that I suggest using to address all these issues, I need to give you two important facts that are absolutely critical from here on out:

1) Security is not all about encryption. It's also about policy, procedure, and implementation. Case in point: encryption based on a secret key is only as good as the policy that controls access to the key.
2) Secure code alone does not a secure system make. You must consider security at each phase of the process, from requirements to design to testing, and even support.

If I can convince you to do nothing else, print out these two items and post them on the bathroom doors (everyone eventually goes there). They are my security mantras. They are also fundamental for working within my suggested framework.

A Framework for Evaluating Security
What to do about all the issues that I have brought up so far would take me much more than one article to cover, but here is a basic framework within which to start considering the security of your device:

1) Environment: Determine the assumptions, threats, and required policies for the environment you are designing the device to operate in.
2) Objectives: Determine your device's security objectives. Consider the data (assets) or operation it will protect and which threats from step 1 require countermeasures.
3) Requirements: Determine your functional security requirements.

Each component is used in determining the elements of the next, as shown in Figure 2 below. Working in this hierarchy will prevent unnecessary security requirements, which occur more often than you would think. For example, a device may have a requirement to encrypt all event messages. Many designers just toss encryption in as a substitute for actual security. If you define your environment to be a closed network, or perhaps if your device is running in a car with eight-byte messages on a CAN bus, you may be placing unnecessary demands on your device. Unless you can trace a requirement back to something about your environment, you are just adding processing or data transmission overhead.
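One lightweight way to make that traceability visible is to keep the environment-to-threat-to-requirement mapping in a form you can actually check, even something as simple as a table compiled into a design-review tool. The sketch below is a hypothetical example in C; the threat and requirement names are made up, and the deliberately untraced "encrypt all event messages" entry shows the kind of requirement this exercise flags.

#include <stdio.h>

/* Threats identified for the operating environment (step 1). */
enum threat_id { T_NONE = 0, T_SPOOFED_SENSOR, T_EAVESDROP, T_FIRMWARE_TAMPER };

/* Each functional security requirement (step 3) must trace back to a threat. */
struct sec_requirement {
    const char    *name;
    enum threat_id traces_to;
};

static const struct sec_requirement reqs[] = {
    { "Authenticate event messages",      T_SPOOFED_SENSOR },
    { "Verify firmware image signatures", T_FIRMWARE_TAMPER },
    { "Encrypt all event messages",       T_NONE },  /* flagged: no traced threat */
};

int main(void)
{
    int untraced = 0;
    for (size_t i = 0; i < sizeof reqs / sizeof reqs[0]; i++) {
        if (reqs[i].traces_to == T_NONE) {
            printf("WARNING: '%s' does not trace to any threat\n", reqs[i].name);
            untraced++;
        }
    }
    return untraced ? 1 : 0;
}

Run during a design review, this prints a warning for any requirement that cannot be traced back to a threat in your environment, which is your cue to either document the threat or drop the requirement.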

This framework is essentially a super-condensed form of the Common Criteria. The CC is an international effort to standardize a way to evaluate the security posture of any product, though it is most often applied to IT systems like firewalls or desktop computer software. It is not an evaluation per se, but rather a method for evaluating. I won't go into the details or the specific criteria language, but you can find out more at the CC Website.

Suffice it to say that I find keeping even a small subset of the CC in mind, even if you do not intend to submit for an evaluation under the criteria, can result in a more secure, more stable, and safer product. For those of you already working under security requirements, you can apply this framework around more specific requirements such as FIPS, or to environments such as SCADA or the use of a TPM.
