A Framework for Considering Security in Embedded Systems
Putting the Framework to a test
The best way to walk through what I suggest is by example. Let's imagine a fictitious robotic temperature sensor that must open a cooling valve on one of three vents. A basic diagram of the device appears in Figure 3 below.

For the purposes of this example, we'll use the general model of the development life cycle: requirements, design, implementation, then testing. The points in the life cycle where the framework needs to be applied are sometimes very subtle, so I'll highlight some of those moments as we go.
Consider the Environment. Perfect security is nearly impossible, but an appropriate level of trust and security assurance for a given environment is completely achievable. When you work on a yellow, bumblebee-shaped calculator meant to be sold in the impulse-buy rack next to the candy bars at the grocery store, you instinctively don't have the same level of concern about security as you do when thinking about our robot sensor. You have that instinctive gut feel because you are rightfully thinking about the environments the two would be used in. So it all comes down to assumptions about the environment.
From the diagram in Figure 3, we can see that our device is connected to a TCP/IP network in order to send messages to a logging terminal. I think we can assume the sensor will be used in some kind of industrial environment (as opposed to a kitchen appliance or a car). That means it's going to be tough to determine the cost associated with a security incident, because we don't know anything about where and exactly why the sensor is deployed. We can assume, however, that several people, some of whom may be hackers, will have physical access to the device and the network, and that they will be motivated to hack it. If nothing else, our sensor could be used as a "zombie" to send spurious network traffic and disrupt other devices. We need to consider what impact the environment has on us, as well as what impact we have on our environment.
During the requirements phase, take the time to list all of these assumptions. At each subsequent phase, revisit the list and make sure you haven't made any new assumptions. For example, we assumed hostile hackers would have access to the network. If a developer working on the TCP/IP stack fails to think about denial-of-service (DoS) attacks, malformed packets, and the myriad TCP/IP attacks documented all over the Internet, then that developer has made a bad assumption: that the IT department would take care of all this. That contradicts our original assumptions.
Speaking of developers, they need to remember these assumptions, and even include them as comments in the source code at the module level. There is little arguing the virtues of software reuse, but reuse is also a major introduction point for security vulnerabilities. The developer of our sensor may grab a TCP stack or some SNMP code that was perfect for its original product, but that has subtleties like requiring a memory manager to zeroize buffers at allocation time. Not doing so could cause sensitive data to leak out over the network, as in the earlier user login and password example. Software reuse in a secure environment is a tricky, delicate thing that merits more consideration than I can give it here. Please keep this in mind.
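To make that reuse pitfall concrete, here is a minimal sketch in C of the kind of allocation wrapper a developer might put around reused protocol code. The names secure_alloc and secure_free are hypothetical, not part of any particular stack:

    #include <stdlib.h>

    /* Allocate a buffer that is zeroed up front, so reused protocol
       code never transmits whatever a previous owner left in memory. */
    void *secure_alloc(size_t n)
    {
        return calloc(1, n);
    }

    /* Scrub a buffer before freeing it, so secrets such as passwords
       don't linger on the heap for the next caller to find. */
    void secure_free(void *p, size_t n)
    {
        if (p != NULL) {
            volatile unsigned char *v = (volatile unsigned char *)p;
            while (n--)
                *v++ = 0;   /* volatile writes keep the compiler from
                               deleting this scrub as a "dead" store */
            free(p);
        }
    }

The calloc() call gives us zeroed memory at allocation time, and the volatile writes in secure_free() prevent an optimizing compiler from eliding the cleanup.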
Fill in the details
When we are done with our list from this step in the framework, we should have all of the following factors filled out with as much detail as possible:
1) Intended usage - e.g. as part of an industrial assembly line, connected to a closed network
2) Possible consequences of attack - e.g. line stops, explosive decompression and injury
3) Access policies - e.g. no public access physically, password on device to be managed by IT
4) Possible attackers - e.g. disgruntled employees, industrial hackers
5) Threat vectors - e.g. hacking the firmware through debug port, network attack
6) Assets that require protection - e.g. secret keys, operational data
7) Motivations for attack - e.g. industrial espionage or sabotage
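One way to keep these factors from evaporating after the requirements phase is to carry them into the code itself, as suggested earlier. A hypothetical module-level header comment for our sensor, filled in from the list above, might look like this:

    /*
     * SECURITY ASSUMPTIONS -- sensor network module
     * Intended usage:   industrial assembly line, closed TCP/IP network
     * Consequences:     line stops; explosive decompression and injury
     * Access policy:    no public physical access; IT-managed password
     * Attackers:        disgruntled employees, industrial hackers
     * Threat vectors:   debug-port firmware hacks, network attacks
     * Assets:           secret keys, operational data
     * Motivation:       industrial espionage or sabotage
     */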
What are your security objectives? The objectives are derived from the list we made when considering our environment in Figure 3. Our list of objectives may be more specific, and longer, than the list from the previous step, but each objective should trace back to something about our environment. Otherwise, why is it an objective? Go through all of the items from the "Environment" step and list the associated security objective. At this point, don't worry about exactly how you are going to accomplish any of this; stay focused on what it is you need to accomplish. By way of example, we listed "hacking the firmware through the debug port" as a threat. So now it's decision time for our associated objective. We can decide either:
A) The organization operating the sensor is responsible for ensuring that no unauthorized persons have physical access to the sensor before, during, and after installation; or
B) The sensor will be designed in such a way as to minimize the possibility of unauthorized modification of the firmware.
"A" seems like kind of a cop-out, and if it feels like you're sweeping something under the rug, it’s because you are. "B" will be a more difficult design, to be sure. But it is the only option that matches your security environment.
Another threat we listed was "network attack". We need to break this down into more specific objectives, such as "the sensor will provide separation between spurious network traffic and primary operation." This innocent-looking little sentence says that no matter what someone does on the network, the sensor will keep opening and closing those vents. At first blush this may seem like a standard hard real-time objective, but what it really means is that when an attacker sends you malformed network packets, such as a TCP packet asking your device to respond to itself (the "Land" attack), you won't let that affect operations. Your objective is to prevent an error in your environment (accidental or malicious) from becoming an error in your device's operations. Since we assumed that the closed network in our environment might have hostile devices on it, such as a PC infected with a virus, we have to be able to handle this kind of network traffic, even if it does not obey the specifications for any protocols we are using.
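As a sketch of what that separation can mean at the packet level, the check below rejects a Land-style packet before it reaches any protocol state machine. The header structs here are simplified stand-ins for illustration; a real stack has its own definitions:

    #include <stdint.h>
    #include <stdbool.h>

    /* Simplified stand-ins for the only fields this check needs. */
    struct ipv4_hdr { uint32_t src_addr, dst_addr; /* ... */ };
    struct tcp_hdr  { uint16_t src_port, dst_port; /* ... */ };

    /* A "Land" packet claims to come from the very address and port it
       is sent to, tricking a naive stack into replying to itself.
       Drop it on receipt, before any protocol state is touched, so the
       malformed packet never becomes an operational error. */
    bool is_land_packet(const struct ipv4_hdr *ip, const struct tcp_hdr *tcp)
    {
        return ip->src_addr == ip->dst_addr &&
               tcp->src_port == tcp->dst_port;
    }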
Functional requirements of your secure application. Now we move into requirements mode. Just as objectives are derived from the environment, requirements are derived from the objectives. For each of the objectives from the previous step, list an associated requirement, but keep in mind the "environment" component as well.
For example, one of our objectives for the sensor was firmware protection. Our environment included a network and hostile insiders, so our objective was to keep network-based hosts or insiders from altering our firmware. If we had assumed a closed network where internal security procedures kept even inside attacks away from our device, a simple CRC might have been enough to ensure the device was running the intended code. Since we assumed a higher threat level, however, we need to look at the trusted-boot features available in processors from vendors such as Freescale and Intel. These features help verify that the device boots up running the correct code. If other requirements, such as power consumption or physical constraints, preclude the use of these kinds of processors, we may need to revisit our assumptions. This is why the arrow between the security framework and the requirements in Figure 6 goes both ways. If you cannot change your assumptions about the environment, however, the security requirement must take precedence over any other.
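To make the distinction concrete, here is a minimal boot-time integrity check built on CRC-32. It matches only the lower-threat assumption: it catches accidental corruption, but an attacker who can rewrite the firmware can just as easily rewrite the stored CRC, which is exactly the gap that hardware trusted-boot features (signed images verified by a root of trust) close. The function names are illustrative:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>

    /* Bitwise CRC-32 (IEEE polynomial, reflected form). Detects
       accidental corruption only; it is not a defense against a
       deliberate attacker. */
    static uint32_t crc32(const uint8_t *data, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        while (len--) {
            crc ^= *data++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
        }
        return ~crc;
    }

    /* Boot-time check: compare the image's CRC against a stored value. */
    bool firmware_intact(const uint8_t *image, size_t len, uint32_t expected)
    {
        return crc32(image, len) == expected;
    }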
Besides the "nobility" of security requirements, another important difference between functional security requirements and traditional requirements is that, fundamentally, not all good top-level security requirements are fully testable. What? Non-testable requirements? Many functional security requirements are inverse requirements, and inverse requirements fundamentally are non-testable.
Consider our objective of being resilient to spurious network traffic. This could lead to the requirement that "the device must not be susceptible to the following attacks: (insert long list of known network attacks like Land, sequence-number vulnerabilities, etc.)." That seems simple enough, and there are many tools we can test our implementation against (e.g. Nessus from www.nessus.org, which runs a battery of individual tests). But what about all the combinations of these attacks in different orders? A DoS attack may temporarily use up resources, for example, and leave the device susceptible to an attack it could previously withstand. If there are 27 known attacks, just the distinct orderings give us 27! test cases, or about 10,000,000,000,000,000,000,000,000,000 (10^28) tests. Clearly, we cannot test them all.
One way to deal with this is to decompose the security requirements into as many testable elements as we can. For example, we can further decompose our TCP/IP attack requirement to specify particular orderings of the Nessus tests. We still can't test all combinations of attacks, and we can't test for weaknesses we don't know about yet. To do that, we would have to generate all possible network traffic in all possible states of our device at all times, which, in an infinite universe, is an infinite set of test cases. We can, however, test a "reasonable" subset of all possible attacks, where reasonable means achievable and appropriate to our environment.
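As one illustrative sketch of such a decomposition, the harness below runs a known battery of attack tests in a handful of seeded, reproducible orderings rather than attempting all 27! of them. The run_attack_test() stub is a placeholder for whatever actually drives each Nessus-style test against the device:

    #include <stdio.h>
    #include <stdlib.h>

    #define NUM_ATTACKS 27

    /* Placeholder: a real harness would launch attack test 'id'
       against the device and return nonzero on failure. */
    static int run_attack_test(int id)
    {
        (void)id;
        return 0;
    }

    /* Fisher-Yates shuffle with a fixed seed, so any ordering that
       exposes a failure can be replayed exactly. */
    static void shuffle(int *ids, int n, unsigned seed)
    {
        srand(seed);
        for (int i = n - 1; i > 0; i--) {
            int j = rand() % (i + 1);
            int tmp = ids[i]; ids[i] = ids[j]; ids[j] = tmp;
        }
    }

    int main(void)
    {
        int ids[NUM_ATTACKS];
        for (int i = 0; i < NUM_ATTACKS; i++)
            ids[i] = i;

        /* A "reasonable" subset: five orderings, not 27! of them. */
        for (unsigned seed = 1; seed <= 5; seed++) {
            shuffle(ids, NUM_ATTACKS, seed);
            for (int i = 0; i < NUM_ATTACKS; i++)
                if (run_attack_test(ids[i]) != 0)
                    printf("FAIL: attack %d, seed %u, position %d\n",
                           ids[i], seed, i);
        }
        return 0;
    }

The fixed seeds are the point: when a particular sequence of attacks exhausts a resource and triggers a failure, you can rerun exactly that sequence while debugging.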
Some closing comments
Writing security requirements isn't easy, but following the three steps in the framework can help. If you do this right, you will find that it impacts every aspect of your process, including business processes (see Figure 4 below).

Remember to keep in mind some of the gotchas and subtleties I mentioned along the way, as this will save you iterations through your process. Consider your environment, but bear in mind that attackers will try to alter aspects of it, so spend considerable time finding all the variables that could affect your device. When you write your objectives, be sure to match every environmental threat with a countermeasure, and remember that countermeasures are not always firmware-based; some will be policy- or procedure-based. If your objective includes protected access and your requirement is for a password, be sure you design both policies to deal with users not protecting their passwords and functional requirements that help enforce those policies (by the way, the BBC recently reported a survey in which over 70% of people would reveal their computer password in exchange for a bar of chocolate). Lastly, try not to spend too much time on the philosophy of non-testable requirements. Simply do your best to come up with test cases that are suitable for your environment and that will get you reasonable coverage.
I hope I’ve got you thinking about security for your device, and I would love to hear how it works out for you. Please send me your comments and your experiences at eric@uner.com.
Eric Uner is currently researching next-generation mobile and embedded security architectures for Motorola Labs in Schaumburg, Ill., focusing on increasing the trust level of such devices. He is also the author of another embedded security article on this site: "Calculating the Exploitability of Your Embedded Software".

