Bad assumptions lead to bad security

Bad assumptions on the part of embedded system developers are more to blame for the security problems in their systems than software weaknesses are. I have noticed a pattern of common assumptions, and I am convinced that over half of all our security problems would disappear overnight if we could all change our thinking a bit in just three ways:

Assumption 1) Developers think that embedded systems are inherently more secure

Nothing could be further from the truth, but I can see why developers feel this way. No one has their source code, the firmware may not even use a commercial RTOS, and even developers on the same team may not know that one picked Flash memory address 0xsomething to store the secret key, while the other decided to put the software version number at 0xsomething-else. In the security space, we call this “security by obscurity,” and it never, never works. The truth is that hackers have access to the same debugging tools that you do. They can read symbol tables, step through the assembly, and so on. Consider also that many attacks don't even need code-level knowledge.

An attack on a Digital Rights Management (DRM) system that stores the number of times a song was played in flash memory, for example, may be to simply copy the whole flash bank, then keep overwriting the entire flash image from the stored copy before playing the song again. This rolls the system back to a previous state, effectively getting around any play counters or sequence numbers.
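
Listing 1 below is a minimal sketch of that rollback attack, assuming the attacker can already dump and reprogram the flash bank (over JTAG, for instance). The flash_read(), flash_write(), and play_song() calls and the bank size are illustrative placeholders of my own, not any particular vendor's API.

#include <stdint.h>

#define FLASH_BANK_SIZE (64u * 1024u)   /* illustrative bank size */

/* Hypothetical debugger/programmer primitives; the names are placeholders. */
extern void flash_read(uint32_t addr, uint8_t *buf, uint32_t len);
extern void flash_write(uint32_t addr, const uint8_t *buf, uint32_t len);
extern void play_song(void);

int main(void)
{
    static uint8_t saved[FLASH_BANK_SIZE];

    /* 1. Snapshot the whole bank once, play counters and all. */
    flash_read(0x00000000u, saved, FLASH_BANK_SIZE);

    for (;;) {
        /* 2. Roll the device back to the saved state... */
        flash_write(0x00000000u, saved, FLASH_BANK_SIZE);
        /* 3. ...then play again; the counter never appears to advance. */
        play_song();
    }
}

Listing 1: A sketch of the flash rollback attack (function names and bank size are hypothetical)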

For another example, let's look at the memory dump in Figure 1 below, from an embedded system that manages several user logins. Perhaps I got this memory by attaching a debugger, or perhaps it showed up as padding in a network packet.


0x75 0x6e 0x65 0x02   u n e .
0x73 0x6e 0x61 0x69   s n a i
0x6c 0x69 0x6c 0x03   l i l .
0x7a 0x65 0x70 0x70   z e p p
Figure 1: Sample Memory Contents


Now, you have no idea what RTOS or even what processor this thing is running. But you know my last name, “Uner.” Closer inspection shows that the first 32-bit word contains part of my name as ASCII text. It could be a login. What follows is ASCII as well. If I were a hacker, I'd try “snail” for the password. Looking at the pattern, I might also try a user “lil”+something with password “zepp”+something, since those entries look to have a different user level or some other distinction in the least significant byte. No source code inspection here, no known vulnerability – just simple hacker technique.
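
If you want to see how little effort that technique takes, Listing 2 is a minimal sketch of it (essentially what the Unix strings(1) utility does), run over the bytes from Figure 1. Only the dump array comes from the figure; everything else is scaffolding I added for illustration.

#include <ctype.h>
#include <stdio.h>

/* The sixteen bytes from Figure 1. */
static const unsigned char dump[] = {
    0x75, 0x6e, 0x65, 0x02, 0x73, 0x6e, 0x61, 0x69,
    0x6c, 0x69, 0x6c, 0x03, 0x7a, 0x65, 0x70, 0x70,
};

int main(void)
{
    size_t run = 0;

    for (size_t i = 0; i < sizeof dump; i++) {
        if (isprint(dump[i])) {
            /* Printable byte: extend the current candidate string. */
            putchar(dump[i]);
            run++;
        } else if (run > 0) {
            /* Non-printable byte (here, perhaps a user-level field)
               ends the candidate string. */
            printf(" <0x%02x>\n", dump[i]);
            run = 0;
        }
    }
    if (run > 0)
        putchar('\n');
    return 0;
}

Listing 2: Pulling candidate strings out of the Figure 1 dump, in the spirit of strings(1)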

Assumption 2) Users assume that embedded systems are more secure

If you bought a new JTAG wiggler or some other debugging device that had an Ethernet port on it, would you go and tell your IT department about it? Probably not. But you may not realize that this little bugger is running embedded Linux and is now a new target for hackers on your network. Too often the users of our products, including ourselves, assume that something without a monitor and a keyboard is somehow secure. Most developers would blame the users for any incidents, such as when search engines revealed the images coming from embedded Linux cameras because the cameras responded to fixed URLs.

This is the user's fault for not taking the proper precautions, right? Not if the developer did not explain to the user any assumptions about the environment the device was supposed to operate in (e.g., a closed network). And remember that “explaining” in this case does not mean putting it in the manual that the user will leave in the box with the packing material the device came in. It means requiring acknowledgment of this during setup, or putting features inside the device that verify it is operating within an expected environment. A networked controller may, for example, require the user to establish a set of non-routable IP addresses or MAC addresses that are the only ones the device will respond to. This data can be spoofed, but the extra configuration step helps convey to the user that the controller is not meant to be put on the Internet. We're going to revisit this topic later.
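
Listing 3 sketches what that configuration step might look like. It is just one possible shape: the allowlist, the helper names, and the use of the RFC 1918 private ranges as the definition of “non-routable” are my assumptions, not a prescription.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_PEERS 8   /* illustrative limit */

static uint32_t allowed_peers[MAX_PEERS];   /* IPv4 addresses, host byte order */
static size_t   num_peers;

/* True for the RFC 1918 private ranges: 10/8, 172.16/12, 192.168/16. */
static bool is_private(uint32_t ip)
{
    return (ip >> 24) == 10u   ||
           (ip >> 20) == 0xac1u  ||   /* 172.16.0.0/12  */
           (ip >> 16) == 0xc0a8u;     /* 192.168.0.0/16 */
}

/* Called during setup: refuse publicly routable peers, which nudges the
   installer toward keeping the controller on a closed network. */
bool add_peer(uint32_t ip)
{
    if (num_peers >= MAX_PEERS || !is_private(ip))
        return false;
    allowed_peers[num_peers++] = ip;
    return true;
}

/* Called for each incoming packet: ignore anything from an unconfigured
   source. (As noted above, addresses can be spoofed; this conveys intent
   rather than providing a hard guarantee.) */
bool peer_allowed(uint32_t src_ip)
{
    for (size_t i = 0; i < num_peers; i++)
        if (allowed_peers[i] == src_ip)
            return true;
    return false;
}

Listing 3: One way to make “closed network only” an explicit setup step (names and ranges are illustrative)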

Assumption 3) Security posture requirements are all about protecting assets

Security people live by a mantra that a system is secure if the resources required to hack it are worth more than whatever it is trying to protect. In the security space, we call what is worth protecting the “assets.” Assets have temporal characteristics as well as simple value. You could say that a device is secure if the data it is protecting is useless by the time the device has been cracked – like getting a note that I'm on my way to the airport after my plane has touched down at my destination. Makes sense. You wouldn't buy a $100,000 titanium safe to keep your spare change in (but if you would, call me about an investment opportunity).
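
Stated a little more formally (the notation is mine, just as a sketch): if V(t) is the value of the asset at time t and C is what it costs an attacker to break the protection by time t_break, the mantra says the device is adequately secured when C > V(t_break) for every t_break an attacker can realistically achieve. The spare-change safe fails the test because V is tiny; the airport note passes because V(t_break) has already decayed to zero by the time the break happens.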

The trouble is that this breaks down as often as it works. In many cases, you have no idea what the value of the data or operation you need to protect is. A robot arm developer can't always know ahead of time how expensive an attack that causes the device to malfunction will be, because the arm may be deployed in a factory line where downtime costs $10,000 an hour or $10,000 a minute. And who could put a price on a human life when injury or death occurs because of a hacker? I won't dwell on this one, because the assumption about assets does make sense in calculating the resources needed to break cryptographic algorithms when you know what data you're protecting, and indeed in any case where you know the value of your assets a priori.

I hope I’ve got you thinking about security for your device, and I would love to hear how it works out for you. Please send me your comments and your experiences at .

Eric Uner is currently researching next-generation mobile and embedded security architectures for Motorola Labs in Schaumburg, Ill., focusing on increasing the trust level of such devices.
