In today’s highly complex and highly connected embedded systems, it is difficult to get a good feel for the different factors that might affect the “exploitability” of your embedded system or device.
However, I have found that a good 'back of the napkin' way to get a sense of an application’s vulnerabilities is to show it mathematically.
Obviously, your device is, or can be, more vulnerable if it has to interface with more devices. This can be expressed by showing remote exploitability Er as growing with the number of interfaces v, as in Equation 1. Note that r(tau) is always greater than zero, because physical access always counts as one way in. At some point, someone (albeit maybe only a trusted someone) will have physical access to your device.
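Equation 1 itself isn't reproduced in this text, so the following is one plausible rendering consistent with the symbols just defined; the exact form is my assumption, not necessarily the author's:

```latex
E_r = r(\tau) + v, \qquad r(\tau) > 0
```

Here v is the number of interfaces, and the strictly positive r(τ) term keeps Er above zero even when v is zero, reflecting the ever-present physical-access path.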
But there are many more factors to consider. Today's embedded systems have more processing power and are crammed with more features and functions than ever before. There is a lot of data supporting the premise that with more code and more complexity come more errors. Studies show the average number of latent defects per thousand lines of code ranges from 3 to 45. Let's go with 7.5 as a working average and put that into the formula in Equation 2, which shows the approximate number of firmware-related exploits Ec in your system. We'll use n for the lines of code in a module, and M for the number of modules. To practice good computer science, we'll also add a factor for the average complexity V of each module g.
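As a back-of-the-napkin sketch of that bookkeeping, here is the per-module defect tally in code. The module sizes and complexities are invented for illustration, and the function name is mine, not part of any established tool:

```python
# Rough latent-defect estimate: (lines / 1000) * defect density * complexity,
# summed over all M modules. 7.5 defects per KLOC is the working average
# cited above; the module figures below are made up for illustration.

DEFECTS_PER_KLOC = 7.5

def estimated_defects(modules):
    """modules: list of (lines_of_code, avg_complexity) pairs, one per module."""
    return sum((n / 1000.0) * DEFECTS_PER_KLOC * v for n, v in modules)

# A hypothetical three-module device.
device = [(12_000, 1.0), (4_500, 1.8), (30_000, 2.5)]
print(round(estimated_defects(device), 2))  # → 713.25
```

Even with modest numbers, the estimate grows quickly once a large, complex module is added, which is exactly the point of the equation.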
We're doing all this because there is good data showing that bugs are a significant, though not the largest, source of security weaknesses. So let's add a constant, P(e), the average probability of a bug becoming an exploitable vulnerability, into Equation 2.
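Putting the pieces together, Equation 2 presumably reads something like the following; since the figure isn't reproduced here, this is my reconstruction from the symbols defined in the text:

```latex
E_c \approx P(e) \sum_{g=1}^{M} \frac{n_g}{1000} \, d \, V_g
```

where n_g and V_g are the lines of code and average complexity of module g, and d is the latent-defect density (about 7.5 defects per thousand lines, per the working average above).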
If we add a weight to each exploitability factor, p for the exploitability through remote connections and q for the security weaknesses in the code, we get the abridged form of Eric's Software Exploitability Equation in Equation 3. Note that p is much greater than q, because interconnections are typically a greater source of potential exploits than firmware bugs. This is logically true because with fewer interfaces, there are fewer vectors from which to launch an attack. It is not a product relationship, though, because not all weaknesses are exploitable through all vectors.
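Since the figure for Equation 3 isn't shown in this text, here is a rendering consistent with that description, a weighted sum rather than a product (again, the exact form is my assumption):

```latex
E = p\,E_r + q\,E_c, \qquad p \gg q
```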
As I said, this is the abridged form, and I left out several factors. Equation 3 illustrates, albeit in an intentionally oversimplified way, that the number of security weaknesses in your implementation increases with the connectivity to, lines of code in, and complexity of your device.
Remember, though, that I'm showing you a relationship here between exploitability and its many factors; I am not giving you a metric to judge how much attention you should pay to security.
The equation does rightfully imply that a smaller, simpler, isolated system designed to do a particular task should have fewer potential software vulnerabilities than its larger, more connected cousin designed for the identical function. Remember also that I'm still talking about potential software vulnerabilities. Bad assumptions are even more to blame for vulnerabilities than software bugs, but I will cover that in another article.
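To see that implication concretely, here is a toy comparison using the weighted-sum reading of Equation 3. Every number below, the weights p and q, the value of P(e), the interface counts, and the module sizes, is invented purely for illustration:

```python
# Illustrative comparison: E = p*Er + q*Ec, with all inputs assumed.
P_WEIGHT = 10.0          # p: weight on remote/interface exploitability (p >> q)
Q_WEIGHT = 1.0           # q: weight on code-weakness exploitability
DEFECTS_PER_KLOC = 7.5   # working average from the studies cited above
P_EXPLOITABLE = 0.05     # assumed P(e): fraction of bugs that become exploits

def exploitability(interfaces, modules):
    er = 1 + interfaces  # physical access always counts as one way in
    ec = P_EXPLOITABLE * sum((n / 1000.0) * DEFECTS_PER_KLOC * v
                             for n, v in modules)
    return P_WEIGHT * er + Q_WEIGHT * ec

# The same basic function, two implementations (all numbers invented):
small_isolated = exploitability(0, [(8_000, 1.2)])  # one module, no network
big_connected = exploitability(4, [(8_000, 1.2), (40_000, 2.0)])

print(small_isolated < big_connected)  # → True
```

The smaller, isolated device scores lower, as the equation predicts, and most of the gap comes from the heavily weighted interface term rather than the extra code.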
I hope I’ve got you thinking about security for your device, and I would love to hear how it works out for you. Please send your comments and experiences to .
Eric Uner is currently researching next-generation mobile and embedded security architectures for Motorola Labs in Schaumburg, Ill., focusing on increasing the trust level of such devices.