
Security fundamentals for embedded software

David Kalinsky

March 24, 2012

I was preparing for a trip to the Eastern European city where my parents had lived as children. I had never been there. I googled the name of the city, and was quickly led to a story that was surprising and chilling: A high school student there had modified a TV remote control so that it could control the city's tram system--thus converting the urban railways into his own giant model train set. While switching tracks using his infrared gadget, this kid caused trams to derail. Twelve people were injured in one derailment.1

Recently, new terms like Stuxnet and Duqu have entered our lexicon. Embedded systems, including those that do supervisory control and data acquisition (SCADA), are under relentless security attack.

Many embedded software developers feel that embedded systems security should be handled at the systems-engineering level or by the hardware that surrounds their software. And indeed many things can be done at those levels, including:

•    Secure network communication protocols.
•    Firewalls.
•    Data encryption.
•    Authentication of data sources.
•    Hardware-assisted control-flow monitoring.



But these traditional techniques aren't enough, as was frighteningly described in the DesignCon East 2011 talk "Strong Encryption and Correct Design are Not Enough: Protecting Your Secure System from Side Channel Attacks." The speaker outlined how power-consumption measurements, electromagnetic leaks, acoustic emissions, and timing measurements can give attackers information they can use to attack your embedded device.

Clearly, then, system-level and hardware defenses are not enough. Most security attacks exploit vulnerabilities within application software--vulnerabilities that are introduced into our embedded systems during software design and development. Since system-level and hardware defenses against security attacks are far from perfect, we need to build a third line of defense by dealing with the vulnerabilities in our application software.

While this software line of defense will surely be less than perfect, we need to work on it with the immediate objective of shrinking the "attack windows" that exist in our software. The very first step is to try to think like an attacker: Ask how an attacker could exploit your system and your software in order to penetrate it. You might call this a threat analysis. Use the results to describe what your software should not do. You might call those abuse cases. Use them to plan how to make your software better resist, tolerate, or recover from attacks.

Don't forget that our attackers have a big advantage when it comes to embedded systems: Most embedded software has severe execution-time constraints, often a mixture of hard real-time and soft real-time tasks. This coaxes us to design application software that is "lean and mean," trimming run-time limit checking and reasonableness checking (for example, invariant assertions) to a minimum in order to meet timing requirements. Our attackers have no such execution-time constraints: They are perfectly happy to spend weeks or months researching, preparing, and running their attacks--possibly trying the same attack millions of times in the hope that one of those times it might succeed, or possibly trying a different attack each day until one hits an open "attack window."
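
For illustration only, here is the kind of run-time reasonableness check that is often the first thing trimmed when deadlines are tight; the function name and temperature limits are invented for this sketch:

#include <assert.h>
#include <stdint.h>

/* Illustrative limits for a hypothetical temperature channel. */
#define TEMP_MIN_C  (-40)
#define TEMP_MAX_C  (125)

int32_t record_temperature(int32_t raw_celsius)
{
    /* Invariant assertion: reject physically impossible readings rather
     * than letting them propagate into control decisions. Checks like
     * this cost cycles, which is why they tend to get cut first.       */
    assert(raw_celsius >= TEMP_MIN_C && raw_celsius <= TEMP_MAX_C);

    return raw_celsius;   /* filtering and storage elided */
}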

How can attackers attack via our own software?
Quite often embedded software developers dismiss the issue of embedded software security, saying: "Hey, our device will never connect to the Internet or to any other external communication link. So we're immune to attack." Unfortunately, this is naïve and untrue. I'd like to present a counterexample:

Many embedded devices use analog-to-digital converters (ADCs) for data acquisition. These ADCs may be sampled on a regular timed basis, and the data samples stored by application software in an array. Application software later processes the array of data. But an attacker could view this in a totally different way: "What if I fed the ADC with electrical signals that, when sampled, would be exactly the hexadecimal representation of executable code of a nasty program I could write?" In that way, the attacker could inject some of his software into your computer. No network or Internet needed.
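
A minimal sketch of that data-acquisition pattern might look like the fragment below; adc_read_sample() and adc_timer_isr() are invented names standing in for the hardware-specific details:

#include <stdint.h>

#define NUM_SAMPLES 64

extern uint16_t adc_read_sample(void);   /* hardware-specific; assumed */

static uint16_t samples[NUM_SAMPLES];
static uint32_t sample_index;

/* Invoked on every tick of the hardware timer that paces the ADC. */
void adc_timer_isr(void)
{
    /* The attacker cannot touch this code, but he fully controls the
     * values it stores, simply by controlling the analog signal on
     * the ADC input pin.                                             */
    samples[sample_index++] = adc_read_sample();
}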

Seems like a lot of work to build an "ADC Code Injector" device just for this purpose. But the attacker might not be just a high-school kid. He might be a big industrial espionage lab, or a large, well-funded team working at the national laboratory of a foreign government.



Figure 1: Typical normal stack layout.
Now, how could he get your processor to execute the program he's injected? He might gamble that your software stores the ADC data array on a stack (perhaps using alloca() or malloca()). If his luck is good, he could cause an array overflow, possibly by toying with the hardware timer that controls the ADC data sampling. A typical normal stack layout is shown in Figure 1.
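
Here is a hedged sketch of that stack-allocated variant; alloca() is real, but collect_burst(), adc_read_sample(), and the buffer size are invented for this illustration:

#include <alloca.h>
#include <stdint.h>

extern uint16_t adc_read_sample(void);   /* hardware-specific; assumed */

#define EXPECTED_SAMPLES 64

void collect_burst(uint32_t count)   /* count paced by the ADC timer */
{
    /* The buffer lives in this function's stack frame, just below the
     * saved return address (see Figure 1).                            */
    uint16_t *buf = alloca(EXPECTED_SAMPLES * sizeof(uint16_t));

    /* If the attacker can push 'count' past EXPECTED_SAMPLES--for
     * example by toying with the sampling timer--these writes run off
     * the end of the buffer and into the rest of the stack frame
     * (see Figure 2).                                                  */
    for (uint32_t i = 0; i < count; i++) {
        buf[i] = adc_read_sample();
    }

    /* ... process buf ... */
}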


Figure 2: Stack is corrupted after array overflow.
If the attacker succeeds in causing an array overflow, the stack could become corrupted, as shown in Figure 2. Note that "return address" was stored on the stack at a location beyond the end of the array.

If the attacker plans the corruption just right, the overflow will reach the location on the stack where the current return address was stored, and he can overwrite that location with a pointer to his own code. As a result, when the return address is used by your code, control will pass to the attacker's code. Suddenly his code is executing on your processor, instead of yours.

This is called a stack smashing attack. Please note that it was done in this example without an Internet connection, and without a connection to any external communication line.
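
For what it's worth, a simple limit check on the externally influenced sample count--sketched below with the same invented names as the earlier fragment--is enough to close this particular window, at the cost of one comparison per burst:

#include <alloca.h>
#include <stdint.h>

extern uint16_t adc_read_sample(void);   /* hardware-specific; assumed */

#define EXPECTED_SAMPLES 64

void collect_burst(uint32_t count)
{
    uint16_t *buf = alloca(EXPECTED_SAMPLES * sizeof(uint16_t));

    /* Clamp the externally influenced count before using it as a loop
     * bound, so the writes can never run past the end of the buffer.  */
    if (count > EXPECTED_SAMPLES) {
        count = EXPECTED_SAMPLES;   /* or flag an error and bail out */
    }

    for (uint32_t i = 0; i < count; i++) {
        buf[i] = adc_read_sample();
    }

    /* ... process buf ... */
}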

Of course, it could have been helpful for our attacker to have the source code for your embedded software--as a disgruntled ex-employee might. But I think a patient and resourceful attacker team could develop this kind of attack even without your source code.

Can you think of an easier way for an attacker to develop an attack on your current project?
