Security fundamentals for embedded software

David Kalinsky

March 24, 2012

What can be done during coding?
During embedded systems programming, developers can strengthen security by avoiding a number of common software vulnerabilities.

Some of us would say these are bugs. But I'd like to call them vulnerabilities here, to emphasize that some tiny software "defect"--perhaps too minor even to be called a bug--might be just what an attacker is looking for in order to mount an attack on your embedded system. Small vulnerabilities can open the window to huge attacks.

Vulnerability #1: Buffer overflow
Far and away, the most widespread security vulnerability in C-language coding is buffer overflow. It could be as simple as writing into element number 256 of a 256-element array.
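
A minimal sketch of just how small that defect can be (the buffer name and loop are mine, purely for illustration):

```c
#include <stdint.h>

#define BUF_LEN 256

static uint8_t rx_buf[BUF_LEN];   /* valid indices are 0..255 */

void clear_buffer(void)
{
    /* BUG: "<=" runs the loop one step too far, so the final pass
       writes rx_buf[256]--one element past the end of the array. */
    for (int i = 0; i <= BUF_LEN; i++) {
        rx_buf[i] = 0;
    }
}
```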

Compilers don't always flag out-of-bounds buffer accesses as software defects. Yet a buffer overflow can lead to more serious consequences: the stack smashing discussed earlier, code injection, or even arc injection--in which an attacker changes the control flow of your program by modifying a return address on the stack. With arc injection, an attacker doesn't have to inject any code at all; he can jump to an arbitrary function in your existing code, or bypass validity checks or assertions.

Here's an example of a buffer overflow attack: An embedded device is required to measure the temperature of water in a swimming pool and to display a histogram showing the percentage of time that the water is at various temperatures. The software developer creates an array of 100 positive integers, each element corresponding to one degree Celsius. Element 0 for 0°C. Element 1 for 1°C, etc. Each time the temperature sensor makes a water temperature measurement, the corresponding element of the array is incremented by 1.

Remember, this is a swimming pool to be used by humans. So the programmer feels safe and secure in designing his temperature array with lots and lots of room beyond the range of water temperature values that a human body can tolerate.

Until one day, an attacker pulls the temperature sensor out of the water and heats it up using a cigarette lighter. As soon as the sensor reports a value of 100°C or greater, the histogram update software writes past the end of the temperature array and corrupts whatever lies beyond it. If there's data there, the attacker has corrupted the data. If there's machine code there, the attacker has corrupted the executable software. In either case, this is a damaging attack. Please note (once again) that it was done without an Internet connection, and without a connection to any external communication line. Just a cigarette lighter.

How can we avoid buffer overflows? This vulnerability is so widespread (and so widely sought-after by attackers) that a multipronged approach is best: prevent, detect, and recover. Prevent buffer overflows by careful input validation: check that a temperature sensor is reporting a value within bounds. In our swimming pool example, explicitly reject any reading below 0°C (ice) or at or above 100°C (superheated vapor), since either would index outside the 100-element array.
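
Here's a sketch of that input check in C. The array name, bin count, and the read_pool_sensor() driver call are hypothetical stand-ins, not part of any particular API:

```c
#define HISTOGRAM_BINS 100   /* bins 0..99, one per degree Celsius */

static unsigned histogram[HISTOGRAM_BINS];

/* Hypothetical driver call: returns the latest water temperature
   reading in whole degrees Celsius. */
extern int read_pool_sensor(void);

void update_histogram(void)
{
    int temp_c = read_pool_sensor();

    /* Validate before indexing: reject ice (< 0) and anything at or
       above boiling (>= 100), either of which would index past the
       end of the array. */
    if (temp_c < 0 || temp_c >= HISTOGRAM_BINS) {
        return;   /* discard the reading; a real system might log it */
    }

    histogram[temp_c]++;
}
```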

Prevent buffer overflows also by avoiding dangerous library functions (like gets()) and exercising extra care with others (like memcpy()).
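
For example, gets() has no way to know how big its destination buffer is, while fgets() takes an explicit size; memcpy() is only as safe as the length check the caller performs. A hedged sketch of both replacements (function and parameter names are mine):

```c
#include <stdio.h>
#include <string.h>

#define CMD_LEN 64

void read_command_line(void)
{
    char cmd[CMD_LEN];

    /* Never: gets(cmd); -- it writes past cmd[] whenever the input
       line is longer than the buffer. fgets() is told the size and
       stops there. */
    if (fgets(cmd, sizeof cmd, stdin) == NULL) {
        return;   /* EOF or read error */
    }
    /* ... parse cmd ... */
}

void copy_payload(char *dst, size_t dst_len,
                  const char *src, size_t src_len)
{
    /* memcpy() checks nothing itself, so bound the copy explicitly
       against the destination's capacity. */
    size_t n = (src_len < dst_len) ? src_len : dst_len;
    memcpy(dst, src, n);
}
```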

Detect buffer overflows by using the idea of "paint": Extend the buffer slightly at both ends and fill the extension areas with unusual content I call "paint"--for example, a trap instruction in your processor's machine language. Then check the paint repeatedly at run time. If the paint has been overwritten, you've detected a buffer overflow.
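
One possible realization of the paint idea, with the pattern, guard-zone size, and names all chosen arbitrarily for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

#define PAINT       0xA5u   /* arbitrary pattern; a trap opcode also works */
#define PAINT_BYTES 4
#define DATA_LEN    128

/* Extend the buffer at both ends with painted guard zones. */
static struct {
    uint8_t front_paint[PAINT_BYTES];
    uint8_t data[DATA_LEN];
    uint8_t back_paint[PAINT_BYTES];
} buf;

void paint_buffer(void)
{
    for (int i = 0; i < PAINT_BYTES; i++) {
        buf.front_paint[i] = PAINT;
        buf.back_paint[i]  = PAINT;
    }
}

/* Check the paint repeatedly at run time--say, from a watchdog or
   housekeeping task. Returns false if either guard zone was hit. */
bool paint_intact(void)
{
    for (int i = 0; i < PAINT_BYTES; i++) {
        if (buf.front_paint[i] != PAINT || buf.back_paint[i] != PAINT) {
            return false;   /* buffer overflow (or underflow) detected */
        }
    }
    return true;
}
```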

Vulnerability #2: Pointer shenanigans
If an attacker can modify a data pointer, then the attacker can point wherever he likes and write whatever he likes. If an attacker can overwrite a function pointer, he is well on his way to executing his own code on your processor.
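
One common defensive pattern, sketched here with hypothetical handler names: rather than calling through a raw function pointer held in writable memory, dispatch through an index into a const table of known-good functions, and validate the index first:

```c
#include <stddef.h>

static void handle_reset(void)  { /* ... */ }
static void handle_status(void) { /* ... */ }

/* A const table of function pointers can be placed in flash/ROM,
   where an attacker who corrupts RAM cannot redirect it. */
static void (* const handlers[])(void) = {
    handle_reset,
    handle_status,
};

void dispatch_command(size_t index)
{
    /* Validate the index: only functions in the table are reachable,
       never an arbitrary address supplied by an attacker. */
    if (index < sizeof handlers / sizeof handlers[0]) {
        handlers[index]();
    }
}
```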

Vulnerability #3: Dynamic memory allocation flaws
It's so easy to write defective dynamic memory allocation code that dynamic memory allocation is forbidden outright in many embedded aerospace and safety-critical systems. Of course, attackers are eager to search out these defects, which represent golden opportunities to violate the security of an embedded system.

Common flaws include double frees, reads and writes of already-freed memory, zero-length allocations, and (once again) buffer overflows.

A flaw that is particularly sensitive in embedded software is neglecting to check whether a memory allocation request succeeded. Some memory allocators (malloc() among them) return zero--a null pointer--instead of a pointer to a memory buffer when they run out of available memory. If application software treats this zero as a valid pointer, it will begin writing to what it thinks is a buffer starting at memory address zero.
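
The defense is a one-line check on every allocation. A minimal sketch (the function and its use of the buffer are hypothetical):

```c
#include <stdlib.h>
#include <string.h>

int queue_sample(const void *sample, size_t len)
{
    void *buf = malloc(len);

    /* malloc() returns NULL--address zero on most platforms--when
       memory is exhausted. Writing through it tramples whatever
       lives at and after address zero. */
    if (buf == NULL) {
        return -1;   /* report the failure instead of writing */
    }

    memcpy(buf, sample, len);
    /* ... hand buf to a consumer that frees it exactly once ... */
    return 0;
}
```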

Many an attacker would be happy to have your software do just that. Attackers know that embedded systems tend to be tightly memory-constrained, so they will try to make a system run out of memory--perhaps by provoking memory leaks, possibly leaking into code they've injected, or by flooding your data-acquisition system with higher-than-normal volumes or rates of data in the hope that the avalanche exhausts your memory capacity. And then, if your software asks for a buffer but neglects to check for allocation failure, it will begin writing a buffer at address zero--trampling whatever was there. If your interrupt enable/disable flags happen to live at that address, for example, this could sever the connection between software and its peripheral hardware interfaces. Essentially, it could dis-embed your embedded system.

Vulnerability #4: Tainted data
Data entering an embedded system from the outside world must not be trusted. Instead, it must be "sanitized" before use.

This is true for entire data streams and for even the simplest of integers. Attackers are on the lookout for extreme values that produce abnormal effects--in particular, values where a digital microprocessor gives a different result from what a human would calculate with pencil and paper. For example, suppose an integer i holds the value 2,147,483,647, the largest value a signed 32-bit integer can represent. If I add 1 to it in a back-of-the-envelope calculation, I get +2,147,483,648. But if my microprocessor executes i++, i wraps around to -2,147,483,648, a large negative number. It wouldn't take long for a clever attacker to leverage this kind of quirk into some kind of havoc in an embedded system.
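
On a typical 32-bit two's-complement processor the wraparound looks like the sketch below. (Strictly speaking, signed overflow is undefined behavior in C, which is exactly why the guard tests before the arithmetic rather than after.)

```c
#include <limits.h>
#include <stdbool.h>

/* Increment a counter only if doing so cannot overflow. */
bool safe_increment(int *i)
{
    if (*i == INT_MAX) {          /* 2,147,483,647 on a 32-bit int */
        return false;             /* i++ would wrap to -2,147,483,648 */
    }
    (*i)++;
    return true;
}
```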

A useful technique for data sanitization is called white listing: describe all possible valid values for a given piece of data, then write code that accepts only those values. All unexpected values are treated as "tainted" and are never used.
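
A sketch of white listing for a single command byte; the protocol and its three valid codes are invented for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical protocol: exactly three command codes are valid. */
static bool is_valid_command(uint8_t cmd)
{
    switch (cmd) {
    case 0x01:   /* START  */
    case 0x02:   /* STOP   */
    case 0x10:   /* STATUS */
        return true;
    default:
        return false;   /* every unexpected value is tainted */
    }
}

void handle_input(uint8_t cmd)
{
    if (!is_valid_command(cmd)) {
        return;   /* discard tainted data before any use */
    }
    /* ... only white-listed values reach this point ... */
}
```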