Firmware Security – Preventing memory corruption and injection attacks


Editor’s Note: Connected devices that form the backbone of the internet of things (IoT) present multiple vulnerabilities for penetration by hackers. To mitigate those threats to the underlying firmware in those devices, developers need to be familiar with a wide range of security techniques. Taken from the book IoT Penetration Testing Cookbook, by Aaron Guzman and Aditya Gupta, this series of articles walks developers through best practices for firmware protection.

In this first installment of the series, the authors discuss mechanisms for preventing memory-corruption vulnerabilities and injection attacks in firmware.

Adapted from IoT Penetration Testing Cookbook, by Aaron Guzman and Aditya Gupta.

Chapter 8. Firmware Security Best Practices
By Aaron Guzman and Aditya Gupta

In this chapter, we will cover the following recipes:

  • Preventing memory-corruption vulnerabilities
  • Preventing injection attacks
  • Securing firmware updates
  • Securing sensitive information
  • Hardening embedded frameworks
  • Securing third-party code and components


Embedded software is the core of all that is considered IoT, although embedded application security is often not treated as a high priority by embedded developers and IoT device makers. This may be due to a lack of secure coding knowledge or to other challenges outside of a team’s code base. Other challenges developers face may include, but are not limited to, the Original Design Manufacturer (ODM) supply chain, limited memory, a small stack, and the challenge of pushing firmware updates securely to an endpoint. This chapter provides practical best-practice guidance developers can incorporate in embedded firmware applications. As per OWASP’s Embedded Application Security project, embedded best practices consist of:

  • Buffer and stack overflow protection
  • Injection attack prevention
  • Securing firmware updates
  • Securing sensitive information
  • Identity management controls
  • Embedded framework and C-based toolchain hardening
  • Usage of debugging code and interfaces
  • Securing device communications
  • Usage of data collection and storage
  • Securing third-party code
  • Threat modeling

This chapter will address several of the preceding best practices, tailored mostly towards a POSIX environment; however, the principles are designed to be platform agnostic.

Preventing memory-corruption vulnerabilities

When using lower-level languages such as C, there is a high chance of memory-corruption bugs arising if bounds are not properly checked and validated by developers programmatically. Avoiding known dangerous functions and APIs helps guard against memory-corruption vulnerabilities within firmware. For example, a non-exhaustive list of known, unsafe C functions consists of: strcat, strcpy, sprintf, scanf, and gets.

Common memory-corruption vulnerabilities, such as buffer overflows, can target either the stack or the heap. The impact of these memory-corruption vulnerabilities, when exploited, differs per operating system platform. For example, commercial RTOS platforms such as QNX Neutrino isolate each process and its stack from the filesystem, minimizing the attack surface. For common embedded Linux distributions, however, this may not be the case. Buffer overflows in embedded Linux may result in arbitrary execution of malicious code and modification of the operating system by an attacker. In this recipe, we will show how tools can help with detecting vulnerable C functions, and we will also provide security controls along with best practices for preventing memory-corruption vulnerabilities.

Getting ready

For this recipe, the following tool will be used:

  • Flawfinder: Flawfinder is a free C/C++ static code analysis tool that reports potential security vulnerabilities.

How to do it…

Common Linux utilities are helpful for searching through C/C++ code files, although commercially available source code analysis tools, with IDE plugins developers can use, do a much better job of preventing memory-corruption vulnerabilities. For demonstration purposes, the following steps show how to search through code files for a list of predefined vulnerable function calls and rules with grep as well as with flawfinder.

  1. To discover unsafe C functions, several methods can be used. The simplest is a grep expression similar to the example shown as follows:
$ grep -E '(strcpy|strcat|sprintf|strlen|memcpy|fopen|gets)' code.c

This expression can be tweaked to be more intelligent or wrapped in a script that can be executed per build or on an ad-hoc basis.

  2. Alternatively, free tools such as flawfinder can be used to search for vulnerable functions by calling flawfinder with the path to the code, as shown in the following example:
$ flawfinder fuzzgoat.c
Flawfinder version 1.31, (C) 2001-2014 David A. Wheeler. 
Number of rules (primarily dangerous function names) in C/C++ 
ruleset: 169
Examining fuzzgoat.c 
fuzzgoat.c:1049: [4] (buffer) strcpy:
Does not check for buffer overflows when copying to destination (CWE-120).
Consider using strcpy_s, strncpy, or strlcpy (warning, strncpy is easily misused).
    fuzzgoat.c:368: [2] (buffer) memcpy:
    Does not check for buffer overflows when copying to destination (CWE-120).
    Make sure destination can always hold the source data. 
fuzzgoat.c:401: [2] (buffer) sprintf:
    Does not check for buffer overflows (CWE-120). Use sprintf_s, snprintf, or vsnprintf. Risk is low because the source has a constant maximum length.
fuzzgoat.c:1036: [2] (buffer) strcpy:
    Does not check for buffer overflows when copying to destination (CWE-120).
    Consider using strcpy_s, strncpy, or strlcpy (warning, strncpy is easily misused). Risk is low because the source is a constant string.
fuzzgoat.c:1041: [2] (buffer) sprintf:
    Does not check for buffer overflows (CWE-120). Use sprintf_s, snprintf, or vsnprintf. Risk is low because the source has a constant maximum length.
fuzzgoat.c:1051: [2] (buffer) strcpy:
    Does not check for buffer overflows when copying to destination (CWE-120).
    Consider using strcpy_s, strncpy, or strlcpy (warning, strncpy is easily misused). Risk is low because the source is a constant string.
Hits = 24
Lines analyzed = 1082 in approximately 0.02 seconds (59316 lines/second)
Physical Source Lines of Code (SLOC) = 765
Hits@level = [0] 0 [1] 0 [2] 23 [3] 0 [4] 1 [5] 0
Hits@level+ = [0+] 24 [1+] 24 [2+] 24 [3+] 1 [4+] 1 [5+] 0
Hits/KSLOC@level+ = [0+] 31.3725 [1+] 31.3725 [2+] 31.3725 [3+] 1.30719 [4+] 1.30719 [5+] 0
Minimum risk level = 1
Not every hit is necessarily a security vulnerability.
There may be other security vulnerabilities; review your code! 
See 'Secure Programming for Linux and Unix HOWTO' for more information.
  3. Upon discovering vulnerable C functions in use, you must incorporate safe alternatives. For example, the following vulnerable code uses the unsafe gets() function, which does not check buffer lengths:
#include <stdio.h>

int main () {
    char userid[8];
    int allow = 0;
    printf("Enter your userID, please: ");
    gets(userid);               /* unsafe: no bounds checking */
    if (grantAccess(userid)) {  /* grantAccess() defined elsewhere */
        allow = 1;
    }
    if (allow != 0) {
        privilegedAction();     /* privilegedAction() defined elsewhere */
    }
    return 0;
}
  4. The userid buffer can be overrun by any input longer than eight characters, such as a buffer overflow (BoF) exploit payload that redirects execution to attacker-controlled functions. To mitigate overrunning the buffer, the fgets() function can be used as a safe alternative. The following example code shows how to use fgets() securely and allocate memory correctly:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define LENGTH 8

int main () {
    char *userid, *nlptr;
    int allow = 0;

    userid = malloc(LENGTH * sizeof(*userid));
    if (!userid)
        return EXIT_FAILURE;
    printf("Enter your userid, please: ");
    fgets(userid, LENGTH, stdin);   /* reads at most LENGTH - 1 characters */
    /* strip the trailing newline, if present */
    nlptr = strchr(userid, '\n');
    if (nlptr)
        *nlptr = '\0';

    if (grantAccess(userid)) {      /* grantAccess() defined elsewhere */
        allow = 1;
    }
    if (allow != 0) {
        privilegedAction();         /* privilegedAction() defined elsewhere */
    }
    free(userid);
    return 0;
}

The same mitigations can be used with other safe alternative functions such as snprintf(), strlcpy(), and strlcat(). Depending on the operating system platform, some of the safe alternatives may not be available. It is important to perform your own research to determine safe alternatives for your specific architecture and platform. Intel has created an open source cross-platform library called safestringlib to prevent the use of these insecure banned functions and provide safe replacement functions. For more details on safestringlib, visit its GitHub page.

Other memory security controls can be used to prevent memory-corruption vulnerabilities, such as the following:

  • Make use of secure compiler flags such as -fPIE, -fstack-protector-all, -Wl,-z,noexecstack, -Wl,-z,noexecheap, and others that may depend on your specific compiler version.
  • Prefer system-on-chips (SoC) and microcontrollers (MCU) that contain memory management units (MMU). MMUs isolate threads and processes to lessen the attack surface if a memory bug is exploited.
  • Prefer system-on-chips (SoC) and microcontrollers (MCU) that contain memory protection units (MPU). MPUs enforce access rules for memory and separate processes as well as enforce privilege rules.
  • If no MMU or MPU is available, monitor stack usage by painting the stack with a known bit pattern at boot and determining how much of the stack no longer contains that pattern.
  • Be mindful of what is being placed in buffers, and free buffer locations after use.

Exploiting memory vulnerabilities on systems with address space layout randomization (ASLR) and other stack controls takes a lot of effort for attackers, although it is still possible under certain circumstances. Ensuring code is resilient and incorporates a defense-in-depth approach for data placed in memory will help the security posture of the embedded device.


Preventing injection attacks

Injection attacks are among the top vulnerabilities in any web application, but especially in IoT systems; in fact, injection has been rated in the top two of the OWASP Top 10 since 2010. There are many types of injection attacks, such as operating system (OS) command injection, cross-site scripting (for example, JavaScript injection), SQL injection, and log injection, as well as others such as expression language injection. In IoT and embedded systems, the most common types are OS command injection, which arises when an application accepts untrusted user input and passes that value to a shell command without input validation or proper escaping, and cross-site scripting (XSS). This recipe will show you how to mitigate command injection attacks by ensuring all untrusted data and user input is validated and sanitized, and that alternative safe functions are used.

How to do it…

Command injection vulnerabilities are not difficult to test for, statically or dynamically, when an IoT device is running. Firmware can call system(), exec(), and similar variants to execute OS commands, or call an external script that runs OS calls from interpreted languages such as Lua. Command injection vulnerabilities can also arise from buffer overflows. The following steps and examples show code vulnerable to command injection as well as how to mitigate it. Afterwards, we will list common security controls to prevent common injection attacks.

  1. The following snippet of code invokes the dangerous system() C function to remove the .cfg file in the home directory. In the event an attacker has the ability to control the function, subsequent shell commands may be concatenated to perform unauthorized actions. Additionally, an attacker can manipulate environment variables to delete any file ending in .cfg:
#include <stdlib.h>

void func(void) {
    system("rm ~/.cfg");
}
  2. To mitigate the preceding vulnerable code, the unlink() function will be used instead of the system() function. The unlink() function is not susceptible to command injection attacks, and it removes a symlink itself rather than the files or directories named by the symlink's contents. This reduces its susceptibility to symlink attacks, although it does not thwart them entirely; if a named directory is the same, it could still be deleted. The unlink() function does thwart command injection attacks, and similar contextual functions should be used rather than executing operating system calls:
#include <pwd.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>

void func(void) {
    const char *file_format = "%s/.cfg";
    size_t len;
    char *pathname;
    struct passwd *pwd;

    pwd = getpwuid(getuid());
    if (pwd == NULL) {
        /* Handle error */
        return;
    }

    len = strlen(pwd->pw_dir) + strlen(file_format) + 1;
    pathname = (char *)malloc(len);
    if (NULL == pathname) {
        /* Handle error */
        return;
    }
    int r = snprintf(pathname, len, file_format, pwd->pw_dir);
    if (r < 0 || (size_t)r >= len) {
        /* Handle error */
    } else if (unlink(pathname) != 0) {
        /* Handle error */
    }
    free(pathname);
}

There are several other methods to mitigate injection attacks. Below is a list of common best practices and controls for preventing them:

  • Avoid invoking OS calls directly if possible.
  • If needed, whitelist accepted commands and validate the input values prior to execution.
  • Use lookup maps of numbers-to-command-strings for user driven strings that may be passed to the operating system such as {1:ping -c 5}.
  • Perform static code analysis on code bases and alert when languages use OS commands such as os.system().
  • Consider all user input untrusted, and output-encode characters for data rendered back to the user (for example, convert & to &amp;, < to &lt;, > to &gt;, and so on).
  • For XSS, use HTTP response headers such as X-XSS-Protection and Content-Security-Policy with the appropriate directives configured.
  • Ensure debug interfaces with command execution are disabled on production firmware builds.

The preceding controls always require testing prior to firmware being used in a production environment. With injection attacks, devices and users are put at risk of being taken over by attackers and conscripted into rogue botnets. We saw such events happen in 2017 with the IoT Reaper and Persirai botnets. This is only the beginning.


Reprinted with permission from Packt Publishing. Copyright © 2017 Packt Publishing

Aaron Guzman is a principal security consultant from the Los Angeles area with expertise in web app security, mobile app security, and embedded security. He has shared his security research at a number of worldwide conferences and is a chapter leader for the Open Web Application Security Project (OWASP) Los Angeles chapter and the Cloud Security Alliance SoCal (CSA SoCal) chapter. He has contributed to many IoT security guidance publications from CSA, OWASP, PRPL, and a number of others. Aaron leads the OWASP Embedded Application Security project, providing practical guidance to address the most common firmware security bugs for the embedded and IoT community. 

Aditya Gupta is the founder of Attify, and an IoT and mobile security researcher. He is also the creator of the popular training course Offensive IoT Exploitation, and the founder of the online store for hackers Attify-Store. Gupta has also published security research papers, authored tools, and has spoken at numerous conferences. In his previous roles, he has worked with various organizations helping to build their security infrastructure and internal automation tools, identify vulnerabilities in web and mobile applications, and lead security planning. 
