Building a secure embedded development process

Editor’s Note: As part of an ongoing series excerpted from their book Embedded Systems Security, David and Mike Kleidermacher describe an often-ignored aspect of developing secure software: the underlying software development process and tools that are used.

For critical safety- and security-enforcing components, the software development process must meet a much higher level of assurance than is used for general-purpose components. The embedded systems developer unfamiliar with the secure development process should study proven high-assurance development standards that are used to certify critical embedded systems.

Two noteworthy standards are DO-178B Level A (a standard for ensuring the safety of flight-critical systems in commercial aircraft) and ISO/IEC 15408 (Common Criteria) EAL6/7 or equivalent. A high-assurance development process will cover numerous controls, including configuration management, coding standards, testing, formal design, formal proof of critical functionality, and so on.

Let’s consider a case in which a rogue programmer has the ability to install into aircraft engine software a condition that will shut the engine down at a certain time and date. This software may reside in all the engines for a particular class of aircraft. One aspect of a secure development process is having separate system and software teams develop the redundant aircraft engine control systems. In this way, systemic or rogue errors created by one team are mitigated by the entirely distinct development paths.

Independence of systems, software, and testing teams in accordance with standards also contributes to this secure development process.

Change management
An extremely important aspect of maintaining secure software over the long term is to utilize an effective change management regimen.

A software project may be robust and reliable at the time of its first release, only to endure change rot over ensuing years as new features, not part of the original design, are hacked in, causing the code to become difficult to understand, maintain, and test. Time-to-market demands exacerbate the problem, influencing developers to make hasty changes to the detriment of reliability.

Peer reviews
A critical aspect of effective change management is the use of peer code reviews. A common peer code review sequence consists of the code author preparing a presentation describing the code change, followed by a face-to-face meeting with one or more developers and development managers involved in the project.

The developer presents the software design in question, and the others try to poke holes in the code. These meetings can be extremely painful and time consuming. Audience members sometimes feel compelled to nitpick every line of code to demonstrate their prowess.

Tip: Use asynchronous code reviews with e-mail correspondence or carefully controlled live meetings.

Recording the reviewer’s identification in the configuration management system also provides an electronic paper trail for security certification auditors. Another advantage of partitioning (isolating components in separate memory spaces) is the ability to minimize process requirements across the system. In any large software project, there is a continuum of criticality among the various pieces of code.

By way of example, let’s consider an excimer laser system used in semiconductor manufacturing. The laser itself is controlled by a highly critical, real-time software application. If this application faults, the laser in turn may fail, destroying the semiconductor.

In addition, the system contains a communications application that uses CORBA over TCP/IP to receive commands and to send diagnostic data over a network. If the communications application fails, then the system may become unavailable or diagnostic data may be lost, but there is no possibility for the laser to malfunction.

If both applications were built into a single, monolithic system in which all code executes in the same memory space, then the entire software content must be developed at the highest levels of quality and reliability. If the applications are partitioned, however, the non-critical communications application development can be subjected to a lower level of rigor, saving time to market and development cost.

Obviously, we do not advocate a free-for-all on components that are not considered critical; management should use judgment regarding which controls to apply to various software teams. When the process controls in non-critical applications are reduced, time to market for the overall system can be improved without jeopardizing reliability where it counts.

Tip: Apply a level of process rigor, including code reviews and other controls, that is commensurate with the criticality level of the component.

Security-oriented peer review
Most peer reviews are spent looking for coding bugs, design flaws, and violations of coding standards. While these activities contribute to more reliable and hence secure software, most embedded software organizations do not perform reviews based specifically on security analysis. When a developer presents a new design or piece of software, the reviewers should consider security-relevant characteristics. For example:

Least privilege: Can the software be refactored such that the least critical components are provided the least amount of privilege in terms of access to resources? Reducing the privilege of a component decreases its attack surface and reduces its assurance requirements, improving efficiency in development and certification (if applicable). A brief code sketch follows this list.

Attack potential: Think in terms of an attacker, whether system-resident (malware) or external (network-borne): where are the access points and weaknesses in the system, and how might an attacker attempt to compromise them?

As in poker, success requires putting oneself in the opponent’s frame of reference and training to think like one’s opponent. Over time, developers with this mindset become proficient at predicting attack potential and therefore can place controls to prevent security failures.

Sophisticated attacks: Even if the code under review is not protecting the power grid, let’s consider advanced security concerns such as side and covert channels, transmission security, and DMA corruption via system peripherals. Developers trained to consider sophisticated attack threats will be better prepared to handle the components that demand high robustness against such threats.
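
To make the least-privilege item above concrete, here is a minimal POSIX sketch (our illustration, not a mechanism prescribed by the book) of a component that needs root only at startup, say to bind a privileged port, and permanently sheds that privilege before doing anything else; the account name laserctl is hypothetical:

   #include <pwd.h>
   #include <stdio.h>
   #include <stdlib.h>
   #include <unistd.h>

   /* Permanently drop root privileges to an unprivileged account. */
   static void drop_privileges(const char *user)
   {
     struct passwd *pw = getpwnam(user); /* e.g., "laserctl" (hypothetical) */
     if (pw == NULL) {
       fprintf(stderr, "unknown user: %s\n", user);
       exit(EXIT_FAILURE);
     }
     /* Order matters: drop the group first, because after setuid()
        the process no longer has the right to change its group. */
     if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
       perror("failed to drop privileges");
       exit(EXIT_FAILURE);
     }
     /* From here on, the process cannot regain root; a compromise of
        this component yields only the account's limited resources. */
   }

Everything executed after the drop runs with only the resources granted to the unprivileged account, which is exactly the attack-surface reduction the review should be probing for.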

In fact, because peer reviews account for a significant portion of group interaction in a development organization, they are an ideal venue for engendering the kind of vigilance needed to build secure embedded systems.

Tip: By making security a part of peer reviews, management will create a culture of security focus throughout the development team.

Development tool security
An Easter egg is an intentionally undocumented message, joke, or capability inserted into a program by the program’s developers, as an added challenge to the user or simply just for fun. Easter eggs are commonly found in video games. The Linux packaging tool apt-get has this bovine egg:

   > apt-get moo
            (__)
            (oo)
      /------\/
     / |    ||
    *  /\---/\
       ~~   ~~
   ..."Have you mooed today?"...

Cute. Funny. But what if a developer aims to insert something malicious? How can an organization be protected from this insider threat? How can the organization ensure that malware is not inserted by third-party middleware or the compiler used to build the software?

Developers and users require assured bit provenance: confidence that every single binary bit of production software originates from its corresponding known-good version of source code. This is a critical aspect of software security that many embedded systems developers never consider.
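
One common building block, sketched below as our own illustration (it assumes OpenSSL 1.1 or later; the function name is ours), is to record a cryptographic digest of every production binary in the configuration management system so that any later deviation from the known-good bits is detectable:

   #include <openssl/evp.h>
   #include <stdio.h>

   /* Compute the SHA-256 digest of a build artifact so it can be
      recorded in, and later checked against, the CM system. */
   int hash_artifact(const char *path,
                     unsigned char digest[EVP_MAX_MD_SIZE],
                     unsigned int *len)
   {
     FILE *f = fopen(path, "rb");
     if (f == NULL)
       return -1;

     EVP_MD_CTX *ctx = EVP_MD_CTX_new();
     EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

     unsigned char buf[4096];
     size_t n;
     while ((n = fread(buf, 1, sizeof buf, f)) > 0)
       EVP_DigestUpdate(ctx, buf, n);    /* stream the file through */

     EVP_DigestFinal_ex(ctx, digest, len);
     EVP_MD_CTX_free(ctx);
     fclose(f);
     return 0;
   }

A digest by itself does not prove that a binary was built from known-good source, but it anchors the provenance chain: once the digest of a verified build is recorded, any subsequent tampering with the artifact is evident.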

High-assurance security and safety standards, such as DO-178B Level A (aircraft safety) and Common Criteria Evaluation Assurance Level 7 (IT security), require the ability to re-create the exact bits of the final production software from the configuration management system.

Ancillary items, not just product source code, must be configuration managed. For example, any support files (e.g., scripts) used for the creation of production images must be strictly controlled. And the tool chain used to build the software must also be covered. Failure to rigorously control the entire development system can lead to serious vulnerabilities, both inadvertent and malicious.

Case study: The Thompson Hack
One such subversion was performed by Ken Thompson and reported famously in his Turing Award acceptance speech. Thompson inserted a back door into UNIX by subverting the compiler used to build UNIX.

Thompson’s modification caused the UNIX login password verification to match on a string of Thompson’s choosing in addition to the normal database validation. In essence, Thompson changed the UNIX login program that used to look something like this:

   int login(unsigned int uid, char *password)
   {
     if (strcmp(pass_dbase(uid), password) == 0)
       return true;  // password match, login ok
     else
       return false; // password mismatch, login fail
   }

into something that looks like this:

   int login(unsigned int uid, char *password)
   {
     if (strcmp(pass_dbase(uid), password) == 0 ||
         strcmp("ken_thompson", password) == 0)
       return true;  // password match, login ok
     else
       return false; // password mismatch, login fail
   }

However, changing the UNIX source code would be too easy to detect, so Thompson modified the compiler to insert the back door. With compiler insertion, examination of the UNIX source code would not be sufficient to detect the bug.

The compiler Trojan would be a code fragment that examines the internal syntax tree of the compiled program, looking for the specific login password check code sequence and replacing it with the back door:

   if (!strcmp(function_name(), "login")) {
     if (OBJ_TYPE(obj) == IF_STATEMENT &&
         OBJ_TYPE(obj->left) == FUNCTION &&
         !strcmp(OBJ_NAME(obj->left), "strcmp")) {
       Object func = GET_ARG(1, obj->left);
       if (OBJ_TYPE(func) == FUNCTION &&
           !strcmp(OBJ_NAME(func), "pass_dbase")) {
         // insert back door: OR the original password comparison
         // with a comparison against Thompson's chosen string
         obj = MAKEOBJ(ORCMP, obj,
                 MAKEOBJ(FUNCTION, "strcmp",
                   MAKEOBJ(STRING, "ken_thompson"),
                   GET_ARG(2, obj->left)));
       }
     }
   }

If the compiler is configuration managed and/or peer reviewed, Thompson’s change might be detected by inspection. But if the compiler source code is not under configuration management or is very complicated, the Trojan could go unnoticed for some time.

Also, who would think to question code committed by the esteemed Ken Thompson? One lesson learned is that those with the most trust can cause the most damage, another argument for enforcing a least-privilege mentality throughout the engineering department.

Assuming that the Trojan in the compiler might be detected, Thompson took his attack a step further and taught the compiler to add the Trojan into itself (two levels of indirection). In other words, the preceding compiler Trojan was inserted not into the source code of the compiler, but rather into the object code of the compiler when compiling itself.

The Trojan is now said to be self-reproducing. While this may sound sophisticated, it really is not difficult once a developer has a basic understanding of how the compiler works (which, of course, Ken Thompson did): simply locate the appropriate compiler phase and insert the preceding code fragment into the target’s syntax tree when the target is the compiler.

There are ways to detect this more advanced attack. Code inspection is again an obvious method. Another approach is to build the compiler from source (under configuration management), build from the same source again with this new compiler, and require that the two built compiler binaries are bit-for-bit identical. This binary tool comparison method, called bootstrapping, is a cheap and effective method to detect some classes of development tool vulnerabilities.
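
The comparison step of the bootstrap test is trivial to automate; here is a sketch (ours; the names stage1 and stage2 denote the first- and second-generation compiler binaries):

   #include <stdio.h>

   /* Bootstrap test: return 0 if the two compiler binaries are
      bit-for-bit identical, 1 if they differ, -1 on I/O error. */
   int bootstrap_compare(const char *stage1, const char *stage2)
   {
     FILE *a = fopen(stage1, "rb");
     FILE *b = fopen(stage2, "rb");
     if (a == NULL || b == NULL) {
       if (a) fclose(a);
       if (b) fclose(b);
       return -1;
     }

     int ca, cb;
     do {
       ca = fgetc(a);
       cb = fgetc(b);
     } while (ca == cb && ca != EOF);  /* stop at mismatch or EOF */

     fclose(a);
     fclose(b);
     /* Identical only if both streams reached EOF together. */
     return (ca == cb) ? 0 : 1;
   }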

With Thompson’s Trojan just inserted into the compiler source code, the first binary will not contain the Trojan code, but the second one will, causing the bootstrap test to fail.

Of course, this approach works only if the compiler and the compiler’s compiler have the same target processor back end, that is, the compiler is self-hosting. Since most UNIX systems have self-hosting compilers, this generational test is effective.

However, to cover his tracks even further, Thompson removed the compiler’s Trojan source code, leaving only a subverted compiler binary that was installed as the default system compiler. Subsequent bootstrapping tests would fail to detect the subversion since both the first- and second-generation compilers contain the Trojan binary code.

This attack shows how sophisticated attackers can thwart even good development tool security. Ideally, we would like to formally prove correspondence between a tool’s source code and its resulting compiled object code. A practical alternative is to require a bootstrap every time a default tool chain component is replaced. Performing the bootstrap test for every new compiler will generate a chain of trust that would have prevented Thompson’s subversion if this process had been in place prior to his attack.

Modified condition/decision coverage (MC/DC) validation of the UNIX login program would also have detected the Thompson hack, since the comparison to Thompson’s back door password string will never succeed in MC/DC testing. However, a good defense-in-depth strategy should not assume that testing will find all forms of development tool subversion.
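
To see why, recall that the subverted login decision has the form A || B. MC/DC requires a test set in which each condition independently determines the outcome of the decision, so some test must make the login succeed through B alone; no such test can be derived from the requirements, and the permanently uncovered condition exposes the extra logic. A sketch of the argument (our illustration; login is the earlier example, and correct_password_for is a hypothetical test helper):

   #include <assert.h>

   extern int login(unsigned int uid, char *password);
   extern char *correct_password_for(unsigned int uid); /* hypothetical */

   /* Decision under test: A || B, where
        A = strcmp(pass_dbase(uid), password) == 0   (legitimate check)
        B = strcmp("ken_thompson", password) == 0    (injected back door)
      MC/DC demands three cases:
        case 1: A=true,  B=false -> success (A drives the outcome)
        case 2: A=false, B=false -> failure
        case 3: A=false, B=true  -> success (B drives the outcome) */
   void test_login_mcdc(void)
   {
     assert(login(100, correct_password_for(100)));  /* case 1: succeeds */
     assert(!login(100, "not-the-password"));        /* case 2: fails */
     /* Case 3 cannot be written from the requirements: no specified
        input should make B true. Coverage analysis therefore reports
        an unexercised condition, flagging logic no requirement
        explains. */
   }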

Of course, the configuration management system must be protected from tampering, either via remote network attack or by physical attack of the computers that house the configuration system. Some organizations may go as far as to require that active configuration management databases be kept in a secure vault, accessible only by authorized security personnel.

If an organization is not thinking about development security, now is the time to start.

David Kleidermacher, Chief Technology Officer of Green Hills Software, joined the company in 1991 and is responsible for technology strategy, platform planning, and solutions design. He is an authority in systems software and security, including secure operating systems, virtualization technology, and the application of high-robustness security engineering principles to solve computing infrastructure problems. Mr. Kleidermacher earned his bachelor of science in computer science from Cornell University.

This article is excerpted from Embedded Systems Security, by David and Mike Kleidermacher, used with permission from Newnes, a division of Elsevier. Copyright 2012. All rights reserved. For more information on this title and other similar books, please visit www.newnespress.com.
