Building a secure embedded development process

Editor’s Note: As part of an ongoing series excerpted from their book Embedded Systems Security, David and Mike Kleidermacher describe the underlying software development process and tools for developing secure software.
For critical safety- and security-enforcing components, the software development process must meet a much higher level of assurance than is used for general-purpose components. The embedded systems developer unfamiliar with the secure development process should study proven high-assurance development standards that are used to certify critical embedded systems.
Two noteworthy standards are DO-178B Level A (a standard for ensuring the safety of flight-critical systems in commercial aircraft) and ISO/IEC 15408 (Common Criteria) EAL6/7 or equivalent. A high-assurance development process will cover numerous controls, including configuration management, coding standards, testing, formal design, formal proof of critical functionality, and so on.
Consider a case in which a rogue programmer is able to install into aircraft engine software a condition that shuts the engine down at a certain time and date. That software may reside in all the engines of a particular class of aircraft. One aspect of a secure development process is having separate systems and software teams develop the redundant aircraft engine control systems. In this way, systemic or rogue errors introduced by one team are mitigated by the entirely distinct development paths.
Independence of systems, software, and testing teams in accordance with standards also contributes to this secure development process.
An extremely important aspect of maintaining secure software over the long term is to utilize an effective change management regimen.
A software project may be robust and reliable at the time of its first release, only to endure change rot over ensuing years as new features, not part of the original design, are hacked in, causing the code to become difficult to understand, maintain, and test. Time-to-market demands exacerbate the problem, influencing developers to make hasty changes to the detriment of reliability.
A critical aspect of effective change management is the use of peer code reviews. A common peer code review sequence consists of the code author developing a presentation describing the code change followed by a face-to-face meeting with one or more developers and development managers involved in the project.
The developer presents the software design in question, and the others try to poke holes in the code. These meetings can be extremely painful and time consuming. Audience members sometimes feel compelled to nitpick every line of code to demonstrate their prowess.
Tip: Use asynchronous code reviews with e-mail correspondence or carefully controlled live meetings.
Recording the reviewer’s identity in the configuration management system also provides an electronic paper trail for security certification auditors. Another advantage of partitioning, in which software components are separated into independent, memory-protected units, is the ability to minimize process requirements across the system. In any large software project, there is a continuum of criticality among the various pieces of code.
By way of example, let’s consider an excimer laser system used in semiconductor manufacturing. The laser itself is controlled by a highly critical, real-time software application. If this application faults, the laser in turn may fail, destroying the semiconductor.
In addition, the system contains a communications application that uses CORBA over TCP/IP to receive commands and to send diagnostic data over a network. If the communications application fails, then the system may become unavailable or diagnostic data may be lost, but there is no possibility for the laser to malfunction.
If both applications were built into a single, monolithic system in which all code executes in the same memory space, the entire software content would have to be developed at the highest levels of quality and reliability. If the applications are partitioned, however, the non-critical communications application can be developed with a lower level of rigor, saving time to market and development cost.
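The effect of the partitioning just described can be sketched in a few lines of C. This is a minimal illustration assuming a POSIX-like environment, not the laser system's actual software: the non-critical communications task runs in its own process, so even an outright crash there is contained by the memory-protection boundary and cannot corrupt the critical control task.

```c
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical non-critical task; a fault here is contained by the
 * process (MMU) boundary. */
static void comms_task(void)
{
    abort();                 /* simulate a crash in the comms partition */
}

/* Returns 1 if the control partition survives a comms-partition crash. */
int control_survives_comms_crash(void)
{
    pid_t pid = fork();
    if (pid < 0)
        return 0;            /* could not create the partition */
    if (pid == 0) {          /* child: isolated comms partition */
        comms_task();
        _exit(0);            /* not reached */
    }
    int status = 0;
    waitpid(pid, &status, 0);
    /* The comms process died abnormally, yet this process continues. */
    return WIFSIGNALED(status) ? 1 : 0;
}
```

In a monolithic build, the equivalent of `comms_task()` faulting would take the control code down with it; the process boundary is what lets the two components be held to different levels of rigor.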
Obviously, we do not advocate a free-for-all on components that are not considered critical; management should use judgment regarding which controls to apply to various software teams. When the process controls in non-critical applications are reduced, time to market for the overall system can be improved without jeopardizing reliability where it counts.
Tip: Apply a level of process rigor, including code reviews and other controls, that is commensurate with the criticality level of the component.
Security-Oriented Peer Review
Most peer reviews are spent looking for coding bugs, design flaws, and violations of coding standards. While these activities contribute to more reliable and hence secure software, most embedded software organizations do not perform reviews based specifically on security analysis. When a developer presents a new design or piece of software, the reviewers should consider security-relevant characteristics. For example:
Least privilege: Can the software be refactored such that the least critical components are provided the least amount of privilege in terms of access to resources? Reducing privilege of a component decreases its attack surface and reduces its assurance requirements, improving efficiency in development and certification (if applicable).
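One concrete expression of least privilege on a POSIX-style system is dropping elevated rights as soon as privileged initialization is complete. The following is an illustrative sketch (the function name is ours, not from the text); note the order, since the group must be dropped while the process still has the privilege to do so.

```c
#include <sys/types.h>
#include <unistd.h>

/* Drop elevated privileges after privileged initialization (e.g.,
 * binding a low port or mapping device registers) so the remainder
 * of the program runs with only the rights it actually needs. */
int drop_privileges(uid_t uid, gid_t gid)
{
    if (setgid(gid) != 0)    /* drop group first, while still privileged */
        return -1;
    if (setuid(uid) != 0)    /* then drop user; irreversible thereafter */
        return -1;
    if (getuid() != uid || geteuid() != uid)
        return -1;           /* verify the drop actually took effect */
    return 0;
}
```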
Attack potential: Think in terms of an attacker, whether system-resident (malware) or external (network-borne): where are the access points and weaknesses in the system, and how might an attacker attempt to compromise them?
As in poker, success requires putting oneself in the opponent’s frame of reference and learning to think like the attacker. Over time, developers with this mindset become proficient at predicting attack potential and can therefore place controls to prevent security failures.
Sophisticated attacks: Even if the code under review is not protecting the power grid, consider advanced security concerns such as side and covert channels, transmission security, and DMA corruption via system peripherals. Developers trained to consider sophisticated attack threats will be better prepared to handle the components that demand high robustness against such threats.
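As a small example of the side-channel thinking described above: a byte-by-byte `memcmp()` returns as soon as it finds a mismatch, so its running time can leak where secret data (say, a MAC or password hash) first differs from the attacker's guess. A constant-time comparison, a standard mitigation and not something taken from the text, removes that timing channel:

```c
#include <stddef.h>
#include <stdint.h>

/* Constant-time comparison: execution time does not depend on where
 * the first mismatch occurs, closing the timing side channel that an
 * early-exit memcmp() would open in, e.g., MAC verification. */
int ct_equal(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];   /* accumulate differences without branching */
    return diff == 0;
}
```

A reviewer asking "does this comparison's timing depend on secret data?" is exactly the kind of vigilance a security-oriented peer review is meant to instill.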
In fact, because peer reviews account for a significant portion of group interaction in a development organization, they are an ideal venue for engendering the kind of vigilance needed to build secure embedded systems.
Tip: By making security a part of peer reviews, management will create a culture of security focus throughout the development team.
Development Tool Security
An Easter egg is an intentionally undocumented message, joke, or capability inserted into a program by its developers, as an added challenge to the user or simply for fun. Easter eggs are commonly found in video games. The Linux packaging tool apt-get has this bovine egg:
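The egg in question is the well-known `apt-get moo` command; the transcript below is reproduced from memory, and the exact ASCII art varies slightly between apt versions.

```
$ apt-get moo
         (__)
         (oo)
   /------\/
  / |    ||
 *  /\---/\
    ~~   ~~
..."Have you mooed today?"...
```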
Cute. Funny. But what if a developer aims to insert something malicious? How can an organization be protected from this insider threat? How can the organization ensure that malware is not inserted by third-party middleware or the compiler used to build the software?