High-assurance software engineering improves embedded design security
Software partitioning
Ensure that no single software partition is larger than a single developer can fully comprehend. Each partition must have a well-known partition manager. One way to ensure that developers understand who owns which partitions is to maintain an easily accessible partition manager list that is modified only by appropriate project personnel.
The partition manager is the only person authorized to make modifications to the partition or to give another developer the right to make a modification. By having clear ownership of every single line of code in the project, developers are not tempted to edit code that they are not appropriately qualified to handle.
Partition managers develop, over time, a comprehensive understanding of their owned partitions, ensuring that future modifications are made with full knowledge of the ramifications of changing any software within the partition.
Runtime Componentization
Usually, the embodiment of a component in the target computer system is a single executable program. Examples of components include Windows .EXE applications and POSIX/UNIX processes.
Thus, complex software made up of multiple components should always be used in conjunction with an operating system that employs memory protection to prevent corruption of one component’s memory space by another. Inter-component communication is typically accomplished with standard operating system message-passing constructs.
Different embedded operating systems (and microprocessors) have varying capabilities in terms of enforcing strict separation between components. For example, a small, real-time operating system may not make use of a computer’s memory management unit at all; without the MMU, multiple software applications cannot be protected from each other, and the operating system itself is at risk from flaws in application code. These flat-memory-model operating systems are not suitable for complex, partitioned software systems.
General-purpose desktop operating systems such as Linux and Windows employ basic memory protection, in which components can be assigned to processes whose address spaces are protected from corruption by the memory management unit, but these systems do not make hard guarantees about the availability of memory or CPU time.
For secure systems, the embedded operating system must provide strict partitioning of applications in both time and space. A damaged application cannot exhaust system memory, operating system resources, or CPU time because each application is strictly limited to an assigned quota of these critical resources.
The quota covers all memory in use, including heap memory for the C/C++ runtime, memory used for process control blocks and other operating system objects, and processes’ runtime stack memory. In addition, the partitioning policies provide strict quotas of execution time and strict control over access to system resources such as I/O devices and files.
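A partitioning RTOS enforces such quotas centrally; the closest analogue on a general-purpose POSIX system is a process applying (or being assigned) resource limits before running application code. The sketch below uses `setrlimit()` to cap a process's total address space and CPU time; the specific limit values are illustrative, not recommendations.

```c
/* Sketch: per-process memory and CPU quotas via POSIX setrlimit().
 * RLIMIT_AS bounds the entire address space (heap, stack, mappings);
 * RLIMIT_CPU bounds consumed CPU time in seconds. */
#include <sys/resource.h>

/* Apply an address-space and CPU-time quota to the calling process.
 * Returns 0 on success, -1 if either limit could not be applied. */
int apply_partition_quota(rlim_t max_bytes, rlim_t max_cpu_seconds)
{
    struct rlimit mem = { .rlim_cur = max_bytes, .rlim_max = max_bytes };
    struct rlimit cpu = { .rlim_cur = max_cpu_seconds,
                          .rlim_max = max_cpu_seconds };

    if (setrlimit(RLIMIT_AS, &mem) != 0)   /* memory quota */
        return -1;
    if (setrlimit(RLIMIT_CPU, &cpu) != 0)  /* execution-time quota */
        return -1;
    return 0;
}
```

Unlike a true partitioning kernel, this only limits a process from the inside and guarantees nothing about other processes, but it illustrates the quota concept the text describes.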
A more rigorous partitioning of applications at the operating system level ensures that the benefits of partition management policies used in the development process are realized during runtime. If possible, use an operating system that employs true application partitioning.
Processes versus Threads
When developers factor embedded software into components, a natural embodiment of the runtime component is the thread. Threads are flows of execution that share a single address space with other threads. In most modern operating systems, an address space has at least one default thread, and the address space with this single thread is often called a process.
Because threads are easy to create and are the way most embedded programmers first learn to employ concurrency, they are often overused. Furthermore, embedded systems developers often have the mistaken impression that processes consume far more system resources than threads.
While threads are certainly lighter weight than a full-blown process, the distinction has become increasingly less important in modern embedded systems. Another reason for thread overuse can be attributed to the fact that the original real-time operating systems created in the 1980s and early 1990s did not support memory-protected processes at all. Developers became accustomed to threads, and their legacy lives on.
Contrary to popular belief, designers should strive for a one-to-one ratio between threads and processes. In other words, each memory-protected component should contain as few threads as possible, ideally just one. The key reason is that multi-threaded processes are often the cause of subtle synchronization problems that result in memory corruption, deadlock, and other faults.
The use of virtual memory processes forces developers to create well-defined inter-process communication interfaces between components. Each component can be independently unit tested by exercising these interfaces. This minimal-threading component philosophy avoids some of the nastiest vulnerabilities that plague embedded software.
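The pattern above can be sketched with two single-threaded processes talking over pipes instead of two threads sharing memory. The request/response format here (raw bytes in, a 32-bit byte sum back) is invented purely for illustration; the point is that the child component's entire behavior is reachable, and therefore testable, through its message interface.

```c
/* Sketch: a single-threaded "checksum component" in its own process,
 * exercised only through a pipe-based interface. */
#include <stdint.h>
#include <stddef.h>
#include <sys/wait.h>
#include <unistd.h>

/* The component's externally visible behavior: sum the request bytes.
 * (The message format is hypothetical.) */
static uint32_t checksum(const uint8_t *buf, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += buf[i];
    return sum;
}

/* Run the component in a child process and exercise its interface. */
int run_checksum_component(const uint8_t *data, size_t len, uint32_t *out)
{
    int req[2], rsp[2];
    if (pipe(req) != 0 || pipe(rsp) != 0)
        return -1;

    pid_t pid = fork();
    if (pid < 0)
        return -1;

    if (pid == 0) {                        /* child: the component */
        close(req[1]);
        close(rsp[0]);
        uint8_t buf[256];
        ssize_t n = read(req[0], buf, sizeof buf);
        uint32_t sum = checksum(buf, n > 0 ? (size_t)n : 0);
        write(rsp[1], &sum, sizeof sum);
        _exit(0);
    }

    close(req[0]);                         /* parent: the client */
    close(rsp[1]);
    write(req[1], data, len);
    close(req[1]);                         /* end of request */
    ssize_t got = read(rsp[0], out, sizeof *out);
    close(rsp[0]);
    waitpid(pid, NULL, 0);
    return got == (ssize_t)sizeof *out ? 0 : -1;
}
```

A unit test can drive `run_checksum_component()` without any knowledge of the component's internals, which is exactly the testability benefit the text claims for process-based components.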
Least Privilege
Components must be given access to only those resources (e.g., communication pathways, I/O devices, system services, information) that are absolutely required. Access control must be mandatory for critical system resources and information. Insecure systems typically allow any program to access the file system, launch other programs, and manipulate system devices.
For example, browser buffer overflow vulnerabilities may enable an attacker to access any file because the web browser has the privilege to access the entire file system. There is no reason why a web browser should have unfettered access to the entire file system.
The web browser either should have write access only to a dedicated, browser-specific directory (out of which the user can carefully decide what can leave the sandbox), or all browser write requests can require user approval via a dialog box. Read access can be limited to a white list of files known to be required for browser operation. The web browser’s runtime stack should not be executable.
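The read white list described above reduces to a simple policy check before any file is opened. The file names in this sketch are hypothetical; a real browser would load its white list from a policy file it cannot itself modify.

```c
/* Sketch: a read-access white list. A component consults this check
 * before opening any file; anything not listed is denied by default. */
#include <string.h>

static const char *read_whitelist[] = {
    "/opt/browser/config.cfg",       /* hypothetical policy entry */
    "/opt/browser/cert_store.db",    /* hypothetical policy entry */
};

/* Return 1 if `path` is on the white list, 0 otherwise. */
int read_allowed(const char *path)
{
    size_t n = sizeof read_whitelist / sizeof read_whitelist[0];
    for (size_t i = 0; i < n; i++)
        if (strcmp(path, read_whitelist[i]) == 0)
            return 1;
    return 0;   /* default deny: least privilege */
}
```

The deny-by-default return value is the essential design choice: it starts from no privilege and grants access only to enumerated resources, matching the "start with no privilege and work up" advice below.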
Every component of the entire system should be designed with least privilege in mind, and it is always best to start with no privilege and work up to what is needed rather than start with the kitchen sink and whittle away privileges.
For another example, let’s consider the common operating system function of launching a new process. One original UNIX method, fork(), creates a duplicate of the parent process, giving the child all the same privileges (e.g., access to file descriptors, memory) as the parent.
The developer then must close descriptors and otherwise try to limit the child’s capabilities. This requires unrealistically prescient knowledge of all system resources accessible to the process, so errors in the use of this interface have often led to serious vulnerabilities.
In a secure operating system, the default process creation mechanism establishes a child without access to any of the parent’s capabilities for memory, devices, or other resources.
The creator can systematically provide capabilities to the child, building up a strictly limited privilege process. The child must also obtain its physical memory resources from its parent, ensuring that a process cannot drain the system or otherwise affect other critical processes with a fork bomb.
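A capability-based kernel enforces this grant-explicitly model by default; on a POSIX system it can only be approximated as a programming convention. The sketch below uses `posix_spawn()` file actions to enumerate exactly which descriptors the child receives (here, only the standard three), assuming every other descriptor in the parent carries `O_CLOEXEC`; `/bin/true` is a stand-in for a real child program.

```c
/* Sketch: "start from nothing, grant explicitly" child creation,
 * approximated with posix_spawn() file actions. */
#include <spawn.h>
#include <sys/wait.h>
#include <unistd.h>

extern char **environ;

/* Launch `path`, explicitly granting the child only stdin, stdout,
 * and stderr. Returns the child's raw wait status, or -1 on error. */
int spawn_minimal(const char *path)
{
    posix_spawn_file_actions_t fa;
    posix_spawn_file_actions_init(&fa);

    /* Each grant is explicit; nothing else is intended to cross over
     * (all other parent descriptors are assumed close-on-exec). */
    posix_spawn_file_actions_adddup2(&fa, STDIN_FILENO, STDIN_FILENO);
    posix_spawn_file_actions_adddup2(&fa, STDOUT_FILENO, STDOUT_FILENO);
    posix_spawn_file_actions_adddup2(&fa, STDERR_FILENO, STDERR_FILENO);

    char *argv[] = { (char *)path, NULL };
    pid_t pid;
    int status = -1;
    if (posix_spawn(&pid, path, &fa, NULL, argv, environ) == 0)
        waitpid(pid, &status, 0);

    posix_spawn_file_actions_destroy(&fa);
    return status;
}
```

Unlike a true capability system, nothing here prevents the parent from forgetting a close-on-exec flag; the sketch shows the shape of the pattern, not an enforcement mechanism.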
David Kleidermacher, Chief Technology Officer of Green Hills Software, joined the company in 1991 and is responsible for technology strategy, platform planning, and solutions design. He is an authority in systems software and security, including secure operating systems, virtualization technology, and the application of high robustness security engineering principles to solve computing infrastructure problems. Mr. Kleidermacher earned his bachelor of science in computer science from Cornell University.
This article is excerpted from Embedded Systems Security, by David and Mike Kleidermacher, used with permission from Newnes, a division of Elsevier. Copyright 2012. All rights reserved. For more information on this title and other similar books, please visit the Newnes site.