Editor’s Note: As part of an ongoing series excerpted from their book Embedded Systems Security, David and Mike Kleidermacher provide an introduction to a set of principles of high assurance software engineering (PHASE) for securing embedded systems.
As software/hardware complexity and network connectivity increase, so do the threats of malicious attack against embedded systems, which are relied on ever more heavily for consumer safety and security.
The complexity of these systems is driven by the inexorable demand for better capabilities, the digitization of manual and mechanical functions and even more interconnection of our world.
While this growth in electronic content has been beneficial to society, that growth is also a key source of our security woes: linear growth in hardware/software content creates far more than linear growth in overall complexity, due to an exponential increase in interactions between functions and components.
But complexity breeds flaws, and flaws can be exploited to breach system security. Complexity also strains traditional reliability techniques, such as code review, and implies a growing necessity for a comprehensive approach to software assurance.
Software assurance refers to the level of confidence that the software end user and other relevant stakeholders (e.g., certifiers) have that the security policies and functions claimed by that software are actually fulfilled.
Simply meeting functional requirements does not achieve the assurance required for security-critical embedded systems. Achieving it requires the adoption of a software engineering methodology with the following elements: 1) minimal implementation, 2) component architecture, 3) least privilege, 4) secure development process, and 5) independent expert validation. In this article we will provide some additional detail on some of these elements.
It is much harder to create simple, elegant solutions to problems than complex, convoluted ones. But most software developers do not work in an environment in which producing the absolute minimal possible solution to a problem is an unwavering requirement. Spaghetti code is the source of the vulnerabilities that run rampant in software and provide the avenue of exploitation for hackers.
As an example, let’s consider an HTTP 1.1-compliant web server. Engineers at Green Hills Software developed a high-assurance web server (HAWS) that used state-driven protocol processing instead of the typical error-prone string parsing and manipulation. The result: a few hundred lines of perfect code instead of the tens of thousands of lines found in many commercial web servers.
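To give a rough feel for the state-driven style, here is a minimal sketch in C (hypothetical, not the HAWS source): a request line such as "GET / HTTP/1.1\r\n" is validated one byte at a time through an explicit state machine, with no string copying or ad hoc pointer arithmetic to get wrong.

```c
#include <ctype.h>

enum state { S_METHOD, S_URI, S_VERSION, S_CR, S_DONE, S_ERROR };

/* Validates an HTTP/1.x request line, e.g. "GET / HTTP/1.1\r\n".
 * Returns 0 if well formed, -1 otherwise. */
int parse_request_line(const char *s)
{
    enum state st = S_METHOD;

    for (; *s != '\0' && st != S_DONE && st != S_ERROR; s++) {
        switch (st) {
        case S_METHOD:                            /* uppercase token */
            if (*s == ' ')                        st = S_URI;
            else if (!isupper((unsigned char)*s)) st = S_ERROR;
            break;
        case S_URI:                               /* printable, no spaces */
            if (*s == ' ')                        st = S_VERSION;
            else if (!isgraph((unsigned char)*s)) st = S_ERROR;
            break;
        case S_VERSION:                           /* e.g. HTTP/1.1 */
            if (*s == '\r')                       st = S_CR;
            else if (!isgraph((unsigned char)*s)) st = S_ERROR;
            break;
        case S_CR:
            st = (*s == '\n') ? S_DONE : S_ERROR;
            break;
        default:
            st = S_ERROR;
            break;
        }
    }
    return st == S_DONE ? 0 : -1;
}
```

Every input byte is checked against an explicit, auditable transition table; malformed input can only drive the machine into S_ERROR, never into undefined behavior.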
The Green Hills web server runs on the high-assurance INTEGRITY operating system. In 2008, a website running on this platform was deployed on the Internet, and Netragard, a leading white hat hacker organization, was invited to perform a vulnerability assessment of the website. Netragard CTO Adriel Desautels reported that the website had “no attack surface whatsoever.”
As another example, let’s consider file systems. Engineers at Green Hills Software developed a high-assurance journaling file system, called PJFS, using a few thousand carefully crafted lines of code. The file system achieves excellent performance, provides guaranteed media storage quotas for clients (important in safety-critical contexts), and employs transactional journaling to assure the integrity of file system data and metadata (and instant reboot time) in the event of sudden power loss. In contrast, commercial journaling file systems typically exceed 100,000 source lines, with plenty of software flaws.
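The journaling idea can be shown in miniature. In this toy C sketch (the record layout and FNV-1a checksum are illustrative assumptions, not PJFS internals), each record carries a checksum; on replay after power loss, a record torn mid-write simply fails validation and is discarded rather than corrupting file system state.

```c
#include <stdint.h>
#include <string.h>

struct rec { uint32_t len; uint32_t sum; char data[24]; };

/* FNV-1a: a simple, fast checksum adequate for detecting torn writes. */
static uint32_t checksum(const char *p, uint32_t n)
{
    uint32_t s = 2166136261u;
    while (n--) s = (s ^ (uint8_t)*p++) * 16777619u;
    return s;
}

void rec_write(struct rec *r, const char *data, uint32_t len)
{
    r->len = len;
    memcpy(r->data, data, len);
    r->sum = checksum(data, len);      /* written last in a real log */
}

/* Replay check: returns 1 if the record is intact and may be applied. */
int rec_valid(const struct rec *r)
{
    return r->len <= sizeof r->data && checksum(r->data, r->len) == r->sum;
}

/* Self-check: an intact record validates; a corrupted one does not. */
int journal_demo(void)
{
    struct rec r;
    rec_write(&r, "hello", 5);
    if (!rec_valid(&r)) return -1;
    r.data[0] ^= 0x01;                 /* simulate a torn write */
    if (rec_valid(&r)) return -1;
    return 0;
}
```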
An important software robustness principle is to compose large software systems from small components, each of which is easily maintained by, ideally, a single engineer who understands every single line of code.
It is imperative to use well-defined, documented interfaces between components. These interfaces serve as a contract between component owners and must be created carefully to minimize churn that forces a cascade of implementation, testing, and integration changes. If a modification to an interface is required, component owners whose components use these interfaces must agree, involving common management to resolve disagreements if necessary.
An important corollary to the component architecture principle is that safety and/or security enforcing functionality should be placed into separate components so that critical operations are protected from compromise by non-critical portions of the system.
It is not enough to isolate security functions into their own components, however. Each security-critical component must, to the greatest extent practicable, be designed or refactored to remove any functionality that is not part of its security-enforcing function.
One of the key reasons why overly complex software is difficult to manage is that such a piece of software is almost always worked on by multiple developers, often at different times over the life of the product. Because the software is too complex for a single person to comprehend, features and defect resolutions alike are addressed by guesswork and patchwork. Flaws are often left uncorrected, and new flaws are added while the developer attempts to correct other problems.
Componentization also provides the capability for the system designer to make customer-specific changes in a methodical way. By focusing on customer and market requirements, the designer can make changes by swapping out a small subset of components as opposed to the larger part of the software baseline.
This minimizes the task of regression testing by decreasing the impact to the overall system. When designers keep this attitude of componentization and interface definition in mind, improvements can be made over time with low risk. Componentization provides many benefits, including improved testability, auditability, data isolation, and damage limitation.
Componentization can prevent a failure in one component from devolving into a system failure. Componentization can also dramatically reduce development cost and certification cost, if applicable, by enabling developers to apply a lower development process rigor on noncritical components while raising the level of assurance for the critical pieces, which are often a relatively small percentage of the entire system.
Dividing a system into components requires that they have well-defined interfaces. Instead of modifying the same shared piece of code, developers must define simple, clear interfaces for components and only use a component’s well-documented (or at least well-understood) interface to communicate with other components.
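As a concrete illustration of such a contract, here is a hypothetical component interface sketched in C (the names and the fixed-size table are invented for illustration): clients see only three documented functions and never reach into the component's internals, so the implementation can change freely behind the interface.

```c
#include <string.h>

#define KV_SLOTS 8

/* In a real build this struct would be opaque to clients; the fixed-size
 * table merely stands in for a real implementation. */
struct kv_store {
    char key[KV_SLOTS][16];
    int  val[KV_SLOTS];
    int  used;
};

void kv_init(struct kv_store *kv) { kv->used = 0; }

/* Contract: returns 0 on success, -1 when the store is full. */
int kv_put(struct kv_store *kv, const char *key, int val)
{
    if (kv->used >= KV_SLOTS) return -1;
    strncpy(kv->key[kv->used], key, sizeof kv->key[0] - 1);
    kv->key[kv->used][sizeof kv->key[0] - 1] = '\0';
    kv->val[kv->used++] = val;
    return 0;
}

/* Contract: returns 0 and writes *out on a hit, -1 on a miss. */
int kv_get(const struct kv_store *kv, const char *key, int *out)
{
    for (int i = 0; i < kv->used; i++)
        if (strcmp(kv->key[i], key) == 0) { *out = kv->val[i]; return 0; }
    return -1;
}

/* Exercise the component purely through its public interface. */
int kv_demo(void)
{
    struct kv_store kv;
    int v = 0;
    kv_init(&kv);
    if (kv_put(&kv, "answer", 42) != 0) return -1;
    if (kv_get(&kv, "answer", &v) != 0 || v != 42) return -1;
    if (kv_get(&kv, "missing", &v) != -1) return -1;
    return 0;
}
```

Note that the demo tests the component only through its documented entry points, which is exactly what a well-defined interface makes possible.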
Componentization enables developers to work more independently and therefore more efficiently, minimizing time spent in meetings where developers attempt to explain the behavior of their software. Re-factoring a large software project in this manner can be time consuming. However, once this is accomplished, all future development will be more easily managed.

Software Partitioning

Ensure that no single software partition is larger than a single developer can fully comprehend. Each partition must have a well-known partition manager. One way to ensure that developers understand who owns which partitions is to maintain an easily accessible partition manager list that is modified only by appropriate management personnel.

The partition manager is the only person authorized to make modifications to the partition or to give another developer the right to make a modification. By having clear ownership of every single line of code in the project, developers are not tempted to edit code that they are not appropriately qualified to handle.

Partition managers develop, over time, a comprehensive understanding of their owned partitions, ensuring that future modifications are done with complete knowledge of the ramifications of modifying any software within the partition.
Runtime Componentization

Usually, the embodiment of a component in the target computer system is a single executable program. Examples of components include Windows .EXE applications and POSIX/UNIX processes.
Thus, complex software made up of multiple components should always be used in conjunction with an operating system that employs memory protection to prevent corruption of one component’s memory space by another. Inter-component communication is typically accomplished with standard operating system message-passing constructs.
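A minimal sketch of such message passing, using POSIX pipes as a stand-in for whatever primitive the target OS provides (the message layout and demo are invented for illustration): two "components" exchange fixed-size messages over a kernel-mediated channel instead of sharing memory.

```c
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

struct msg { int id; char text[32]; };

int send_msg(int fd, const struct msg *m)
{
    return write(fd, m, sizeof *m) == (ssize_t)sizeof *m ? 0 : -1;
}

int recv_msg(int fd, struct msg *m)
{
    return read(fd, m, sizeof *m) == (ssize_t)sizeof *m ? 0 : -1;
}

/* Demo: a child process acts as a second component, echoing the message
 * id back incremented. The only shared state is the two pipes. */
int ipc_demo(void)
{
    int to_child[2], to_parent[2];
    struct msg m = { 41, "ping" };

    if (pipe(to_child) || pipe(to_parent)) return -1;
    pid_t pid = fork();
    if (pid < 0) return -1;
    if (pid == 0) {                    /* child "component" */
        struct msg in;
        recv_msg(to_child[0], &in);
        in.id++;
        send_msg(to_parent[1], &in);
        _exit(0);
    }
    send_msg(to_child[1], &m);
    recv_msg(to_parent[0], &m);
    waitpid(pid, NULL, 0);
    return m.id;                       /* 42 when the round trip worked */
}
```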
Different embedded operating systems (and microprocessors) have varying capabilities in terms of enforcing strict separation between components. For example, a small real-time operating system may not make use of a computer’s memory management unit at all; multiple software applications cannot be protected from each other, and the operating system itself is at risk from flaws in application code. These flat memory model operating systems are not suitable for complex, partitioned software systems.
General-purpose desktop operating systems such as Linux and Windows employ basic memory protection, in which partitions can be assigned processes that are protected from corruption by the memory management unit, but they do not make hard guarantees about the availability of memory or CPU time resources.
For secure systems, the embedded operating system must provide strict partitioning of applications in both time and space. A damaged application cannot exhaust system memory, operating system resources, or CPU time, because the faulty software is strictly limited to an assigned quota of critical resources.
The quota affects literally all memory in use, including heap memory for the C/C++ runtime, memory used for process control blocks and other operating system objects, and processes’ runtime stack memory. In addition, the partitioning policies provide strict quotas of execution time and strict control over access to system resources such as I/O devices and files.
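The quota principle can be sketched at application level with a per-component fixed pool (a hypothetical illustration; a partitioning OS enforces this in the kernel, not in user code): each component allocates only from its own budget, so exhausting it fails that component's allocations without touching anyone else's memory.

```c
#include <stddef.h>

/* Each component owns one pool; there is no global heap to drain. */
struct quota_pool { unsigned char buf[1024]; size_t used; };

/* Bump allocator against the component's fixed quota.
 * Returns NULL when the request would exceed the budget. */
void *qalloc(struct quota_pool *p, size_t n)
{
    n = (n + 7) & ~(size_t)7;                /* keep 8-byte alignment */
    if (n > sizeof p->buf - p->used) return NULL;
    void *mem = p->buf + p->used;
    p->used += n;
    return mem;
}

/* Self-check: allocations succeed within quota and fail beyond it. */
int quota_demo(void)
{
    struct quota_pool pool = { .used = 0 };
    if (qalloc(&pool, 512) == NULL) return -1;   /* within quota */
    if (qalloc(&pool, 256) == NULL) return -1;   /* still within */
    if (qalloc(&pool, 512) != NULL) return -1;   /* would exceed 1024 */
    return 0;
}
```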
A more rigorous partitioning of applications at the operating system level ensures that the benefits of the partition management policies used in the development process are realized at runtime. If possible, use an operating system that employs true application partitioning.
Processes versus Threads
When developers factor embedded software into components, a natural embodiment of the runtime component is the thread. Threads are flows of execution that share a single address space with other threads. In most modern operating systems, an address space has at least one default thread, and an address space with this single thread is often called a process.
Because they are easy to create and are the way most embedded programmers learn to employ concurrency, threads often get overused. Furthermore, embedded systems developers often have the mistaken impression that a proliferation of processes will exhaust too many system resources relative to threads.
While threads are certainly lighter weight than a full-blown process, the distinction has become increasingly less important in modern embedded systems. Another reason for thread overuse can be attributed to the fact that the original real-time operating systems created in the 1980s and early 1990s did not support memory-protected processes at all. Developers became accustomed to threads, and their legacy lives on.
Contrary to popular belief, designers should strive for a one-to-one ratio between threads and processes. In other words, each memory-protected component should contain a minimum number of threads. The key reason is that multi-threaded processes are often the cause of subtle synchronization problems that result in memory corruption, deadlock, and other faults.
The use of virtual memory processes forces developers to create well-defined inter-process communication interfaces between components. Each component can be independently unit tested by exercising these interfaces. This thread-less component philosophy avoids some of the nastiest vulnerabilities that plague embedded software.
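One common way a single-threaded component still serves several input channels is event-driven multiplexing rather than one thread per channel. This hypothetical C sketch uses select() (the function names and demo are invented for illustration); because one flow of execution handles both channels, there is no shared state to lock and no opportunity for a data race.

```c
#include <sys/select.h>
#include <unistd.h>

/* Waits for input on either descriptor within one thread of control.
 * Returns the readable descriptor, or -1 on timeout/error. */
int wait_one(int fd_a, int fd_b, int timeout_ms)
{
    fd_set rfds;
    struct timeval tv = { timeout_ms / 1000, (timeout_ms % 1000) * 1000 };
    int maxfd = fd_a > fd_b ? fd_a : fd_b;

    FD_ZERO(&rfds);
    FD_SET(fd_a, &rfds);
    FD_SET(fd_b, &rfds);
    if (select(maxfd + 1, &rfds, NULL, NULL, &tv) <= 0) return -1;
    return FD_ISSET(fd_a, &rfds) ? fd_a : fd_b;
}

/* Self-check: create two pipes, make only the first one readable. */
int select_demo(void)
{
    int a[2], b[2];
    if (pipe(a) || pipe(b)) return -1;
    if (write(a[1], "x", 1) != 1) return -1;
    int r = (wait_one(a[0], b[0], 100) == a[0]) ? 0 : -1;
    close(a[0]); close(a[1]); close(b[0]); close(b[1]);
    return r;
}
```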
Least Privilege

Components must be given access to only those resources (e.g., communication pathways, I/O devices, system services, information) that are absolutely required. Access control must be mandatory for critical system resources and information. Insecure systems typically allow any program to access the file system, launch other programs, and manipulate system devices.
For example, browser buffer overflow vulnerabilities may enable an attacker to access any file because the web browser has the privilege to access the entire file system. There is no reason why a web browser should have unfettered access to the entire file system.
The web browser should either have write access only to a dedicated, browser-specific directory (out of which the user can carefully decide what can leave the sandbox), or require user approval via a dialog box for every write request. Read access can be limited to a white list of files known to be required for browser operation. The web browser’s runtime stack should not be executable.
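The white-list idea amounts to default deny: anything not explicitly listed is refused. A minimal C sketch (the paths and function name are hypothetical, purely for illustration):

```c
#include <string.h>

/* Hypothetical read white list for the browser component.
 * The paths are illustrative placeholders. */
static const char *const read_whitelist[] = {
    "/opt/browser/config.ini",
    "/opt/browser/fonts.dat",
};

/* Returns 1 only for explicitly listed paths; everything else is
 * denied by default, which is the essence of least privilege. */
int allowed_read(const char *path)
{
    size_t n = sizeof read_whitelist / sizeof read_whitelist[0];
    for (size_t i = 0; i < n; i++)
        if (strcmp(path, read_whitelist[i]) == 0)
            return 1;
    return 0;                          /* default deny */
}
```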
Every component of the entire system should be designed with least privilege in mind, and it is always best to start with no privilege and work up to what is needed rather than start with the kitchen sink and whittle away privileges.
For another example, let’s consider the common operating system function of launching a new process. One original UNIX method, fork(), creates a duplicate of the parent process, giving the child all the same privileges (e.g., access to file descriptors, memory) as the parent.
The developer then must close descriptors and otherwise try to limit the child’s capabilities. This requires an unrealistically prescient knowledge of all system resources accessible to a process. Thus, errors in the use of this interface have often led to serious vulnerabilities.
In a secure operating system, the default process creation mechanism establishes a child without access to any of the parent’s capabilities for memory, devices, or other resources.
The creator can systematically provide capabilities to the child, building up a strictly limited privilege process. The child must also obtain its physical memory resources from its parent, ensuring that a process cannot drain the system or otherwise affect other critical processes with a fork bomb.
David Kleidermacher, Chief Technology Officer of Green Hills Software, joined the company in 1991 and is responsible for technology strategy, platform planning, and solutions design. He is an authority in systems software and security, including secure operating systems, virtualization technology, and the application of high robustness security engineering principles to solve computing infrastructure problems. Mr. Kleidermacher earned his bachelor of science in computer science from Cornell University.
This article is excerpted from Embedded Systems Security, by David and Mike Kleidermacher, used with permission from Newnes, a division of Elsevier. Copyright 2012. All rights reserved. For more information on this title and other similar books, please visit the Newnes site.