A number of broad mitigation strategies that do not target a specific class of vulnerabilities or exploits can be applied to improve the overall security of a deployed application.
This series of articles integrates information about such mitigation strategies, techniques, and tools that assist in developing and deploying secure software in C and C++ (and other languages). Different mitigations are often best applied by individuals acting in the different software development roles: programmers, project managers, testers, and so forth.
Mitigations may apply to a single individual (such as personal coding habits) or to decisions that apply to the entire team (such as development environment settings). As a result, some of the mitigation strategies described directly involve developers, while the effects of other mitigation strategies on developers are more indirect.
Figure 8-1 below shows the mitigation strategies that are described in this series in the context of a generic software development life cycle. Activities that span development phases are listed in the software development life cycle box. Postdeployment activities include software evolution and maintenance, as well as operational security.
|Figure 8-1. Software development life cycle|
A generic software development life cycle was selected because different software development groups seldom follow the same development process. Organizational processes may be based on the Rational Unified Process (RUP), eXtreme Programming (XP), or even the waterfall process.
These processes, however, are almost always customized to fit the needs of the software development group. An example of an organization that has adopted a customized process for the development of software that needs to withstand malicious attack is Microsoft.
The Trustworthy Computing Security Development Lifecycle (SDL) adds a series of security-focused activities and deliverables to each phase of Microsoft's software development process [Lipner 05].
As a result, software development practices that are meant to reduce or eliminate vulnerabilities in system development must be process agnostic, that is, capable of being integrated into a broad variety of existing processes.
The generic software development life cycle shown in Figure 8-1 above consists of requirements, architecture and design, implementation, testing, and postdeployment phases. These phases may occur repeatedly, in a different order, or with greater or lesser emphasis depending on the software development process followed, but the software development activities associated with these phases must occur at some point in the life cycle.

Secure Software Development Principles
While principles alone are insufficient for secure software development, they can help guide secure software development practices. Some of the earliest secure software development principles were proposed by Saltzer in 1974 and revised by him in 1975 [Saltzer 74, Saltzer 75]. These eight principles still apply today and are repeated verbatim here.
1. Economy of mechanism. Keep the design as simple and small as possible.
2. Fail-safe defaults. Base access decisions on permission rather than exclusion.
3. Complete mediation. Every access to every object must be checked for authority.
4. Open design. The design should not be secret.
5. Separation of privilege. Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key.
6. Least privilege. Every program and every user of the system should operate using the least set of privileges necessary to complete the job.
7. Least common mechanism. Minimize the amount of mechanism common to more than one user and depended on by all users.
8. Psychological acceptability. It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly.
While subsequent work has built on these basic security principles, the essence remains the same. The result is that these principles have withstood the test of time.
Economy of Mechanism
This is a well-known principle that applies to all aspects of a system and software design, and it is particularly relevant to security. Security mechanisms, in particular, should be relatively small and simple so that they can be easily implemented and verified (for example, a security kernel).
Complex designs increase the likelihood that errors will be made in their implementation, configuration, and use. Additionally, the effort required to achieve an appropriate level of assurance increases dramatically as security mechanisms become more complex. As a result, it is generally more cost-effective to spend more effort in the design of the system to achieve a simple solution to the problem.
Fail-Safe Defaults

Basing access decisions on permission rather than exclusion means that, by default, access is denied and the protection scheme identifies conditions under which access is permitted. If the mechanism fails to grant access, this situation is easily detected and corrected.
However, a mechanism that fails by allowing access may go unnoticed in normal use.
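As a concrete illustration, a default-deny check can be sketched in C as a function that grants access only when an explicit rule matches; the user and object names below are hypothetical, not part of any real API:

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical allow list: access is granted only when an explicit
 * rule matches; anything not listed is denied by default. */
struct rule { const char *user; const char *object; };

static const struct rule allow[] = {
    { "alice", "report.txt" },
    { "bob",   "report.txt" },
};

bool access_permitted(const char *user, const char *object)
{
    for (size_t i = 0; i < sizeof allow / sizeof allow[0]; i++) {
        if (strcmp(allow[i].user, user) == 0 &&
            strcmp(allow[i].object, object) == 0)
            return true;   /* explicit permission found */
    }
    return false;          /* fail safe: the default is denial */
}
```

Because the failure mode of this structure is denial, a missing rule is noticed and corrected quickly; a blocklist structured the other way around could fail open without anyone noticing.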
Complete Mediation

|Figure 8-2. The complete mediation problem|
The complete mediation problem is illustrated in Figure 8-2. Requiring that access to every object be checked for authority is the primary underpinning of a protection system. It requires that the source of every request be positively identified and authorized to access a resource.

Open Design
A secure design should not depend on the ignorance of potential attackers or on the obscurity of code. For example, encryption systems and access control mechanisms should be able to be placed under open review and still be secure.
This is typically achieved by decoupling the protection mechanism from protection keys or passwords. This approach has the added advantage of permitting thorough examination of the mechanism without concern that reviewers can compromise the safeguards.
Open design is necessary because all code is open to inspection by a potential attacker using decompilation techniques or by examining the binaries. As a result, any protection scheme based on obfuscation will eventually be revealed. Implementing an open design also allows users to verify that the protection scheme is adequate for their particular application.
Separation of Privilege
Separation of privilege eliminates a single point of failure by requiring more than one condition to grant permissions. Two-factor authentication schemes are examples of the use of privilege separation: something you have and something you know. A security token and password-based access scheme, for example, has the following properties (assuming a correct implementation):
1) A user could have a weak password or even disclose it, but without the token the access scheme will not fail.
2) A user could lose his or her token or have it stolen by an attacker, but without the password the access scheme will not fail.
3) Only if both the token and the password come into the possession of an attacker will the mechanism fail.
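The essential property of such a scheme can be sketched in C: access is granted only when both independent conditions hold, so neither factor alone is a single point of failure. The validator results here are assumptions standing in for real token and password verification routines:

```c
#include <stdbool.h>

/* Sketch of separation of privilege in a two-factor scheme: access
 * requires that BOTH independent conditions hold. token_valid and
 * password_valid stand in for real verification routines, which
 * would be implemented and evaluated independently of each other. */
bool grant_access(bool token_valid, bool password_valid)
{
    /* Fail closed unless both keys are presented. */
    return token_valid && password_valid;
}
```

An attacker who compromises one factor gains nothing; only compromising both defeats the mechanism, matching property 3 above.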
Separation of privilege is often confused with the design of a program consisting of subsystems based on required privileges. That approach allows a designer to apply a finer-grained application of least privilege.
Least Privilege

When a vulnerable program is exploited, the exploit code runs with the privileges that the program has at that time. In the normal course of operations, most systems need to allow users or programs to execute a limited set of operations or commands with elevated privileges.
An often-used example of this is a password-changing program; users must be able to modify their own passwords but must not be given free access to read or modify the database containing all user passwords.
Therefore, the password-changing program must correctly accept input from the user and ensure that, based on additional authorization checks, only the entry for that user is changed. Programs such as these may introduce vulnerabilities if the programmer does not exercise care in program sections critical to security.
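The additional authorization check such a program must make can be sketched in C as follows; the function and parameter names are illustrative, not part of a real passwd API:

```c
#include <stdbool.h>
#include <string.h>

/* Sketch of the authorization check a password-changing program must
 * make before touching the password database: the authenticated user
 * may modify only the entry that belongs to that same user. */
bool may_change_entry(const char *authenticated_user,
                      const char *entry_owner)
{
    /* Deny unless the target entry belongs to the requesting user. */
    return strcmp(authenticated_user, entry_owner) == 0;
}
```

A real implementation would make this check against the authenticated identity established by the system, never against a name supplied in the request itself.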
The least privilege principle suggests that processes should execute with the minimum permission required to perform secure operations, and any elevated permission should be held for a minimum time.
This approach reduces the opportunities an attacker has to execute arbitrary code with elevated privileges. The principle can be implemented in the following ways:
1) Grant each system, subsystem, and component the fewest privileges with which it can operate.
2) Acquire and discard privileges such that, at any given point, the system has only the privileges it needs for the task in which it is engaged.
3) Discard the privilege to change privileges if no further changes are required.
4) Design programs to use privileges early, ideally before interacting with a potential adversary (for example, a user), and then discard them for the remainder of the program.

The effectiveness of least privilege depends on the security model of the operating environment. Fine-grained control allows a programmer to request the permissions required to perform an operation without acquiring extraneous permissions that might be exploited.
Security models that allow permissions to be acquired and dropped as necessary allow programmers to reduce the window of opportunity for an exploit to successfully gain elevated privileges.
Of course, there are other trade-offs that must be considered. Many security models require the user to authorize elevated privileges. Without this feature, there would be nothing to prevent an exploit from reasserting permissions once it gained control. However, interaction with the user must be considered when deciding which permissions are needed when.
Other security models may allow permissions to be permanently dropped, for example, once they have been used to initialize required resources. Permanently dropping permissions may be more effective in cases where the process is running unattended.
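On a POSIX system, permanently relinquishing elevated privileges might look like the following sketch; the helper name is an assumption, and the ordering (group before user) matters because setgid() typically cannot succeed once root privileges have been given up:

```c
#include <unistd.h>

/* Permanently relinquish elevated privileges: drop the group ID
 * first, then the user ID, and verify that the drop actually took
 * effect rather than assuming success. Returns 0 on success, -1 on
 * failure. A POSIX environment is assumed. */
int drop_privileges(void)
{
    if (setgid(getgid()) != 0)   /* group before user: setgid() can
                                    fail after root is relinquished */
        return -1;
    if (setuid(getuid()) != 0)
        return -1;
    /* Verify: on some systems privilege-drop calls can fail in ways
     * the return value alone does not reveal. */
    if (geteuid() != getuid() || getegid() != getgid())
        return -1;
    return 0;
}
```

A complete implementation running as root would also clear supplementary groups with setgroups(), which itself requires privilege; this sketch omits that step for brevity.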
Least Common Mechanism
Least common mechanism is a principle that, in some ways, conflicts with overall trends in distributed computing. The least common mechanism principle dictates that mechanisms common to more than one user should be minimized because these mechanisms represent potential security risks.
If an adversarial user manages to breach the security of one of these shared mechanisms, the attacker may be able to access or modify data from other users, possibly introducing malicious code into processes that depend on the resource. This principle seemingly contradicts a trend in which distributed objects are used to provide a shared repository for common data elements.
Your solution to this problem may differ depending on your relative priorities. However, if you are designing an application in which each instance of the application has its own data store, and data is not shared between multiple instances of the application or between multiple clients or objects in a distributed object system, consider designing your system so that the mechanism executes in the process space of your program and is not shared with other applications.
Psychological Acceptability

The modern term for this principle is "usability"; it is another quality attribute that is often traded off with security. However, usability is also a form of security, because user errors can often lead to security breaches (when setting or changing access controls, for example).
Many of the vulnerabilities in the US-CERT Vulnerability Database can be attributed to usability problems. After buffer overflows, the second most common class of vulnerabilities identified in this database is "default configuration after installation is insecure." Other common usability issues at the root cause of vulnerabilities cataloged in the database include:
1) Program is hard to configure safely or is easy to misconfigure.
2) Installation procedure creates vulnerability in other programs (for example, by modifying permissions).
3) Configuration problems.
4) Confusing error and confirmation messages.
Usability problems in documentation, including insecure examples or incorrect descriptions, can also lead to real-world vulnerabilities. Overall, there are many good reasons to develop usable systems and perform usability testing. Security happens to be one of them.
Next, in Part 2 of this series: systems quality requirements, threat modeling, and use/misuse cases.
This article is excerpted with permission from the author's book.
Robert Seacord is a senior vulnerability analyst.
[Lipner 05] Lipner, S., and M. Howard. "The Trustworthy Computing Security Development Lifecycle." Microsoft Corporation, 2005.
[Saltzer 74] Saltzer, J. H. "Protection and the Control of Information Sharing in Multics." Communications of the ACM 17, 7 (1974).
[Saltzer 75] Saltzer, J. H., and M. D. Schroeder. "The Protection of Information in Computer Systems." Proceedings of the IEEE 63, 9 (1975).