Bullet-proofing your software design - Embedded.com

Bullet-proofing your software design

In August 2003, a cascading blackout left 10 million people in Ontario and 45 million people in the eastern United States without power, raising concern that a cyber-attack by a hostile force was underway. Ultimately, the causes of the blackout were traced to a slew of system, procedural, and human errors rather than an act of aggression. Nevertheless, the event brought home the vulnerability of critical infrastructure connected to the Internet, raising awareness of the need for secure system components that are immune to cyber-attack.

This increasing dependence on Internet connectivity underscores the growing need to build security into software to protect against both currently known and future vulnerabilities. This article looks specifically at the best practices, knowledge, and tools available for building secure software that is free of vulnerabilities.

Secure software
In his book The CERT C Secure Coding Standard, Robert Seacord points out that there is currently no consensus on a definition for the term software security. For the purposes of this article, the definition of secure software will follow that provided by the U.S. Department of Homeland Security (DHS) Software Assurance initiative in “Enhancing the Development Life Cycle to Produce Secure Software: A Reference Guidebook on Software Assurance.” DHS maintains that software, to be considered secure, must exhibit three properties:

1. Dependability – Software that executes predictably and operates correctly under all conditions.

2. Trustworthiness – Software that contains few, if any, exploitable vulnerabilities or weaknesses that could be used to subvert or sabotage the software's dependability.

3. Survivability (also referred to as “Resilience”) – Software that is resilient enough to withstand attack and to recover as quickly as possible, and with as little damage as possible, from those attacks that it can neither resist nor tolerate.

The sources of software vulnerabilities are many, including coding errors, configuration errors, and architectural and design flaws; however, most vulnerabilities result from coding errors. In a 2004 review of the National Vulnerability Database for their paper “Can Source Code Auditing Software Identify Common Vulnerabilities and Be Used to Evaluate Software Security?”, presented at the 37th Hawaii International Conference on System Sciences, Jon Heffley and Pascal Meunier found that 64% of the vulnerabilities resulted from programming errors. Given this, it makes sense that the primary objective when writing secure software must be to build security in.

Building security in
Most software development focuses on building high-quality software, but high-quality software is not necessarily secure software. Consider the office, media-playing, or web-browsing software that we all use daily: a quick review of the Mitre Corporation's Common Vulnerabilities and Exposures (CVE) dictionary reveals that vulnerabilities in these applications are discovered and reported on an almost weekly basis. The reason is that these applications were written to satisfy functional, not security, requirements. Testing verifies that the software meets each requirement, but security problems can persist even when the functional requirements are satisfied. Indeed, software weaknesses often arise from unintended functionality of the system.

Building secure software requires adding security concepts to the quality-focused software-development lifecycle so that security is considered a quality attribute of the software under development. Building secure code is all about eliminating known weaknesses (Figure 1), including defects, so by necessity secure software is high-quality software.



Security must be addressed at all phases of the software development lifecycle, and team members need a common understanding of the security goals for the project and the approach that will be taken to do the work.

The starting point is an understanding of the security risks associated with the domain of the software under development. This is determined by a security risk assessment, a process that ensures the nature and impact of a security breach are assessed prior to deployment in order to identify the security controls necessary to mitigate any identified impact. The identified security controls then become system requirements.

Adding a security perspective to software requirements ensures that security is included in the definition of system correctness, which then permeates the development process. A specific security requirement might be to validate all user string inputs to ensure that they do not exceed a maximum length. A more general one might be to withstand a denial-of-service attack. Wherever a requirement falls on that spectrum, it is crucial that evaluation criteria are identified for the implementation.
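A string-length requirement like the specific one above can be given a testable pass/fail criterion in code. The following C sketch is illustrative only; the names `input_length_ok` and `MAX_INPUT_LEN` are hypothetical, and the actual limit would come from the requirement itself:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Illustrative limit; the real value is set by the security requirement. */
#define MAX_INPUT_LEN 64

/* Returns true only when the input is non-NULL and NUL-terminated
 * within MAX_INPUT_LEN characters, giving the verification team a
 * concrete criterion to test against. */
bool input_length_ok(const char *input)
{
    if (input == NULL) {
        return false;
    }
    /* strnlen (POSIX) never scans more than MAX_INPUT_LEN + 1 bytes,
     * so even an unterminated buffer cannot cause an over-read here. */
    return strnlen(input, MAX_INPUT_LEN + 1) <= MAX_INPUT_LEN;
}
```

Every input that fails the check is rejected before it reaches the rest of the system, which is precisely the evaluation criterion the requirement calls for.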

When translating requirements into design, it is prudent to consider security risk mitigation in the architectural design itself. This can take the form of the choice of implementation technologies, or the inclusion of security-oriented features such as having an independent process validate untrusted user inputs and/or the system's responses before they are passed on to the core processes.

The practice with the most significant impact on building secure code is the adoption of secure coding practices, including both static and dynamic assurance measures. The biggest bang for the buck comes from enforcing secure coding rules with static analysis tools. With security concepts introduced into the requirements process, dynamic assurance via security-focused testing is then used to verify that the security features have been implemented correctly.

Creating secure code with static analysis
A review of the contents of the CVE dictionary reveals that common software defects are the leading cause of security vulnerabilities. Fortunately, these vulnerabilities can be attributed to common weaknesses in code, and a number of dictionaries have been created to capture this information, such as the Common Weakness Enumeration (CWE) dictionary from the Mitre Corporation and the CERT C Secure Coding Standard from the Software Engineering Institute at Carnegie Mellon. These secure coding standards can be enforced by the use of static analysis tools, so that even novice secure software developers can benefit from the experience and knowledge encapsulated within the standards.

The use of coding standards to eliminate ambiguities and weaknesses in the code under development has proven extremely successful in the creation of high-reliability software, as shown by the Motor Industry Software Reliability Association (MISRA) Guidelines for the use of the C language in critical systems. The same practice can be used to similar effect in the creation of secure software.

Of the common exploitable software vulnerabilities that appear in the CVE dictionary, some occur more often than others: improper user input validation, buffer overflows, improper data types, and improper use of error and exception handling. The CWE and CERT C dictionaries identify the coding weaknesses that can lead to these vulnerabilities, and the rules in each can be enforced by static analysis tools, helping to eliminate both known and unknown vulnerabilities while also eliminating latent errors in code. For example, the screenshot in Figure 2 shows the detection of a buffer overflow vulnerability caused by improper data types.
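The class of defect in Figure 2 can be sketched in a few lines of C. This is a hypothetical example, not the code from the screenshot: a signed length passes a naive bounds check, then converts to an enormous unsigned value inside `memcpy` (catalogued as CWE-195, signed-to-unsigned conversion error):

```c
#include <string.h>

#define BUF_SIZE 16

/* Flawed: a negative 'len' satisfies the check, then (size_t)len
 * wraps to a huge value and memcpy overflows 'buf' (CWE-195). */
void copy_flawed(char *buf, const char *src, int len)
{
    if (len <= BUF_SIZE) {             /* true for any negative len */
        memcpy(buf, src, (size_t)len); /* (size_t)-1 is SIZE_MAX */
    }
}

/* Fixed: reject negative lengths before any signed-to-unsigned
 * conversion can take place. */
int copy_checked(char *buf, const char *src, int len)
{
    if (len < 0 || len > BUF_SIZE) {
        return -1; /* length rejected */
    }
    memcpy(buf, src, (size_t)len);
    return 0;
}
```

A static analysis tool enforcing the relevant CERT C integer rules flags the flawed version automatically; the fixed version makes the rejection explicit and testable.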



Static software analysis tools assess the code under analysis without actually executing it. They are particularly adept at identifying coding standard violations. In addition, they can provide a range of metrics that can be used to assess and improve the quality of the code under development, such as the cyclomatic complexity metric that identifies unnecessarily complex software that's difficult to test.

When using static analysis tools for building secure software, the primary objective is to identify potential vulnerabilities in code. Example errors that static analysis tools identify include:

• Insecure functions

• Array overflows

• Array underflows

• Incorrectly used signed and unsigned data types
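The array errors in the list above are typically eliminated by routing every access through a bounds-checked accessor, which a static analysis tool can then verify is used consistently. A minimal sketch, with the hypothetical names `table_set` and `TABLE_SIZE`:

```c
#include <stdbool.h>

#define TABLE_SIZE 8

static int table[TABLE_SIZE];

/* Rejects both overflow (index >= TABLE_SIZE) and underflow
 * (index < 0) before any memory is touched, closing off the two
 * array errors listed above. */
bool table_set(int index, int value)
{
    if (index < 0 || index >= TABLE_SIZE) {
        return false;
    }
    table[index] = value;
    return true;
}
```

Because the check happens in one place, a reviewer or tool only has to prove one function correct rather than auditing every indexing expression in the program.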

Since secure code must, by nature, be high-quality code, static analysis tools can also be used to bolster the quality of the code under development. The objective here is to ensure that the software is easy to verify. The typical validation and verification phase of a project can consume up to 60% of the total effort, while coding typically takes only 10%. Eliminating defects through a small increase in coding effort can significantly reduce the burden of verification, and this is where static analysis can really help.

Ensuring that code never exceeds a maximum complexity value helps to enforce the testability of the code. In addition, static analysis tools identify other issues that affect testability, such as having unreachable or infeasible code paths or an excessive number of loops.

By eliminating security vulnerabilities, identifying latent errors, and ensuring the testability of the code under development, static analysis tools help ensure that the code is of the highest quality and secure against not only current threats but unknown threats as well.

Fitting tools into the process
Tools that automate static analysis and the enforcement of coding standards such as the CWE or CERT C Secure Coding guidelines ensure that a higher percentage of errors are identified in less time. This rigor is complemented by additional tools for:

Requirements traceability – a good requirements traceability tool is invaluable to the build-security-in process. Being able to trace requirements from their source through all of the development phases down to the verification activities and artifacts helps ensure high-quality, secure software.

Unit testing – the most effective and cheapest way of ensuring that the code under development meets its security requirements is unit testing. Creating and maintaining the necessary test cases, however, can be an onerous task. Unit testing tools that assist in test-case generation, execution, and maintenance streamline the process, easing the unit-testing burden and reinforcing unit-test accuracy and completeness.
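As a minimal sketch of the practice, a unit test can be a handful of boundary-value assertions against the function under test. Both `clamp_reading` and its test are hypothetical, but the shape (exercise the limits, not just the happy path) is exactly what test-generation tools automate:

```c
#include <assert.h>

/* Hypothetical function under test: clamps a sensor reading
 * to a valid range. */
static int clamp_reading(int value, int lo, int hi)
{
    if (value < lo) {
        return lo;
    }
    if (value > hi) {
        return hi;
    }
    return value;
}

/* The unit test: boundary-value cases on both edges of the range,
 * typical of what a test-generation tool would emit. */
void test_clamp_reading(void)
{
    assert(clamp_reading(-5, 0, 100) == 0);    /* below range */
    assert(clamp_reading(0, 0, 100) == 0);     /* lower bound */
    assert(clamp_reading(50, 0, 100) == 50);   /* in range */
    assert(clamp_reading(100, 0, 100) == 100); /* upper bound */
    assert(clamp_reading(150, 0, 100) == 100); /* above range */
}
```

Each assertion maps back to a requirement clause, which is what makes unit tests so effective at demonstrating that security requirements are met.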

Dynamic analysis – analyses performed while the code is executing provide valuable insight that goes beyond test-case execution. Structural coverage analysis, one of the more popular dynamic analysis methods, has proven invaluable for ensuring that the verification test cases execute all of the code under development, helping to ensure that no hidden vulnerabilities or defects remain in the code.

While these various capabilities can be pieced together from a number of suppliers, some companies offer an integrated tool suite that facilitates the building security in process, providing all of the solutions described above.

Building security
It's not surprising that the process for building security into software echoes the high-level process required for building quality into software. Adding security considerations from the requirements phase onwards is the best way of ensuring the development of secure code, as shown in Figure 3. High-quality code is not necessarily secure code, but secure code is always high-quality code.



An increased dependence on Internet connectivity is driving the demand for more secure software. With the bulk of vulnerabilities attributable to coding errors, reducing or eliminating exploitable security weaknesses in new products through the adoption of secure development practices is an achievable goal.

By leveraging the knowledge and experience encapsulated within the CERT C Secure Coding guidelines and the CWE dictionary, static analysis tools make this objective both practical and cost-effective. Combine this with the improved productivity and accuracy of requirements traceability, unit testing, and dynamic analysis, and the elimination of exploitable software weaknesses becomes a realistic goal.

Nat Hillary is a field applications engineer with LDRA Technologies, Inc., a position that he comes to via an extensive background in software engineering, sales and marketing. He is an experienced presenter in the use of software analysis solutions for real-time safety critical software who has been invited to participate in a number of international forums and seminars on the topic.
