How to write secure C/C++ application code for your embedded design: Part 4

Manually inspecting source code is important, but it is labor intensive and error prone. Source code audits can be supplemented by static analysis tools that scan source code for security flaws.

Static analysis tools can find important defects, including potential vulnerabilities, but they cannot find all possible defects, so they cannot be viewed as a cure-all.

The biggest problem with static analysis tools is their tendency to return false positives (that is, emit warnings that are not useful). Examples of some early, freeware static analysis tools include:

Rough Auditing Tool for Security (RATS). As its name implies, RATS performs only a rough analysis of source code. It will not find all errors and may also flag false positives.

Flawfinder. Flawfinder works by using a built-in database of C/C++ functions with well-known problems. Flawfinder produces a list of potential security flaws, sorted by risk; by default the riskiest hits are shown first. Not every hit is actually a security vulnerability, and not every security vulnerability is necessarily found.

Flawfinder works by performing simple text pattern matching (although it does ignore comments and strings). Nevertheless, Flawfinder can be useful in finding and removing security vulnerabilities.

It's the Software, Stupid! (Security Scanner) (ITS4). ITS4 is a simple tool that statically scans C and C++ source code for potential security vulnerabilities [Viega 00]. It is a command-line tool that works across UNIX and Windows platforms. ITS4 scans source code, looking for function calls that are potentially dangerous.

For some calls, ITS4 tries to perform some code analysis to determine how risky the call is. In each case, ITS4 provides a problem report, including a short description of the potential problem and suggestions on how to fix the code.

There are also a number of commercial static analysis tools, including Fortify, Ounce Labs's Prexis/Pro, Coverity's Prevent, and Microsoft's PREfix and PREfast. These tools are generally more usable and useful than their freeware counterparts, but the benefit you or your organization derives from using them will depend on the quality of your existing code and other factors. We recommend that you carefully evaluate these tools to determine which, if any, benefit your development process.

Fortify offers a collection of source code analysis tools consisting of the Source Code Analysis (SCA) engine, Audit Workbench, Secure Coding Rulepacks, and the Rules Builder. Fortify's SCA engine performs data flow, semantic, control flow, and configuration analysis.

The default rule packs include a variety of definitions and rules, including tests for missing termination characters, unchecked return values, and buffer lengths; TOCTOU flaws; and semantic violations. These rules may also be augmented by a vendor or developer. However, you cannot view or export the default rules to understand what is being caught or to use them as a basis for augmentation.

Fortify shares a common flaw with other static analysis tools in that it generates an excessive number of false positives. False positives occur when the tool warns of a potential security flaw but none exists. A false positive occurs, for example, when analyzing Apache 2.0.53.

Figure 8-8. Code segment from Apache 2.0.53

Figure 8-8 above shows a segment of the apr_brigade.c file from this release. Fortify warns that a memcpy() on line 222 takes an argument str that may not be checked for length.

Fortify provides the entire data flow tracing the lifetime of that buffer, starting from the initial access (for example, read() -> apr_file_read() -> pipe_bucket_read() -> memcpy()). When you examine the data flow, it becomes readily apparent that the size of the buffer is tracked by the str_len variable and that no overflow is possible.

As a result, this particular error can be marked as “Reviewed/Not an Issue” in the Audit Workbench. This state is maintained for other members of the development/maintenance team using Audit Workbench. This process works, but reviewing false positives can be time consuming, labor intensive, and frustrating.

Fortify's Source Code Analysis Suite supports Linux, UNIX, Windows, and Mac OS. Fortify can analyze C/C++, Java, JSPs, PL/SQL, C#, ASPs, and XML.

Prexis
Ounce Labs's product Prexis/Pro allows developers to scan their source code to identify critical vulnerabilities in it. Prexis identifies programming flaws and highlights possible design flaws, providing in-context remediation advice.

Prexis/Engine, the source code scanning and security knowledge base core of Prexis, locates and evaluates programming and design flaws that may introduce vulnerabilities. These flaws are identified using the Prexis security knowledge base.

The knowledge base entries inform Prexis's contextual analysis and the in-context remediation advice it provides. The C/C++ assessment module (CAM++) enables Prexis/Engine to identify and flag the security vulnerabilities for further analysis and remediation. The CAM++ module analyzes source code to identify programming flaws, including buffer overflow, privilege escalation, and race condition vulnerabilities.

Prexis uses contextual analysis to understand the context in which a security flaw exists, to determine whether the flaw represents a vulnerability, and to determine the severity of the potential risk it poses to the organization.

Coverity's Prevent tool (formerly known as SWAT) searches for defects in general, including some security flaws. Prevent is based on meta-level compilation (MC), which was implemented in two parts: an open compiler (xgcc) and a language for writing extensions (Metal). Coverity was later founded on the Meta-Level Compilation Project by Stanford professor Dr. Dawson Engler and four PhD students in the Computer Science Laboratory at Stanford University.

Coverity Prevent can detect many of the common security flaws, including buffer overflows, integer overflows, missing or insufficient validation of malicious data and string input, and format string vulnerabilities. Coverity Prevent is supported on Linux, HP-UX, FreeBSD, Windows, Solaris SPARC, and Solaris x86 platforms and works with the following compilers: Sun CC, MS Visual Studio, GCC, G++, ARM CC, Metrowerks CodeWarrior, and the Intel Compiler for C/C++.

PREfix and PREfast
Microsoft's PREfix and PREfast tools are both used to analyze code and detect potential defects. PREfix performs path-sensitive, interprocedural analysis after code is checked in, as opposed to during testing or postrelease. PREfast performs simple, intraprocedural checks on code before it is checked in.

These tools originally focused on classic reliability defects such as uninitialized memory, NULL or invalid pointers, memory leaks, and state violations. More recently, checks have been added for security flaws such as buffer overflows and format string defects.

PREfix works by walking the abstract syntax tree (AST) to follow various execution paths. The symbolic execution state is tracked in a virtual machine. The auto modeler generates behavioral descriptions (models) of each function from the virtual machine's information. Error analysis then finds and reports defects based on the state of the virtual machine. The analysis is neither complete nor sound but works well in practice [Pincus 02]. Microsoft plans to include PREfast in Visual Studio 2005 to scan applications built in C++.

Quality Assurance
There is a strong correlation between normal code defects and vulnerabilities [Alhazmi 05]. As a result, decreasing software defects can also be effective in eliminating vulnerabilities (although it is always more efficient to directly target security flaws). This section discusses some quality assurance techniques that have been specifically applied toward improving application security, including penetration testing, fuzz testing, code audits, developer guidelines and checklists, and independent security reviews.

Penetration Testing. Penetration testing generally implies probing an application, system, or network from the perspective of an attacker searching for potential vulnerabilities. Penetration testing is useful, especially if an architectural risk analysis is used to drive the tests.

The advantage of penetration testing is that it gives a good understanding of fielded software in its real environment. However, any black-box penetration testing that does not take the software architecture into account probably will not uncover anything deeply interesting about software risk.

Software that fails canned black-box testing, the kind practiced by the simplistic application security testing tools on the market today, is truly bad. This means that passing a cursory penetration test reveals little about the system's real security posture, but failing an easy, canned penetration test indicates a serious, troubling oversight.

Testing software to validate that it meets security requirements is essential. This testing includes serious attempts to attack it and break its security as well as scanning for common vulnerabilities. As discussed earlier, test cases can be derived from threat models, attack patterns, abuse cases, and specifications and design. Both white-box and black-box testing are applicable, as is testing for both functional and nonfunctional requirements.

Fuzz Testing. Fuzz testing is a method of finding software security holes by feeding purposely invalid and ill-formed data as input to program interfaces. Most fuzz testing is highly inefficient and requires a high volume of testing, using multiple variations and test passes. As a result, fuzz testing generally needs to be automated.

Fuzz testing is generally good at finding problems related to reliability, some subset of which may be vulnerabilities. For example, Forrester and Miller fuzz-tested over 30 GUI-based applications on Windows NT by subjecting them to streams of valid keyboard and mouse events and streams of random Win32 messages [Forrester 00].

When subjected to random valid input that could be produced by using the mouse and keyboard, 21 percent of the applications tested crashed and an additional 24 percent hung. When subjected to raw random Win32 messages, all the applications crashed or hung.

Fuzz testing is one of several ways of attacking network and application interfaces to discover implementation flaws. Other methods include reconnaissance, sniffing and replay, spoofing (valid messages), flooding (valid/invalid messages), hijacking/man-in-the-middle, malformed messages, and out-of-sequence messages.

The goals of fuzz testing can vary somewhat depending on the type of interface being tested. When testing an application to see if it properly handles a particular protocol, for example, goals include finding mishandling of truncated messages, incorrect length values, and illegal type codes that can lead to unstable operation of protocol implementations.

Code Audits. Source code should be audited or inspected for common security flaws and vulnerabilities. When looking for vulnerabilities, a good approach is to identify all points in the source code where the program accepts input from an untrusted source and to ensure that these inputs are properly validated. Any C library functions that are highly susceptible to security flaws should be carefully scrutinized.

Source code audits can be used to detect all classes of vulnerabilities but depend on the skill, patience, and tenacity of the auditors. However, some vulnerabilities can be difficult to detect. A buffer overflow vulnerability was detected in the lprm program, for example, despite its having been audited for such problems [Evans 98].

Code audits should always be performed on security-critical components such as identification and authorization systems. Expert reviewers may also be helpful, for example, in identifying instances of ad hoc security or encryption, and may be able to advise the use of established and proven mechanisms such as professional-grade cryptography.

Developer Guidelines and Checklists
Checklist-based design and code inspections can be performed to ensure that designs and implementations are free from known problems. Microsoft, for example, maintains an index of checklists on its Web site. Checklists are also a part of the TSP-Secure process.

While checklists can be a useful tool, they can also be misused, most commonly by providing someone with a checklist when that person does not understand the true nature of the items on the list. This can lead to missing known problems or to making unnecessary or unwarranted changes to a design or implementation.

Checklists serve three useful purposes. First, they serve as a reminder of things that we already know, so we remember to look for them. Second, they serve to document what problems the design or code has been inspected for and when these inspections took place. Third (perhaps the most valuable and most overlooked purpose), they serve as a means of communicating common problems between developers.

Checklists are constantly evolving. New issues need to be added. Old issues that no longer occur (possibly because their solutions have been institutionalized or technology has made them obsolete) should be removed from the checklist so that they do not consume continuing effort. Deciding which items should remain on or be removed from a checklist should be based on the effort required to check for those items and the actual number and severity of defects discovered.

Independent Security Review
Independent security reviews can vary significantly as to the nature and scope of the review. Security reviews can be the whole focus or a component of a wider review. Independent reviews are often initiated from outside of a development team. If done well, they can make valuable contributions to security; if done badly, they can distract the development team and cause effort to be directed in less than optimal ways.

Independent security reviews can lead to more secure systems. External reviewers bring an independent perspective, for example, in identifying and correcting invalid assumptions. Programmers developing large, complex systems are often too close to them and miss the big picture.

For example, developers of security-critical applications may spend considerable effort on specific aspects of security while failing entirely to address some other vulnerable areas.

Experienced reviewers will be familiar with common errors and best practices and should be able to provide a broad perspective, identifying process gaps, weaknesses in the architecture, and areas of the design and implementation that require additional or special attention.

Independent security reviews can also be useful as a management tool. Commissioning an independent review and acting on its findings can assist management in demonstrating that they have met due diligence requirements.

Additionally, an organization's relationship with regulatory bodies is often improved with the added assurance of independent reviews. It is also common for organizations to commission independent security reviews with the intention of making public statements about the results. This is particularly the case when a positive review results in a well-recognized certification.

Next in Part 5: Memory permissions and defense in depth
To read Part 1, go to “Secure software development principles.”
To read Part 2, go to “Systems Quality Requirements Engineering.”
To read Part 3, go to “Static analysis and quality assurance.”

(Editor's note: For more on embedded security, check out the cover story in the October issue of Embedded Systems Design Magazine, “Embedded systems security has moved to the forefront,” as well as “Employ a secure flavor of Linux.”)

This article is excerpted with permission from the book “Secure Coding in C and C++” by Robert C. Seacord, published by Addison-Wesley/Pearson Education, which holds all copyrights. It can be purchased online.

Robert Seacord is a senior vulnerability analyst with the CERT Coordination Center of the Software Engineering Institute. Noopur Davis is a visiting scientist with the SEI Software Engineering Management Program. Chad Dougherty is a member of the technical staff for the SEI Networked Systems Survivability Program. Nancy Mead is a senior member of the technical staff for the SEI Networked Systems Survivability Program. Robert Mead is a member of the technical staff for the SEI Networked Systems Survivability Program.

[Alhazmi 05] Alhazmi, O., et al. “Security Vulnerabilities in Software Systems.” Technical Report, Computer Science Department, Colorado State University, 2005.
[Evans 98] “Nasty Security Hole in 'lprm' (Bugtraq Archive).” 1998.
[Forrester 00] Forrester, J. E., and B. P. Miller. “An Empirical Study of the Robustness of Windows NT Applications Using Random Testing.” Proceedings of the Fourth USENIX Windows System Symposium, 2000.
[Pincus 02] Pincus, J. “Infrastructure for Correctness Tools.” 2002.
[Viega 00] Viega, J., et al. “ITS4: A Static Vulnerability Scanner for C and C++ Code.” Proceedings of the Sixteenth Annual Computer Security Applications Conference (ACSAC '00), 2000.

