The end of the develop-first, test-later approach to software development

February 20, 2018

Mark Pitchford, LDRA

No one would accept an engineering approach for a suspension bridge that was based on guessing at steel cable sizes and then loading the deck to see whether it collapsed, or sizing elevator motors by trying them out to see whether they caught fire. And yet these approaches are exactly analogous to how software developers often approach their work.

The development cycle for software is largely reactive: code is developed using an informal agile approach, with no risk mitigation and no coding guidelines. Executables are then subjected to performance, penetration, load and functional tests in an attempt to find the vulnerabilities that almost certainly result, in the hope that every issue will be found and every hole adequately plugged. But whether a product is safety-critical or not, it’s time for software developers to embrace the same sound processes as other engineering disciplines: defining requirements, creating a design to fulfil those requirements, developing a product that is true to the design, and then testing it to show that it is. Even within this process, developers have alternatives to choose from, such as CERT C’s application-centric approach to the detection of issues versus MISRA’s ethos of using design patterns to prevent their introduction. With either approach, software developers must learn to design in security up front rather than hope to remove insecurity later.
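To make the distinction concrete, consider a minimal sketch in C (the function and buffer names are invented for this illustration, not taken from either standard). The first function contains the kind of defect a CERT C checker detects after the fact, for example under rule STR31-C; the second follows the preventive, design-it-out philosophy associated with MISRA:

#include <string.h>

#define BUF_LEN 32u

/* Detection: a CERT C checker flags the unbounded copy below
 * (cf. STR31-C), because a sufficiently long 'src' overflows 'dest'. */
void copy_id_detect(char *dest, const char *src)
{
    strcpy(dest, src);
}

/* Prevention: the bound is designed in from the start, so the
 * overflow cannot be written in the first place. */
void copy_id_prevent(char dest[BUF_LEN], const char *src)
{
    (void)strncpy(dest, src, BUF_LEN - 1u);
    dest[BUF_LEN - 1u] = '\0';   /* guarantee null termination */
}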

Safe & Secure Application Code Development

The traditional approach to secure software development is mostly a reactive one – develop the software and then use penetration, fuzz and functional testing to expose any weaknesses. In isolation, however, that is not good enough to comply with a functional safety standard such as DO-178C (in the aerospace sector), IEC 62304 (medical devices) or ISO 26262 (automotive). These demand that security factors with a safety implication are considered from the outset, because a safety-critical system cannot be safe if it is not secure.

Using ISO 26262 as an example, Figure 1 illustrates a V-model with cross-references to the ISO 26262 standard and the tools likely to be deployed at each phase in the development of complex automotive software. (Other process models, such as agile and waterfall, can be equally well supported.) The application of such a process does not negate the value of penetration and fuzz testing, but allows those techniques to provide evidence of system robustness rather than to expose the system’s vulnerabilities.


Figure 1: Software-development V-model with cross-references to ISO 26262 and standard development tools (Source: LDRA)

The outputs from the system design phase (top left) include technical safety requirements refined and allocated to hardware and software. In a connected system, these include security requirements, because the action taken to deal with each safety-threatening security issue needs to be proportionate to the risk. Maintaining traceability between these requirements and the products of subsequent phases, however, can cause a major project management headache.

The specification of software requirements involves deriving them from the system design, isolating the software-specific elements, and detailing how the lower-level, software-related requirements evolve, including those with a security-related element.


Figure 2: Graphical representation of Control and Data Flow as depicted in the LDRA tool suite (Source: LDRA)

Next comes the software architectural design phase, perhaps using a UML graphical representation. Static analysis tools provide graphical representations of the relationships between code components for comparison with the intended design (Figure 2).
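As a small illustration of what such a graph conveys, consider the following C fragment (the names are invented for this sketch). A static analysis tool can render its control flow – one node per basic block, with edges for the loop and the branch – and trace the data flow of 'limit' from parameter to use, so that both can be compared against the architectural design:

#include <stdint.h>

/* Illustrative function: the loop contributes a back edge to the
 * control-flow graph, the 'if' contributes a branch, and 'limit'
 * flows from parameter to the assignment inside the loop. */
int32_t clamp_sum(const int32_t *values, uint32_t count, int32_t limit)
{
    int32_t sum = 0;

    for (uint32_t i = 0u; i < count; ++i)
    {
        sum += values[i];

        if (sum > limit)
        {
            sum = limit;   /* clamp to the design maximum */
        }
    }

    return sum;
}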

Figure 3 illustrates a typical example of a table from ISO 26262-6:2011 relating to software design and implementation. It shows the coding and modelling guidelines to be enforced during implementation, along with an indication of where compliance can be confirmed using automated tools.


Figure 3: ISO 26262 coding and modelling guidelines (Source: LDRA)

The “use of language subset” (topic 1b) exemplifies the impact of security considerations. Language subsets have traditionally been viewed as an aid to safety, but security enhancements to the MISRA C:2012 standard and security-specific standards such as CWE and CERT C reflect an increasing interest in the role subsets have to play in combating security issues. Compliance with them can likewise be confirmed by means of static analysis (Figure 4).
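For instance, a fragment like the following (a sketch with invented names) would attract findings from static analysis against either family of rules: MISRA C:2012 Amendment 1 requires values received from external sources to be validated (Directive 4.14), while CERT C addresses the unchecked conversion (ERR34-C) and the potentially truncating cast (INT31-C):

#include <stdint.h>
#include <stdlib.h>

/* Hypothetical parser for a length field received from a network peer. */
uint16_t parse_length(const char *field)
{
    long val = strtol(field, NULL, 10);  /* conversion errors undetected:
                                            cf. CERT C ERR34-C and
                                            MISRA C:2012 Dir 4.14        */

    return (uint16_t)val;                /* cast may silently truncate:
                                            cf. CERT C INT31-C           */
}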


Figure 4: Coding standards violations as represented by the LDRA tool suite (Source: LDRA)


Dynamic analysis techniques (involving the execution of some or all of the code) are applicable to unit, integration and system testing. Unit testing focuses on particular software procedures or functions in isolation, whereas integration testing ensures that safety, security and functional requirements are met when units are working together in accordance with the software architectural design.


Figure 5: Unit testing with the LDRA tool suite (Source: LDRA)

Figure 5 shows how the software interface is exposed at the function scope, allowing the developer to enter inputs and expected outputs to form a test harness. That harness is then compiled and executed on the target hardware, and the actual and expected outputs are compared. This technique demonstrates functional correctness in accordance with requirements, as well as resilience to issues such as boundary conditions, null pointers and default switch cases – all important security considerations.
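The principle can be sketched in plain C as below (the unit under test and its cases are invented for illustration; tools such as the one shown in Figure 5 generate and manage the harness automatically). Each case pairs inputs with an expected output, the harness is compiled and run, and the actual results are compared against expectations, with boundary values deliberately included:

#include <limits.h>
#include <stdio.h>

/* Unit under test (illustrative): saturating addition. */
static int sat_add(int a, int b)
{
    if ((b > 0) && (a > INT_MAX - b)) { return INT_MAX; }
    if ((b < 0) && (a < INT_MIN - b)) { return INT_MIN; }
    return a + b;
}

/* Minimal hand-written harness: execute each case and compare the
 * actual result with the expected output. */
int main(void)
{
    struct { int a; int b; int expected; } cases[] = {
        { 1,       2,  3       },   /* nominal case       */
        { INT_MAX, 1,  INT_MAX },   /* boundary condition */
        { INT_MIN, -1, INT_MIN },   /* boundary condition */
    };

    for (size_t i = 0u; i < sizeof cases / sizeof cases[0]; ++i)
    {
        int actual = sat_add(cases[i].a, cases[i].b);
        printf("case %zu: %s\n", i,
               (actual == cases[i].expected) ? "PASS" : "FAIL");
    }

    return 0;
}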

In addition to showing that software functions correctly, dynamic analysis is used to generate structural coverage metrics. Both MISRA C:2012 (Dir 3.1) and the security standard CWE (Figure 6) require that code coverage analysis is used to ensure that there is no hidden functionality that can increase an application’s attack surface and expose weaknesses.
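The kind of weakness this guards against can be sketched as follows (the backdoor and the function names are invented). No requirement traces to the maintenance branch, so no requirements-based test exercises it; structural coverage reporting shows the branch uncovered and prompts its review or removal:

#include <stdbool.h>
#include <string.h>

/* Stand-in for the real credential check. */
static bool check_credentials(const char *user, const char *pass)
{
    (void)user;
    (void)pass;
    return false;
}

bool authenticate(const char *user, const char *pass)
{
    if (strcmp(user, "maint") == 0)
    {
        return true;   /* hidden functionality: never reached by
                          requirements-based tests, so coverage
                          analysis reveals it as untested code   */
    }

    return check_credentials(user, pass);
}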


Figure 6: CWE requirement for code coverage analysis (Source: CWE)

Choosing a Language Subset

Although there are several language subsets (or less formally, “coding standards”) to choose from, these have traditionally been focused primarily on safety rather than security. With the advent of the Industrial Internet of Things and connected cars and medical devices, that focus has shifted towards security since these systems, once naturally secure through isolation, are now increasingly accessible to aggressors.

There are, however, subtle differences between the subsets.

Retrospective adoption

MISRA C:2012 states that “MISRA C should be adopted from the outset of a project. If a project is building on existing code that has a proven track record then the benefits of compliance with MISRA C may be outweighed by the risks of introducing a defect when making the code compliant.” This contrasts with CERT C, which notes that although “the priority of this standard is to support new code development…”, “[a] close-second priority is supporting remediation of old code.”

The level of risk associated with a compromise of the system should influence which approach is taken. While the retrospective application of any subset is better than nothing, it does not represent best practice.

Relevance to safety, high-integrity and high-reliability systems

MISRA C:2012 “define[s] a subset of the C language in which the opportunity to make mistakes is either removed or reduced. Many standards for the development of safety-related software require or recommend a language subset, and this can also be used to develop any application with high-integrity or high-reliability requirements.” That statement rightly implies that MISRA C was appropriate for security-critical applications even before the security enhancements introduced by MISRA C:2012 Amendment 1.

CERT C attempts to be more all-encompassing, covering application programming (e.g., POSIX) as well as the C language. That is reflected in its introductory suggestion that “safety-critical systems typically have stricter requirements than are imposed by this standard … However, the application of this coding standard will result in high-quality systems that are reliable, robust, and resistant to attack.”
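Its POSIX coverage is illustrated by rules such as POS36-C, which concerns the order in which a process relinquishes privileges. A minimal sketch (with error handling simplified to abort() for brevity) might look like this:

#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

/* POS36-C: drop group privileges before user privileges. Once setuid()
 * succeeds, the process may no longer have the authority to change its
 * group, so reversing these calls can leave elevated group privileges. */
void drop_privileges(uid_t uid, gid_t gid)
{
    if (setgid(gid) != 0)
    {
        abort();   /* simplified error handling for this sketch */
    }

    if (setuid(uid) != 0)
    {
        abort();
    }
}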
