Developing secure code using SPARK – Part 1

Benjamin M. Brosgol, AdaCore

June 01, 2013

The modern cyberworld is a dangerous place. Software must not only be reliable (i.e., perform its intended functions) but also robust: it must withstand attempts at subversion from the array of threats posed by malevolent sources. Implementing security as an add-on isn’t effective, and performing static analysis retrospectively on the source code of an existing system may uncover bugs and vulnerabilities but can never demonstrate their absence. Instead, developers have to consider preventive measures from the start and, for the most critical kinds of systems, implement an approach backed by mathematical rigor.

One technique is to use an appropriate programming language that permits specifying relevant security-oriented properties (“contracts”), and then to demonstrate statically, by applying automated tools as the system is being designed and implemented, that these properties are satisfied.

The SPARK language, a formally analyzable Ada subset augmented with special comment syntax for contracts, illustrates this approach. SPARK has been used, in conjunction with a development methodology known as Correctness by Construction (CbyC), to produce a variety of systems that need to achieve the highest levels of security and/or safety.
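
To make this concrete, here is a minimal sketch of what such contracts look like in the SPARK 2005 annotation style, where contracts are Ada comments beginning with “--#”. The package and its annotations are invented for illustration, not taken from a real system:

   -- Illustrative SPARK 2005 package spec; the "--#" lines are the contracts.
   package Counter
   --# own Count : Integer;   -- announces the package's hidden state
   is
      -- Increment may read and update only Count (global/derives), must be
      -- called only when Count < Integer'Last (pre), and must leave Count
      -- exactly one greater than its entry value, written Count~ (post).
      procedure Increment;
      --# global in out Count;
      --# derives Count from Count;
      --# pre  Count < Integer'Last;
      --# post Count = Count~ + 1;
   end Counter;

The SPARK tools check these annotations statically: the data- and information-flow contracts are verified exhaustively, and the pre/postconditions give rise to verification conditions that can be discharged by the toolset’s automated prover.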

Because defects are prevented or caught early, the traditional debugging phase is much reduced. The level of effort required when using SPARK is roughly the same as for conventional development, but with the benefit of much higher assurance that the system does what it is supposed to do and does not do what it is not supposed to do.

Software security basics
Security is commonly defined as the protection of assets against unauthorized attempts to compromise their confidentiality, integrity, or availability. In the context of a computer system, an asset needs to be considered in a very general sense. It may consist of sensitive data (employee salaries or social security numbers), physical equipment that the system is controlling or monitoring, or even, and especially, the internal data structures and executable code of the software itself. In a safety-critical system, the assets may be human beings: for example, passengers and crew on an aircraft, or a patient receiving medication from an infusion pump.

The measures taken to achieve sufficient security depend on the value of the assets (or the cost of recovering from a security breach), the perceived likelihood of the threats, and the skills of potential attackers. These measures fall into two categories:

  • Security functions such as user authorization or data encryption algorithms that are implemented by the software as part of the system architecture, and
  • Security assurance steps taken by the developer to demonstrate that the security functions have been implemented correctly, and that the rest of the system contains no vulnerabilities that could be exploited to defeat these functions (a sketch of how contracts tie the two categories together follows this list).
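
A SPARK contract is one place where these two categories meet: the required behavior of a security function is written down as an annotation, and the assurance step then consists of proving that the implementation satisfies it. The following sketch is hedged accordingly; the package, the policy, and all names are invented for illustration:

   -- Illustrative sketch only: the authorization policy is invented.
   package Access_Control is

      type Role is (Guest, Operator, Administrator);

      -- Security function: decides whether a role may perform some
      -- privileged action. The "--# return" annotation states the policy
      -- that the assurance step must prove the body satisfies.
      function Is_Authorized (Who : Role) return Boolean;
      --# return Who = Administrator;

   end Access_Control;

An implementation that, say, also returned True for Operator would be rejected by the proof tools rather than surviving until testing or deployment.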

This partitioning is illustrated in the Common Criteria standard [1], which is basically a catalog of security-related functional and assurance requirements that can be used to evaluate the security of products across a variety of application domains. In the Common Criteria the security assurance requirements are grouped into seven increasingly demanding sets, known as evaluation assurance levels (EALs); see Figure 1. The lowest level, EAL1, “…is applicable where some confidence in correct operation is required, but the threats to security are not viewed as serious.” [1] At the other extreme, EAL7 “…is applicable…in extremely high-risk situations and/or where the high value of the assets justifies the higher costs.” [1]
Figure 1: The Common Criteria defines a range of evaluation assurance levels (EALs).
