
Compilers in the alien world of functional safety

Across sectors, the world of functional safety imposes new requirements on developers. Functionally safe code must include defensive measures to protect against unexpected events that can arise from a variety of causes. For example, memory corruption due to coding errors or cosmic ray events can lead to the execution of code paths that are “impossible” according to the logic of the code. High-level languages, particularly C and C++, include a surprising number of features whose behavior is not prescribed by the language specification to which the code adheres. This undefined behavior can lead to unexpected and potentially disastrous outcomes that would be unacceptable in a functionally safe application. For these reasons, standards require that defensive coding is applied, that code is testable, that adequate code coverage can be demonstrated, and that application code is traceable to requirements to ensure that the system implements them fully and uniquely.

Code must also achieve high levels of code coverage, and in some sectors—particularly automotive—it is common for the design to require sophisticated external diagnostic, calibration, and development tools. The problem is that practices such as defensive coding and external data access are not part of a world that compilers recognize. For example, neither C nor C++ makes any allowance for memory corruption, so unless the code designed to protect against it remains reachable even when no such corruption occurs, it may simply be discarded during optimization. Consequently, defensive code must be syntactically and semantically reachable if it is not to be “optimized away.”

Instances of undefined behavior can also cause surprises. It is easy to suggest that they should simply be avoided, but it is often difficult to identify them. Where they exist, there can be no guarantee that the behavior of the compiled executable code will match the developers’ intentions. The “back door” access to data used by debugging tools represents yet another situation for which the language makes no allowance, and which can therefore have unexpected consequences.
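To see how easily such surprises arise, consider a classic illustration (our own example, not one drawn from this article): signed integer overflow is undefined behavior in C, so a compiler is entitled to assume that it never happens, and a seemingly sensible defensive check can be folded away entirely.

#include <limits.h>

/* The compiler may assume signed overflow cannot occur, so at typical
   optimization levels this check can be reduced to "return 0". */
int increment_would_overflow(int x)
{
    return (x + 1 < x);
}

/* A well-defined alternative expresses the same intent without
   relying on undefined behavior. */
int increment_would_overflow_safe(int x)
{
    return x == INT_MAX;
}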

Compiler optimization can have a major impact on all of these areas, because none of them is part of the remit of compiler vendors. Optimization can result in apparently sound defensive code being eliminated where it is associated with “infeasibility”—that is, where it exists on paths that cannot be exercised by any set of possible input values. Even more alarmingly, defensive code shown to be present during unit testing may well be eliminated when the system executable is constructed. Achieving coverage of defensive code during unit test therefore does not guarantee that the code is present in the completed system.

In this strange land of functional safety, the compiler may be out of its element. That is why object code verification (OCV) represents best practice for any system for which there are dire consequences associated with failure—and indeed, for any system where only best practice is good enough.

Before and after compilation

Verification and validation practices championed by functional safety, security, and coding standards such as IEC 61508, ISO 26262, IEC 62304, MISRA C, and MISRA C++ place considerable emphasis on showing how much of the application source code is exercised during requirements-based testing.

Experience has shown that if code is demonstrated to perform correctly, the probability of failure in the field is considerably lower. And yet because the focus of this laudable endeavor is on the high-level source code (no matter what the language), such an approach places a great deal of faith in the ability of the compiler to create object code that reproduces precisely what the developers intended. In the most critical of applications, that implicit assumption cannot be justified.

It is inevitable that the control and data flow of object code will not be an exact mirror of the source code from which it was derived, and so proving that all source code paths can be exercised reliably does not prove the same of the object code. Given that there is a 1:1 relationship between object code and assembler, a comparison between source and assembly code is telling. Consider the example shown in Figure 1, where the assembler code on the right has been generated from the source code on the left (using a TI compiler with optimization disabled).


Figure 1: The assembler code on the right has been generated from the source code on the left, illustrating the telling comparison between source and assembly code. (Source: LDRA)
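The figure’s source code cannot be reproduced in this text, but the shape of such a function can be sketched. What follows is a hypothetical reconstruction for illustration only, assuming a small function named f_while4() (the name used with this example later in the article) built from a loop and a branch:

/* Hypothetical reconstruction for illustration only; the actual source
   shown in Figure 1 is not reproduced in this text. */
int f_while4(int p1, int p2)
{
    int i = 0;
    while (i < 4) {
        if (p1 < p2) {
            p1++;   /* taken while p1 remains the smaller value */
        } else {
            p2++;   /* taken once p1 has caught up */
        }
        i++;
    }
    return p1 + p2;
}

With a sketch of this shape, a single call such as f_while4(0,3) drives execution through both branch arms over the four iterations, which is consistent with the 100% source coverage claim made for this example later in the article.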

As illustrated later, when this source code is compiled, the flowgraph for the resulting assembler code is quite different to that for the source because the rules followed by C or C++ compilers permit them to modify the code in any way they like, provided the binary behaves “as if it were the same.”

In most circumstances, that principle is entirely acceptable—but there are anomalies. Compiler optimizations are essentially mathematical transforms applied to an internal representation of the code. These transforms “go wrong” if their assumptions do not hold, as can happen when the code base includes instances of undefined behavior.
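Another well-known case (again our own example, not the article’s) is a defensive null check placed after a dereference. Because the dereference allows the compiler to assume the pointer is non-null, the check may be deleted as dead code:

/* The dereference of p implies p != NULL under the language rules,
   so the optimizer may remove the defensive branch entirely. */
int read_value(const int *p)
{
    int value = *p;
    if (p == NULL) {
        return -1;   /* defensive path: may be "optimized away" */
    }
    return value;
}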

Only DO-178C, used in the aerospace industry, places any focus on the potential for dangerous inconsistencies between developer intent and executable behavior—and even then, it is not difficult to find advocates of workarounds with clear potential to leave those inconsistencies undetected. However such approaches are excused, the fact remains that the differences between source and object code can have devastating consequences in any critical application.

Developer intent versus executable behavior

Despite the clear differences between source and object code flow, those differences are not in themselves the primary concern. Compilers are generally highly reliable applications, and while there may be bugs as in any other software, a compiler’s implementation will generally fulfill its design requirements. The problem is that those design requirements do not always reflect the needs of a functionally safe system.

In short, a compiler can be assumed to be functionally true to the objectives of its creators. But that may not be entirely what is wanted or expected, as illustrated in Figure 2 below with an example resulting from compilation with the Clang compiler.


Figure 2: A compilation with the Clang compiler. (Source: LDRA)

It is clear that the defensive call to the ‘error’ function has not been expressed in the assembler code.

The ‘state’ object is only modified when it is initialized and within the ‘S0’ and ‘S1’ cases, and so the compiler can reason that the only values given to ‘state’ are ‘S0’ and ‘S1.’ The compiler concludes that the ‘default’ is not needed because ‘state’ will never hold any other values, assuming that there is no corruption—and indeed, the compiler makes exactly that assumption.
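The source behind Figure 2 is not reproduced in this text, but the description implies a pattern along the following lines. This is a hedged reconstruction, and the names and details are assumptions:

/* Hedged reconstruction of the pattern described above; the exact code
   behind Figure 2 is not shown here, so names and details are assumed. */
typedef enum { S0 = 13, S1 = 23 } state_t;

static state_t state = S0;

extern void error(void);

void update(void)
{
    switch (state) {
    case S0:
        state = S1;
        break;
    case S1:
        state = S0;
        break;
    default:
        error();   /* defensive path: unreachable per the visible source */
        break;
    }
}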

The compiler has also decided that because the actual values of the states (13 and 23) are not used in a numeric context, it will simply use the values 0 and 1 to toggle between states, updating the state value with an exclusive “or.” The binary adheres to the “as if” obligation, and the code is fast and compact. Within its terms of reference, the compiler has done a good job.
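In effect, the optimized object code behaves like the following C-level paraphrase of the transformation (not the actual generated code):

/* C-level paraphrase of the described transformation, not the actual
   generated code: the state collapses to a single bit toggled with an
   exclusive "or", and the default/error path disappears entirely. */
static unsigned char state_bit;   /* 0 stands for S0, 1 stands for S1 */

void update_as_optimized(void)
{
    state_bit ^= 1u;
}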

This behavior has implications for “calibration” tools that use the linker memory map file to access objects indirectly, and for direct memory access via a debugger. Again, such considerations are not part of the compiler’s remit and are therefore not considered during optimization and/or code generation.

Now suppose the code remains unchanged, but its context in the code presented to the compiler changes slightly, as in Figure 3.


Figure 3: The code remains unchanged but its context in the code presented to the compiler changes slightly. (Source: LDRA)

There is now an additional function, which returns the value of the state variable as an integer. This time the absolute values 13 and 23 matter in the code submitted to the compiler. Even so, those values are not manipulated within the update function (which remains unchanged) and are only apparent within our new “f” function.
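Such a function might look like this; again, a hedged sketch rather than the actual code behind Figure 3:

/* Hedged sketch of the additional function described above. */
int f(void)
{
    return (int)state;   /* the numeric values 13 and 23 now matter here */
}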

In short, the compiler continues (rightly) to make value judgements about where the values of 13 and 23 should be used—and they are by no means applied in all situations where they might be.

If the new function is changed to return a pointer to our state variable, the assembler code changes substantially. Because there is now the potential for aliased accesses through a pointer, the compiler can no longer deduce what is happening with the state object. As shown in Figure 4 below, it cannot conclude that the values of 13 and 23 are unimportant, and so they are now expressed explicitly within the assembler.


Figure 4: If the new function is changed to return a pointer to our state variable, the assembler code changes substantially. It cannot conclude that the values of 13 and 23 are unimportant and so they are now expressed explicitly within the assembler (Source: LDRA).
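The pointer-returning variant might be sketched as follows (again an assumption; the actual code behind Figure 4 is not reproduced here):

/* Hedged sketch of the pointer-returning variant. */
state_t *f(void)
{
    return &state;   /* callers may now read or write 'state' through this
                        pointer, so the real 13/23 values must be retained */
}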

Implications for source code unit test

Now consider the example in the context of an imaginary unit test harness. Because the harness must access the code under test, the value of the state variable is manipulated, and as a result the default path is not “optimized away.” Such an approach is entirely justifiable in a test tool that has no context relating to the remainder of the source code and that is required to make everything accessible, but as a side effect it can disguise the legitimate omission of defensive code by the compiler.

The compiler recognizes that an arbitrary value is written to the state variable via a pointer, and again, it cannot conclude that the values of 13 and 23 are unimportant. Consequently, they are now expressed explicitly within the assembler. On this occasion it cannot conclude that S0 and S1 represent the only possible values for the state variable, which means that the default path may be feasible. As shown in Figure 5, the manipulation of the state variable achieves its aim and the call to the error function is now apparent in the assembler.
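What the harness effectively does can be sketched as follows. This is a hypothetical illustration; real test tools generate equivalent scaffolding automatically:

/* Hypothetical sketch of the harness's effect on the code under test. */
void test_update_default_path(void)
{
    *f() = (state_t)99;   /* write an arbitrary value through the pointer */
    update();             /* the default branch is now feasible, so the
                             call to error() must survive optimization */
}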


Figure 5: The manipulation of the state variable achieves its aim and the call to the error function is now apparent in the assembler. (Source: LDRA)

However, this manipulation will not be present in the code that will be shipped within a product, and so the call to error() is not really there in the complete system.

The importance of object code verification

To illustrate how object code verification can help to resolve this conundrum, consider again the first example code snippet, shown in Figure 6:


Figure 6: The first example code snippet, revisited to show how object code verification can resolve the conundrum of defensive code that is absent from the complete system. (Source: LDRA)

This C code can be demonstrated to achieve 100% source code coverage by means of a single call thus:

f_while4(0,3);

The code can be reformatted to a single operation per line and represented on a flowgraph as a collection of “basic block” nodes, each of which is a sequence of straight-line code. The relationship between the basic blocks is represented in Figure 7 using directed edges between the nodes.


Figure 7: The relationship between the basic blocks, represented using directed edges between the nodes. (Source: LDRA)
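Applied to the hypothetical f_while4() sketch above, one plausible decomposition into basic blocks looks like this (the node labels are arbitrary):

    BB1: i = 0              (entry)
    BB2: while (i < 4)      (loop test; edges to BB3 and BB7)
    BB3: if (p1 < p2)       (branch decision; edges to BB4 and BB5)
    BB4: p1++               ("then" arm; edge to BB6)
    BB5: p2++               ("else" arm; edge to BB6)
    BB6: i++                (loop increment; edge back to BB2)
    BB7: return p1 + p2     (exit)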

When the code is compiled, the result is as shown below (Figure 8). The blue elements of the flow graph represent code that has not been exercised by the call f_while4(0,3).

By leveraging the one-to-one relationship between object code and assembler code, this mechanism exposes which parts of the object code are unexercised, prompting the tester to devise additional tests and achieve complete assembler code coverage—and hence achieve object code verification.


Figure 8: The result when the code is compiled. The blue elements of the flow graph represent code that has not been exercised by the call f_while4(0,3). (Source: LDRA)
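For instance, with the hypothetical f_while4() sketch above, a complementary call might be devised to reach the unexercised object code; which additional calls are actually required depends entirely on the real generated code:

f_while4(0,3);   /* original test: exercises both source branch arms */
f_while4(3,0);   /* complementary ordering: may reach object-code paths
                    that the first call leaves unexercised */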

Clearly, object code verification has no power to prevent the compiler from following its design rules and inadvertently circumventing the best intentions of developers. But it can, and does, bring any such mismatches to the attention of the unwary.

Now consider that principle in the context of the earlier “call to error” example. The source code in the completed system would, of course, be identical to that proven at unit test level and so a comparison of that would reveal nothing. But the application of object code verification to the completed system would be invaluable in providing assurance that essential behavior is expressed as the developers intended.

Best practice in any world

If the compiler can handle code differently in the unit test harness than in the completed system, is source code unit test coverage worthwhile? The answer is a qualified “yes.” Many systems have been certified on the evidence of such artifacts and have proven safe and reliable in service. But for the most critical systems across all sectors, if the development process is to withstand the most detailed scrutiny and adhere to best practice, then source-level unit test coverage must be supplemented by OCV. It is reasonable to assume that a compiler fulfils its design criteria, but those criteria do not include functional safety considerations. Object code verification currently represents the most assured approach to functional safety in a world where compiler behavior can conform with the language standards and yet still have a significant negative impact on the safety of the system.


Chris Tapp is a Field Applications Engineer at LDRA with more than 20 years’ experience of embedded software development. He graduated from the University of Durham in 1987 and has spent most of his career working within the automotive, industrial control, and information technology industries, mainly as a self-employed consultant. He has been involved with MISRA since 2001 and is currently chairman of the MISRA C++ working group and an active member of the MISRA C working group. He has been with LDRA since 2007, where he specializes in programming standards.
