
Using software verification techniques in non-safety critical embedded software designs

The aerospace and automotive industries are well known for the rigorous software quality standards to which they must adhere, but developers of non-safety critical systems can also benefit from the type of standards employed within these industries. In addition to producing reliable software, development costs can be contained and requirements met more easily by observing a software quality process.

Software development often proves far more expensive than expected; bugs discovered late in the development cycle send costs soaring and risk the integrity and safety of a system, especially if the software has been deployed. Obviously, careful planning, organization and a team with the correct skills all help. However, it is verification and validation (V&V) that identify when and how the development process drifted from what was intended or required by the user.

What's the difference between verification and validation? Validation focuses on producing the right software system, while verification ensures the software is built the right way. V&V should be evident at each stage of development and conducted with reference to the outputs from previous stages.

Verification is at the hub of a quality process, evaluating whether or not a product, service, or system complies with a regulation, specification, or conditions imposed at the start of a development phase.

Since its inception in the early 1970s, the sequential waterfall model has served as a framework for modern alternatives (Figure 1, below). Typically, earlier phases need to be revisited as developers work iteratively and requirements come together, and as users test prototype versions of the system.

Because of this iterative approach, it is even more important to apply suitable V&V techniques at each stage and within each iteration of the cycle.

Figure 1: The familiar waterfall process for a typical software development life-cycle.

Evidence indicates that the earlier a defect is discovered in development, the less impact it has on both the timescales and cost. There is much to gain by ensuring that requirements are captured in full, they are well understood, and they are specified completely and unambiguously.

Formal methods of tracking requirements are based on a mathematical approach to the specification, development and verification of software and hardware systems. Formal methods can vary from using commonly accepted notation to the full formality of theorem proving or automated deduction, a method of proving mathematical theorems by a computer program, which is currently the most developed subfield of automated reasoning (AR).

Although the cost of using formal methods often limits them to applications where a high level of software integrity is required, some degree of formal specification reaps benefits at later stages in the process for any software application. It is at this point that test data is constructed so that the system may later be validated against requirements.

Traditionally, the design of large systems follows a top-down, functional decomposition approach. The system is broken down into subsystems, which pass data and control across defined interfaces.

Subsystems normally comprise a number of program modules or units, and each module has a number of routines which perform distinct tasks. Coupling and cohesion are two important quality metrics considered during the design of a software system. A high degree of coupling implies an over-dependency between modules and indicates poor structure. Modules with low cohesion are typically difficult to maintain, test, reuse, and even to understand.
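As a hedged illustration of these metrics, the sketch below assumes a hypothetical logging subsystem: the sensor module depends only on a narrow interface, keeping coupling low, while each class keeps a single, cohesive responsibility.

```cpp
#include <iostream>
#include <string>

// Hypothetical example: a module that depends only on a narrow interface
// (low coupling) rather than on the internals of a concrete logger.
class Logger {
public:
    virtual ~Logger() = default;
    virtual void write(const std::string& message) = 0;
};

// A cohesive implementation: its only responsibility is console output.
class ConsoleLogger : public Logger {
public:
    void write(const std::string& message) override {
        std::cout << message << '\n';
    }
};

// The sensor module knows nothing about how logging is implemented,
// so either side can change, or be unit-tested, in isolation.
class SensorMonitor {
public:
    explicit SensorMonitor(Logger& log) : log_(log) {}
    void report(int reading) {
        log_.write("sensor reading: " + std::to_string(reading));
    }
private:
    Logger& log_;
};

int main() {
    ConsoleLogger logger;
    SensorMonitor monitor(logger);
    monitor.report(42);
    return 0;
}
```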

Object-oriented (OO) methodologies add a further dimension, with data and functionality encapsulated by the concept of an object which interacts with other objects. An OO design is driven by the choice of object types, or classes, and their representation.

The quality of a design can be improved and verified by adhering to recognized practices and patterns. For example, the use of 'multiple inheritance' within class hierarchies is often restricted in safety-critical systems to avoid dangerous or confusing configurations. While this can be checked at the programming stage with automatic analysis tools, it should be considered at the design stage (Figure 2, below).

Figure 2: The derived class, IOFile, inherits attributes from both InputFile and OutputFile, which both inherit from File. While this seems perfectly fine, there may be duplicated attributes, such as the file name, which is derived from the uppermost class, File. An inheritance hierarchy with more than one path between a base class and a derived class introduces a number of issues and should be avoided or countered in some way to remove any ambiguity.
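A minimal C++ sketch of the hierarchy in Figure 2, with illustrative member names: without virtual inheritance, IOFile would carry two copies of the File sub-object and any reference to the file name would be ambiguous; declaring the inheritance virtual is one common way to remove that ambiguity.

```cpp
#include <iostream>
#include <string>

// Illustrative diamond hierarchy from Figure 2. Virtual inheritance
// ensures IOFile contains a single File sub-object, so the 'name'
// attribute is neither duplicated nor ambiguous.
class File {
public:
    explicit File(const std::string& n) : name(n) {}
    std::string name;
};

class InputFile : public virtual File {
public:
    explicit InputFile(const std::string& n) : File(n) {}
};

class OutputFile : public virtual File {
public:
    explicit OutputFile(const std::string& n) : File(n) {}
};

class IOFile : public InputFile, public OutputFile {
public:
    // The most derived class initializes the shared virtual base.
    explicit IOFile(const std::string& n)
        : File(n), InputFile(n), OutputFile(n) {}
};

int main() {
    IOFile f("data.log");
    std::cout << f.name << '\n';   // unambiguous: exactly one File::name
    return 0;
}
```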

With the advent of model-based designs, much verification can now be automated. Unit testing, previously applied only to code, can be conducted in simulations. Given a particular precondition and series of inputs, the outputs and post-conditions may be checked. The model can even be instrumented such that intermediate conditions can also be checked to ensure the correctness of different paths through the design, since it is quite possible to achieve the correct results for the wrong reasons through coincidental correctness.
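The contrived sketch below, not drawn from the article, shows coincidental correctness: a faulty routine happens to produce the expected output for the one test input chosen, so a check of outputs alone would pass.

```cpp
#include <cassert>

// Intended behaviour: return twice the input.
// The faulty implementation squares it instead, yet both agree when x == 2,
// so a single test vector gives the right result for the wrong reason.
int doubleValue(int x) {
    return x * x;   // defect: should be x * 2
}

int main() {
    assert(doubleValue(2) == 4);      // passes coincidentally
    // A second, independent test vector exposes the defect:
    // assert(doubleValue(3) == 6);   // would fail
    return 0;
}
```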

Source code generally brings the first opportunity to apply sophisticated tools to verify and test an application, and it is also the point at which many of the defects are introduced. Poor programming practices and informal testing contribute to software that both fails to perform correctly and is difficult to understand and maintain. While many organizations use style guides to promote conformity and encourage greater care, it is only a small step towards the compliance striven for by developers of safety-critical systems.

Languages such as C and C++ offer great flexibility, which comes with the potential for unsafe use of the language. With the introduction of MISRA-C++:2008, the Motor Industry Software Reliability Association (MISRA) has devised a set of rules that restricts the developer to a safe subset of the C++ language and aims for consistency across platforms. The C++ guidelines build on the MISRA-C standard, originally released in 1998 and then improved in 2004. Constructs with behavior deemed unspecified, undefined, indeterminate, or implementation-defined are prohibited, or are at least advised to be used with caution, and static checking tools enforce compliance.
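A hedged illustration of the kind of construct such rules target; this generic sketch shows unspecified evaluation order and implementation-defined integer width, and is not a quotation of any specific MISRA rule.

```cpp
#include <cstdint>
#include <iostream>

int sideEffect(int& counter) {
    return ++counter;
}

int main() {
    int counter = 0;

    // Unspecified: the two operands of '-' may be evaluated in either
    // order, so this expression can legitimately print 1 or -1 depending
    // on the compiler. Rule sets such as MISRA flag code that relies on
    // evaluation order.
    int result = sideEffect(counter) - sideEffect(counter);
    std::cout << result << '\n';

    // Implementation-defined: the width of 'int' varies across platforms.
    // Fixed-width types make the intent explicit and portable.
    std::int32_t portableValue = 100000;
    std::cout << portableValue << '\n';
    return 0;
}
```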

Typically a programming standard contains a large number of rules and guidelines. However, a relatively new coding standard, titled 'The Power of 10: Rules for Developing Safety-Critical Code', has been devised by the Jet Propulsion Laboratory (JPL) and is restricted to only 10 verifiable coding rules. JPL recognizes the importance of tool-based checking for large and complicated applications, but believes that for a coding standard to be understood and followed, it should not contain too many rules. The theory is that a small, carefully selected rule set, which generally omits style preferences, is more likely to be enforced and will still detect many of the causes leading to software defects.
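In the same spirit, the sketch below loosely paraphrases two of the ideas behind such minimal rule sets, fixed loop bounds and liberal use of assertions; the function and constant names are illustrative and are not taken from the JPL standard.

```cpp
#include <cassert>
#include <cstddef>

// Every loop has a statically known upper bound, so a runaway loop
// cannot occur; the assertions document and check the preconditions.
constexpr std::size_t kMaxSamples = 64;

int sumSamples(const int* samples, std::size_t count) {
    assert(samples != nullptr);
    assert(count <= kMaxSamples);   // bounded input

    int total = 0;
    for (std::size_t i = 0; i < count && i < kMaxSamples; ++i) {
        total += samples[i];
    }
    return total;
}

int main() {
    int data[4] = {1, 2, 3, 4};
    return sumSamples(data, 4) == 10 ? 0 : 1;
}
```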

Static analysis is a verification activity that involves lexical and syntactic analysis of the source code of a single file or a complete system. Programming standards, which may be either rules or guidelines, are applied to the code and any violations of those standards are reported.

Programming rules and guidelines are typically classified by the aspect of perceived software quality they address:

Portability: These rules highlight programming constructs that vary across different compilers.

Dependability: These rules expose unsafe code likely to impact performance or reliability.

Testability: These rules detect features that cause testing to be more difficult.

Maintainability: These rules detect features that are difficult to understand and that would impact updates or revisions, such as redundancy and the reuse of identifier names (see the sketch after this list).

Complexity: These rules highlight complex code so that it can be simplified, or so that greater caution can be exercised when modifying it.

Style: These rules ensure that code follows a consistent style and adheres to recognized good programming practices.
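As a hedged example of the kind of construct flagged under the maintainability heading, the sketch below deliberately reuses an identifier name; the function is purely illustrative.

```cpp
#include <iostream>

// The inner 'limit' shadows the outer one; the code compiles and runs,
// but a maintainer can easily read or modify the wrong variable.
// Static analysis tools typically report this kind of identifier reuse.
int clampToLimit(int value) {
    int limit = 100;
    if (value > 50) {
        int limit = 50;          // reuse of the name 'limit' (flagged)
        if (value > limit) {
            value = limit;
        }
    }
    return (value > limit) ? limit : value;
}

int main() {
    std::cout << clampToLimit(75) << '\n';   // prints 50
    return 0;
}
```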

Verification should also detect as many program faults as possible, since program faults lead to execution failure in the form of incorrect program outputs, abnormal termination, or indeed a lack of termination. However, software defects form a wider class of problems that includes faults, so removing all defects also removes the faults among them.

Dynamic analysis and unit testing add further confidence that software is correct. Here, the code is executed and the results are interpreted to provide metrics for code coverage and for expected versus actual results. At the end of the implementation phase, software units are integrated and tested as sub-systems and as the full system. Once integration testing is complete and the product is ready for delivery to the customer, the next and final phase is acceptance testing.
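A minimal sketch of the expected-versus-actual comparison at the heart of unit testing, assuming a hypothetical conversion routine; real test harnesses and coverage tools automate this bookkeeping and report which paths were exercised.

```cpp
#include <iostream>

// Hypothetical unit under test: converts tenths of a degree Celsius
// to tenths of a degree Fahrenheit using integer arithmetic.
int celsiusTenthsToFahrenheitTenths(int c) {
    return (c * 9) / 5 + 320;
}

// Tiny test driver comparing expected and actual results.
struct TestCase { int input; int expected; };

int main() {
    const TestCase cases[] = {
        {0, 320},      // 0.0 C   -> 32.0 F
        {1000, 2120},  // 100.0 C -> 212.0 F
        {-400, -400},  // -40.0 C -> -40.0 F
    };

    int failures = 0;
    for (const TestCase& tc : cases) {
        int actual = celsiusTenthsToFahrenheitTenths(tc.input);
        if (actual != tc.expected) {
            std::cout << "FAIL: input " << tc.input
                      << " expected " << tc.expected
                      << " actual " << actual << '\n';
            ++failures;
        }
    }
    std::cout << (failures == 0 ? "all tests passed" : "tests failed") << '\n';
    return failures;
}
```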

Acceptance testing is a formal testing process conducted under the direction of the software users to determine if the operational software system meets their needs as defined in the requirements. Each acceptance test attempts to test the functionality as required by the user. These tests are different from unit tests in that unit tests are modeled and written by the developer of each module, while the acceptance test is modeled, and possibly even written, with or by the customer.

During software development, formal requirements specification, static analysis, dynamic analysis, and testing strategies all contribute to software quality and fault reduction. Once faults are minimized, fault handling at run-time protects software from unforeseen events that can occur during the operation of complex software applications.
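As a hedged sketch of run-time fault handling, the routine below checks a return value and a plausibility range and falls back to a safe default; the device access and limits are hypothetical.

```cpp
#include <iostream>

// Hypothetical raw read that can fail; a real driver call would go here.
bool readRawSensor(int& raw) {
    raw = 512;       // placeholder value for illustration
    return true;
}

// Run-time fault handling: validate the result and fall back to a
// known-safe default instead of passing an out-of-range value onward.
int readTemperatureOrDefault(int safeDefault) {
    int raw = 0;
    if (!readRawSensor(raw)) {
        std::cerr << "sensor read failed, using default\n";
        return safeDefault;
    }
    if (raw < 0 || raw > 1023) {           // plausibility check
        std::cerr << "sensor value out of range, using default\n";
        return safeDefault;
    }
    return raw;
}

int main() {
    std::cout << readTemperatureOrDefault(25) << '\n';
    return 0;
}
```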

Software quality is improved with adherence to a software development process and prescribed design and coding standards. Tools help immensely with automatically checking against those standards.

However, tool qualification is required to ensure compliance. Make sure your chosen tool can detect an example of each defect, or can provide evidence that it performs the relevant activity or process.

Paul Humphreys is a software engineer with LDRA Ltd.
