Adopting aerospace development and verification standards: Part 1 – A coding standards survey - Embedded.com

An ever-increasing reliance on software control and the nature of the applications has led many companies to undertake safety-related analysis and testing. Consider the antilock braking and traction control of today’s automobile. These safety systems are each managed by software running on a networked set of processors; the failure of any of these systems sparks major safety concerns that could lead to recalls and lawsuits.

Companies concerned about safety in their products are looking outside their own market sector for best practice approaches, techniques, and standards. Examples of such industry crossover have been seen in the automotive and avionics industries with the adoption of elements of the DO-178C standard by automotive and a similar adoption of the MISRA [3] standards by avionics.

Historical background: DO-178C
In the early 1980s the rapid increase in the deployment of software in airborne systems and equipment resulted in a need for industry guidelines that satisfied airworthiness requirements. Document DO-178, “Software Considerations in Airborne Systems and Equipment Certification,” was written by RTCA [1] and EUROCAE [2] to meet this need.

In 1992, a revised version, DO-178B, was published. DO-178B went on to become the defining standard by which aerospace companies worldwide develop systems and software. Given the success of that standard, it was no surprise when its 2011 successor, DO-178C, adhered to the same principles. In addition to clarifying the interpretation of previous guidelines, the primary changes concern contemporary development techniques and address the use of formal methods, model-based development, object-oriented technologies, and software tools.

DO-178C is primarily a process-oriented document. The standard defines objectives for each process and outlines a means of satisfying the objectives. A system safety assessment process determines the significance of failure conditions in the system and its software components. (The more significant the system, the more severe the consequences should it fail.) This safety risk assessment forms the basis for establishing the infamous A-E software levels, which correlate the level of effort required to show adherence to certification requirements with the extent of risk associated with a system.

DO-178C Software Levels. Section 2.3.3 of the DO-178C standard defines the software levels, of which Level A is the most critical: “Level A: Software whose anomalous behavior, as shown by the system safety assessment process, would cause or contribute to a failure of system function resulting in a catastrophic failure condition for the aircraft.”

This type of safety analysis is becoming ever more applicable to automobiles, nuclear power plants, MRI scanners, and financial systems — in fact any system where failure has major implications.

The challenges of more stringent safety testing standards
In adopting out-of-sector quality and testing standards, new and unfamiliar development and testing techniques need to be implemented, potentially including:

  • Conformance to a set of coding standards, such as MISRA C [4] or JSF AV C++ [5], becomes a mandatory requirement for the software, leading companies to adopt tools that automate code checking.
  • Formal unit testing that demonstrates requirements are satisfied as they are incrementally implemented. This is often used alongside informal debugging.
  • Code coverage analysis that validates the effectiveness of testing and exposes code that has never been exercised.
  • Full code coverage analysis down to the object code level of critical software components.
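The difference between merely executing statements and genuinely exercising code is easy to miss, so a minimal sketch may help. The function below is invented for illustration; it is not drawn from any standard:

```c
/* A single test case, clamp_positive(-1), executes every statement in
 * this function, so statement coverage reports 100%. Yet the false
 * outcome of the `if` decision (a >= 0) is never taken. Branch
 * coverage analysis exposes that untested path. */
int clamp_positive(int a)
{
    int r = a;
    if (a < 0) {
        r = 0;       /* executed by the a == -1 test */
    }
    return r;
}
```

For Level A software, DO-178C goes further still, requiring modified condition/decision coverage (MC/DC) of each decision.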

Coding standards
Restrictions on the use of certain language features have been around almost as long as high-level languages themselves. Global variables have been frowned on since the introduction of subroutines, and programmers using “goto” statements had better have a good reason for it. Such guidelines, while restrictive, help to avoid the runtime errors that misuse of these features is likely to cause.
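The rationale behind the global-variable restriction can be sketched in a few lines of C; the function names here are invented for illustration:

```c
/* Discouraged: the running total is hidden coupling. Any function in
 * the program may read or modify `total`, so this routine's effect
 * cannot be understood, or tested, in isolation. */
static int total;
void add_to_total(int x)
{
    total += x;
}

/* Preferred: all state flows through parameters and the return value,
 * so the function is self-contained and trivially testable. */
int add(int accumulated, int x)
{
    return accumulated + x;
}
```

Coding standards such as MISRA turn this kind of informal folklore into checkable rules.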

For many reasons, such as its inherent flexibility and its potential for portability across a wide range of hardware, C became the language of choice for the development of real-time embedded applications within the automotive industry. C has most of the features a software development team could wish for and, in the right hands, it can be used to write well laid out, structured, and expressive code. In the wrong hands, however, its flexibility can result in extremely hard to understand code. Clearly, this is not acceptable for a safety-related system.

Recognizing the widespread support for C among compilers and tools, MISRA published their C standard to promote the use of “safe C” in the UK automotive industry. The standard quickly found a home in other sectors, and MISRA now seeks to promote the safest possible use of the language in any application. The guidelines encourage good programming practice, focusing on coding rules, complexity measurement, and code coverage to ensure well-designed, adequately tested code which ultimately results in safer applications. The standard’s success has led to improved versions, first MISRA C:2004 and then MISRA C:2012.

The standard is extremely flexible, allowing particular rules to be omitted with appropriate justification. The 2012 version does, however, include a small set of mandatory rules for which no such deviation is permitted.
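To give a sense of the standard and its aim, consider one such rule. Our choice of example, MISRA C:2012 Rule 9.1, is paraphrased rather than quoted: the value of an object with automatic storage duration shall not be read before it has been set.

```c
/* Non-compliant sketch: when flag is zero, `result` is read before it
 * has ever been assigned, so the return value is undefined. */
int status_noncompliant(int flag)
{
    int result;
    if (flag != 0) {
        result = 1;
    }
    return result;    /* read-before-set when flag == 0 */
}

/* Compliant: `result` is assigned on every path before it is read. */
int status_compliant(int flag)
{
    int result = 0;
    if (flag != 0) {
        result = 1;
    }
    return result;
}
```

A static analysis tool can flag the first form automatically, which is precisely the kind of defect the mandatory rules exist to eliminate.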

With the increased dependency on software in complex systems, more programming languages are being used. C++ is expected to become a major player in future software projects across all industries. However, the MISRA C standard does not comment on the suitability of C++ for use in safety-related systems, and this presented a problem to Lockheed Martin when it decided to standardize on both C and C++ for the key avionics control systems of the Joint Strike Fighter (JSF). Lockheed Martin decided to build on the MISRA C guidelines by adding a set of rules specific to the appropriate use of C++ language features (e.g., templates, inheritance). The result was the JSF Air Vehicle (AV) C++ coding standard, which ensures that all compliant software is developed to a consistent style, portable to other architectures, free from common errors, and easily understandable and maintainable by any team member. The JSF AV standard was a major step forward in the use of C++ within safety-related systems.

However, the opinion across industry and relevant organizations was that JSF AV was best for JSF and not ideal as a general solution. To gain broader applicability, MISRA developed a C++ standard in 2008, building on the JSF AV foundation. With well-formed, peer-reviewed standards for both C and C++ managed by one organization, software teams are now able to rely on cohesive and uniform guidelines for the principal embedded development languages.

This consistency of standard has helped the industry transition from manual compliance checking – which is tedious, error prone, and time consuming – to automated methods. Automated methods have the added benefit of being able to demonstrate to a certification authority that the source code is 100% conformant. The use of tools to automate the code review process has created a fast, repeatable process with quality reports that enhance the efforts of the development team.

The MISRA programming guidelines, when used within the process framework of DO-178C, provide an extended development model that addresses issues of both quality and reliability. Less obviously, projects adopting this joint approach also experience significant cost savings, and that can only benefit non-aerospace industries as the need for quality increases.

Functional and unit testing
Software teams in any industry test components, but testing is often the final phase of development and is inevitably squeezed as earlier phases overrun their time allotment and the delivery date must still be met. As a result, it is fairly common in non-aerospace industries to deliver substandard software components simply because there isn’t time to complete sufficient testing.

DO-178C Software Testing Process. Section 6.4 of the DO-178C standard defines the objectives of testing: “Software testing is used to demonstrate that the software satisfies its requirements and to demonstrate with a high degree of confidence that errors which could lead to unacceptable failure conditions, as determined by the system safety assessment process, have been removed.”

So a major challenge for companies is to improve their overall software development process to allow sufficient time for testing. New development techniques, such as the unified process, agile development, and extreme programming, attempt to formalize the development process and solve many of the classic problems and failures. In contrast, even when time is available, most companies simply perform straightforward functional testing at the system and/or subsystem levels. This method is a highly procedural part of a top-down process of system validation.

Functional testing (Figure 1) is only as good as the requirements against which the tests have been developed. Studies such as the Chaos Report [6] repeatedly illustrate that a huge number of software projects fail (meaning they had cost or time overruns or didn’t fully meet the user’s needs), and a major reason for failure is problems with requirements, whether as a result of overwhelming complexity, ambiguous and imprecise definition, or scope creep across the life of the project. The other disadvantage to using only functional testing is the obvious precondition that the system (or subsystem) under test must be coded and functional before testing can begin.

To gain the benefits of early testing, many aerospace companies now employ iterative development processes, a practice that has contributed to an improvement in failure rates in recent years. Iterative development focuses on system subsets and is modular in design. An appropriate technique, typically called unit testing, is a bottom-up process that focuses on system internals, such as classes and individual functions. Not only does unit testing facilitate early-stage or prototype development, it can also be used to cover the paths and branches in the software that may be unpredictable or otherwise impractical to exercise from a functional testing perspective (e.g., error handlers).

By definition, unit testing aims to verify a small portion of the whole system, an incomplete portion that cannot execute independently. Therefore, test drivers and harnesses are required to deliver input values, record outputs, stub missing functionality, and build an executable environment encompassing everything. Immediately, we begin to understand why unit testing is under-used by up to 90% of software engineers:

  • There is a huge overhead associated with manually creating test scripts as well as maintaining these elements whenever there are changes to requirements, design or code.
  • The test scripts, harnesses and drivers are also software and are thus prone to the same failings of any manually created software.
  • The component to be tested has been implemented using language features, such as data hiding, which make it very difficult to provide input values or verify outputs.
  • The lack of a unified and structured method means that techniques are applied on a project-by-project basis with little opportunity for reuse via industry-wide standards.
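The driver-and-stub arrangement those points describe can be sketched in a few lines of C; every name here is hypothetical:

```c
/* Unit under test: depends on a hardware reading that is not
 * available on the host machine. */
int read_sensor(void);  /* normally supplied by the target platform */

int over_limit(int limit)
{
    return read_sensor() > limit;
}

/* Stub: stands in for the missing target functionality, returning a
 * value chosen by the test driver. */
static int stubbed_reading;
int read_sensor(void)
{
    return stubbed_reading;
}

/* Driver: delivers inputs, invokes the unit, and checks outputs.
 * Returns the number of failed checks; a host-side test main() would
 * call it and report the result. */
int run_unit_tests(void)
{
    int failures = 0;
    stubbed_reading = 50;
    failures += (over_limit(40) != 1);  /* 50 > 40: expect true  */
    failures += (over_limit(60) != 0);  /* 50 < 60: expect false */
    return failures;
}
```

Writing and maintaining even this much scaffolding by hand for every unit is exactly the overhead described above, which is why tool-generated harnesses are attractive.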

Figure 1: Typical functional testing harness

Many of the problems associated with the implementation of traditional manual unit testing processes are concerned with the high skill levels required and the considerable additional overhead that such techniques can impose.

Automation of these processes with the use of tools enables the techniques to be made more standardized yet intuitive – highly desirable goals with potential benefits of increased efficiency and reduced costs. Automation also permits the development of repeatable processes and the standardization of testing practices. Often tools capture and store complete test information that can be held in a configuration management system with the corresponding source code, then retrieved and imported at a later date for instant regression testing.

What’s gained from functional and unit testing is proof that software satisfies its requirements and that errors have been removed. What we don’t yet know is how complete the testing effort has been. That’s where source and object code verification, the subject of Part 2 in this series, comes in.

References

1. RTCA Inc. (originally the Radio Technical Commission for Aeronautics) is a private, not-for-profit corporation that develops consensus-based recommendations regarding communications, navigation, surveillance, and air traffic management (CNS/ATM) system issues.

2. EUROCAE, the European Organization for Civil Aviation Equipment, is a nonprofit organization which provides a European forum for resolving technical problems with electronic equipment for air transport.

3. The Motor Industry Software Reliability Association (MISRA) is a collaboration between vehicle manufacturers, component suppliers, and engineering consultants which seeks to promote best practice in developing safety-related electronic systems in road vehicles.

4. “Guidelines for the use of the C language in critical systems”, first published by MISRA Limited in October 2004 and again in March 2013 after comprehensive revision. These standards are complete reworks of the original set published in 1998.

5. “Joint Strike Fighter (JSF) Air Vehicle (AV) C++ Coding Standards for the System Development and Demonstration Program”, document number 2RDU00001 Rev D, June 2007. These standards build on relevant portions of the MISRA C standards with an additional set of rules specific to the appropriate use of C++ language features (e.g., inheritance, templates, namespaces) in safety-critical environments.

6. The Chaos Report from the Standish Group has been regularly published since 1994. The 2006 report revealed that 35% of software projects could be categorised as successful, meaning they were completed on time, on budget, and met user requirements. This is a marked improvement over 1994, when only 16.2% of projects were labeled as successful.

7. The Orion Crew Exploration Vehicle (CEV) is a spacecraft currently under development by NASA; the contract for its design and construction was awarded to Lockheed Martin in August 2006.

Mark Pitchford has over 25 years’ experience in software development for engineering applications. He has worked on many significant industrial and commercial projects in development and management, both in the UK and internationally, including extended periods in Canada and Australia. Since 2001, he has specialized in software test, and works throughout Europe and beyond as a Field Applications Engineer with LDRA Ltd.

Bill St. Clair is currently Director, US Operations for LDRA Technology and LDRA Certification Services and has more than 25 years in embedded software development and management. He has worked in the avionics, defense, space, communications, industrial controls, and commercial industries as a developer, verification engineer, manager, and company founder. He holds a U.S. patent for a portable storage system and is inventor of a patent-pending embedded requirements verification system. Bill’s leadership was instrumental in adapting requirements traceability into LDRA’s verification process.
