Software Standards Compliance 101: Matching system target failure rates to development rigor


Editor's Note: After a series of fatal accidents, a formal investigation resulted in a recommendations on how to create safety-critical software solutions using a phased approach (see “About this series” at the end of this article). The first phase, Perform a system safety or security assessment, was discussed in “Software Standards Compliance 101: First assess your system’s risk,” the first article in this series. This article discusses Phase 2, Determine a target system failure rate, and Phase 3, Use the system target failure rate to determine the appropriate level of development rigor, with particular attention on how the IEC 62304 standard for Medical device software – Software life cycle processes approaches these objectives. Subsequent articles in this nine-piece series will address the other phases. 

In 2014, the FDA released a Medical Device Report that analyzed medical device recalls between 2003 and 2012. This report revealed that recalls increased by 97% during this period, naming device software design as a leading cause. To counter this trend, the International Electrotechnical Commission (IEC) introduced the IEC 62304 standard, Medical device software – Software life cycle processes, in 2006. This standard codified the current state of practice in developing software for medical devices. In doing so, IEC 62304 established a common framework for medical device software life cycle processes that today is necessary for the safe design and maintenance of such applications.

Rather than being an all-encompassing standard, IEC 62304 works in harmony with other international standards. In particular, it is assumed that medical device software is developed and maintained within a quality management system and a risk management system, the latter of which is specifically addressed in the standard by reference to ISO 14971, Medical devices — Application of risk management to medical devices.

Using Risk to Define Medical Device Classification
Like many of the industry standards that impact the development of software for safety-critical systems, the level of rigor applied to the software development of medical devices is driven by a risk assessment. This assessment determines how likely it is that a patient will be exposed to a particular hazard. Consider, for example, a radiotherapy machine. The radiation that these machines generate can be extremely intense, causing severe injury or death if overexposure occurs. The risk that a patient or therapist will be exposed to this hazard is determined by the severity of the potential injury that may be caused by the hazard in combination with the probability that the exposure will occur.

When building any medical device that includes software, developers must first determine whether software is a contributing factor to a hazard, either directly or indirectly. This is achieved by performing an ISO 14971 compliant risk assessment of the key functionality provided by the software. Simply put, this is a measure of whether the software system can contribute to a hazard that can affect the patient, operator or anyone else. Software developers can then classify software components as well as the software as a whole, based on the level of hazard presented to the patient and/or the device’s users. IEC 62304 defines three software safety classifications:

  • Class A: No injury or damage to health is possible
  • Class B: Non-serious injury is possible
  • Class C: Death or serious injury is possible

There is no one-size-fits-all process for developing software for a medical device; the Software Safety Classification merely defines the level of rigor that must be applied. This makes logical and practical sense; there is little business justification to exhaustively test Class A software, but the highest levels of rigor must be applied for Class C devices.
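
As a rough illustration, this mapping from worst-case harm to safety class can be sketched as a simple lookup (the function and harm labels below are hypothetical; a real classification follows from the ISO 14971 risk assessment, not a table lookup):

```python
# Illustrative sketch: map the worst-case harm a software failure could
# contribute to onto IEC 62304's three software safety classes.
def safety_class(worst_case_harm: str) -> str:
    return {
        "none": "A",                      # no injury or damage to health possible
        "non-serious injury": "B",        # non-serious injury possible
        "serious injury or death": "C",   # death or serious injury possible
    }[worst_case_harm]

print(safety_class("serious injury or death"))  # -> C
```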

Control Development Rigor with System and Safety Classification
One of the best examples of how rigor is applied throughout the software development life cycle is software testing. It is a commonly accepted principle that testing software 100% simply isn’t possible. For example, in a 2004 lecture for the An Introduction to Computers and Programming course at MIT, Prof. I. K. Lundqvist described a simple program containing five decision points that are looped through 20 times. To exhaustively test this program would require the execution of all possible execution paths (i.e., 5²⁰ in total). If tests were written to test one of these execution paths every millisecond, then the total time required to test this program would be over 3,000 years!
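
The arithmetic behind that figure is easy to reproduce, assuming the one-millisecond-per-path rate quoted above:

```python
# Reproduce the path-explosion estimate: 5 decision points looped 20 times.
paths = 5 ** 20                          # number of distinct execution paths
seconds = paths / 1000                   # one path tested per millisecond
years = seconds / (60 * 60 * 24 * 365)   # convert seconds to years
print(f"{paths:,} paths, roughly {years:,.0f} years to test exhaustively")
```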

Clearly, exhaustive testing of software is not possible, but how much testing is enough? The answer lies in the checks and measures that can be used to assess overall test effectiveness, starting at the software requirements definition phase.

Starting with requirements capture, higher levels of Software Safety Classification require a greater number of tasks and more detailed documentation. The need for every software component must be defined during the Software Development Planning phase and then traced through all subsequent stages of the development life cycle, including:

  • Software requirements analysis
  • Software architectural design
  • Software detailed design
  • Software unit implementation and verification
  • Software integration and integration testing
  • Software system testing
  • Software release

At each phase, the level of development rigor applied is defined by the Software Safety Classification. For example, coverage analysis is used in a variety of industries as a measure of test effectiveness. However, there are different levels of coverage analysis, from basic Statement Coverage (SC) to the in-depth Modified Condition/Decision Coverage (MC/DC). For Class A software systems, SC is likely appropriate, but for Class C software systems, MC/DC is probably the better choice. However, this level of rigor requires much more effort in defining, executing and documenting the test process and results.
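
The difference in effort shows up even on a toy decision. In the sketch below (the dosing condition is hypothetical, not from the standard), a single test executing every statement achieves statement coverage of a two-condition decision, while MC/DC needs one test per condition plus one — three here — to show each condition independently affecting the outcome:

```python
# Hypothetical two-condition decision from a device's dosing logic.
def may_deliver(interlock_closed: bool, dose_ok: bool) -> bool:
    return interlock_closed and dose_ok

# Statement coverage: one test that reaches the return statement suffices.
sc_tests = [(True, True)]

# MC/DC: N+1 tests for N conditions. Comparing (T,T) with (F,T) shows the
# interlock condition flips the outcome on its own; (T,T) vs (T,F) does the
# same for the dose check.
mcdc_tests = [(True, True), (False, True), (True, False)]

for interlock, dose in mcdc_tests:
    print(interlock, dose, "->", may_deliver(interlock, dose))
```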

How Software of Unknown Pedigree (SOUP) is Addressed by the Standard
All of the software in a medical device must be developed with the same rigor. For in-house software, this is straightforward, but one of the major areas of concern for IEC 62304 is how to treat Software Of Unknown Pedigree, or SOUP. Typical examples of SOUP components are the many Free Open Source Software (FOSS) components that are available, as well as Commercial Off The Shelf (COTS) software components. For a variety of reasons, projects are often under pressure to include SOUP components in their high integrity systems. The problem is, the development rigor used to develop a SOUP component cannot be guaranteed, and/or the SOUP components may have unknown or no safety-related properties.

Great care must be taken in choosing a SOUP component to ensure that its related risks are taken into account and that the safety objectives of the system under development are met. At the most basic level, the safety-involved parts of the system must be architecturally insulated from the SOUP and its potential undesirable effects. However, the best bet is to use SOUP components from vendors who are able to share their detailed processes and complete development histories to help substantiate claims about the quality of the software as a whole.
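
One way that insulation can be sketched is as a validating wrapper around the SOUP call, so that implausible inputs or outputs never reach the safety-involved code. All names and limits below are hypothetical, chosen only to illustrate the pattern:

```python
# Stand-in for a third-party (SOUP) routine whose internals are unknown.
def soup_compute_dose(weight_kg: float) -> float:
    return weight_kg * 0.5  # placeholder behavior

MAX_DOSE_MG = 50.0  # hypothetical safe envelope for the output

def safe_compute_dose(weight_kg: float) -> float:
    # Validate inputs before handing them to the SOUP component.
    if not (0.0 < weight_kg < 300.0):
        raise ValueError("weight out of validated range")
    dose = soup_compute_dose(weight_kg)
    # Plausibility-check the SOUP output before the system acts on it.
    if not (0.0 < dose <= MAX_DOSE_MG):
        raise ValueError("SOUP returned implausible dose")
    return dose
```

The wrapper itself is small enough to develop and verify at full rigor, which is the point of the architectural insulation.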


Ensuring that the Defined Rigor is Adhered to Throughout the Software Development Process
Once the software safety classification is determined, be sure to choose the right tools at the start. This ensures that the same level of rigor is applied throughout the process. All medical device software must undergo requirement management and traceability analysis throughout the software development life cycle. An established set of verifiable requirements is essential to define what is to be built, determine that the medical device software exhibits acceptable behavior, and demonstrate that the completed medical device software is ready for use.

IEC 62304 demands that all software requirements be identified in such a way that demonstrates traceability between the requirement and software system testing, enabling developers to trace the risk control measures to the software requirements. Requirements traceability is widely accepted as a development best practice to ensure that all requirements are implemented and that all development artefacts can be traced back to one or more requirements.
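
At its simplest, that traceability check amounts to a set comparison between the requirements and the tests that claim to cover them. A minimal sketch, with invented requirement and test identifiers:

```python
# Hypothetical requirements and the requirements each system test traces to.
requirements = {
    "REQ-001": "Pump stops on occlusion alarm",
    "REQ-002": "Dose limited to prescribed maximum",
}
test_traces = {
    "TC-010": ["REQ-001"],
    "TC-011": [],  # a test that traces to nothing
}

# Requirements with no covering test, and tests with no parent requirement.
covered = {req for traces in test_traces.values() for req in traces}
untraced_reqs = set(requirements) - covered
orphan_tests = sorted(t for t, traces in test_traces.items() if not traces)

print("Untraced requirements:", sorted(untraced_reqs))
print("Orphan tests:", orphan_tests)
```

Both result sets should be empty before release; anything left in either is a gap in the traceability the standard requires.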

Figure 1. The RTM sits at the heart of the project, defining and describing the interaction between the design, code, test and verification stages of development.

Figure 2 demonstrates how project managers can view and create project requirements, and track them back to their original sources as enhancement requests. Developers can review requirements (and use cases if present) while they are designing software. Testers can get a jumpstart on testing activities by viewing project requirements directly from their test management environment. Administrators can include requirements when creating project baselines. Executives can receive “dashboard” views of project states, gaining at-a-glance information on progress.

Figure 2: The Requirements Traceability Matrix (RTM) plays a central role in a development life cycle model. Artefacts at all stages of development are linked directly to the requirements matrix. Changes within each phase automatically update the RTM so that overall development progress is evident from design through coding and test.

Applying the IEC 62304 standard to the development of medical devices is voluntary, but helps to streamline the certification process; the FDA recognizes compliance with the IEC 62304 standard as fulfillment of the documentary evidence required for achieving certification. This is of particular importance where the use of SOUP is concerned. IEC 62304 ensures that an appropriate level of rigor is applied throughout the software development process, including SOUP components.

About this series
In the mid-1990s, a formal investigation was conducted into a series of fatal accidents with the Therac-25 radiotherapy machine. Led by Nancy Leveson of the University of Washington, the investigation resulted in a set of recommendations on how to create safety-critical software solutions in an objective manner. Since then, industries as disparate as aerospace, automotive, and industrial control have encapsulated the practices and processes for creating safety- and/or security-critical systems in an objective manner into industry standards. Although subtly different in wording and emphasis, the standards across industries follow a similar approach to ensuring the development of safe and/or secure systems. This series discusses the ten phases defined in this common approach (follow the links to review earlier installments of this series):

  1. Perform a system safety or security assessment
  2. Determine a target system failure rate
  3. Use the system target failure rate to determine the appropriate level of development rigor
  4. Use a formal requirements capture process
  5. Create software that adheres to an appropriate coding standard
  6. Trace all code back to its source requirements
  7. Develop all software and system test cases based on requirements
  8. Trace test cases to requirements
  9. Use coverage analysis to assess test completeness against both requirements and code
  10. For certification, collect and collate the process artifacts required to demonstrate that an appropriate level of rigor has been maintained.
