Software Standards Compliance 101: Matching system target failure rates to development rigor

LDRA_jay, August 14, 2015

Editor's Note: After a series of fatal accidents, a formal investigation resulted in recommendations on how to create safety-critical software solutions using a phased approach (see "About this series" at the end of this article). The first phase, Perform a system safety or security assessment, was discussed in "Software Standards Compliance 101: First assess your system’s risk," the first article in this series. This article discusses Phase 2, Determine a target system failure rate, and Phase 3, Use the system target failure rate to determine the appropriate level of development rigor, with particular attention to how the IEC 62304 standard, Medical device software – Software life cycle processes, approaches these objectives. Subsequent articles in this nine-part series will address the other phases.

In 2014, the FDA released a Medical Device Report that analyzed medical device recalls between 2003 and 2012. This report revealed that recalls increased by 97% during this period, naming device software design as a leading cause. To counter this trend, the International Electrotechnical Commission (IEC) introduced the IEC 62304 standard, Medical device software – Software life cycle processes, in 2006. This standard codified the current state of practice in developing software for medical devices. In doing so, IEC 62304 established a common framework for medical device software life cycle processes that today is necessary for the safe design and maintenance of such applications.

Rather than being an all-encompassing standard, IEC 62304 works in harmony with other international standards. In particular, it is assumed that medical device software is developed and maintained within a quality management system and a risk management system, the latter of which is specifically addressed in the standard by reference to ISO 14971, Medical devices – Application of risk management to medical devices.

Using Risk to Define Medical Device Classification
As with many of the industry standards that govern software development for safety-critical systems, the level of rigor applied to the development of medical device software is driven by a risk assessment. This assessment determines how likely it is that a patient will be exposed to a particular hazard. Consider, for example, a radiotherapy machine. The radiation that these machines generate can be extremely intense, causing severe injury or death if overexposure occurs. The risk associated with this hazard is determined by the severity of the potential injury it may cause, in combination with the probability that a patient or therapist will be exposed to it.
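To make the severity/probability combination concrete, the sketch below shows one common way to express it: a small risk matrix. The category names, the 3×3 layout, and the resulting risk levels are illustrative assumptions for this example only, not values prescribed by ISO 14971 or IEC 62304.

```c
/* Illustrative sketch only: a hypothetical 3x3 risk matrix combining
 * hazard severity with the probability of exposure. The categories and
 * matrix contents are assumptions for this example, not values taken
 * from ISO 14971. */

typedef enum { SEV_MINOR, SEV_SERIOUS, SEV_CATASTROPHIC } severity_t;
typedef enum { PROB_REMOTE, PROB_OCCASIONAL, PROB_FREQUENT } probability_t;
typedef enum { RISK_LOW, RISK_MEDIUM, RISK_HIGH } risk_level_t;

risk_level_t assess_risk(severity_t sev, probability_t prob)
{
    /* Rows: severity of the potential injury; columns: probability of
     * exposure to the hazard. */
    static const risk_level_t matrix[3][3] = {
        /*                 REMOTE        OCCASIONAL    FREQUENT    */
        /* MINOR        */ { RISK_LOW,    RISK_LOW,     RISK_MEDIUM },
        /* SERIOUS      */ { RISK_LOW,    RISK_MEDIUM,  RISK_HIGH   },
        /* CATASTROPHIC */ { RISK_MEDIUM, RISK_HIGH,    RISK_HIGH   },
    };
    return matrix[sev][prob];
}
```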

When building any medical device that includes software, developers must first determine whether software is a contributing factor to a hazard, either directly or indirectly. This is achieved by performing an ISO 14971-compliant risk assessment of the key functionality provided by the software. Simply put, this is a measure of whether the software system can contribute to a hazard that can affect the patient, the operator, or anyone else. Software developers can then classify individual software components, as well as the software system as a whole, based on the level of hazard presented to the patient and/or the device’s users. IEC 62304 defines three software safety classifications:

  • Class A: No injury or damage to health is possible
  • Class B: Non-serious injury is possible
  • Class C: Death or serious injury is possible

There is no one-size-fits-all process for developing software for a medical device; the Software Safety Classification merely defines the level of rigor that must be applied. This makes logical and practical sense: there is little business justification for exhaustively testing Class A software, but the highest levels of rigor must be applied to Class C software.
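The classification itself follows directly from the worst credible harm that a software failure could contribute to. The minimal sketch below captures that mapping; the identifier names and the helper function are invented for illustration and are not part of the standard.

```c
/* Illustrative sketch: mapping the worst credible harm that a software
 * failure could contribute to onto the three IEC 62304 software safety
 * classes. Identifier names are assumptions made for this example. */

typedef enum {
    HARM_NONE,        /* no injury or damage to health possible */
    HARM_NON_SERIOUS, /* non-serious injury possible            */
    HARM_SERIOUS      /* death or serious injury possible       */
} worst_credible_harm_t;

typedef enum { CLASS_A, CLASS_B, CLASS_C } safety_class_t;

safety_class_t classify_software_item(worst_credible_harm_t harm)
{
    switch (harm) {
    case HARM_NONE:        return CLASS_A;
    case HARM_NON_SERIOUS: return CLASS_B;
    default:               return CLASS_C;
    }
}
```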

Control Development Rigor with System and Safety Classification
One of the best examples of how rigor is applied throughout the software development life cycle is software testing. It is a commonly accepted principle that testing software 100% simply isn’t possible. For example, in one of his 2004 lectures for MIT's An Introduction to Computers and Programming course, Prof. I. K. Lundqvist described a simple program containing five decision points that are looped through 20 times. To exhaustively test this program would require the execution of all possible execution paths (5²⁰ in total, roughly 95 trillion). If one of these execution paths could be tested every millisecond, the total time required to test this program would be over 3,000 years!
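For a sense of scale, this back-of-the-envelope sketch reproduces the arithmetic behind that figure; the one-path-per-millisecond rate is carried over from the lecture's assumption.

```c
#include <stdio.h>
#include <stdint.h>

/* Back-of-the-envelope reproduction of the path-explosion arithmetic:
 * 5 decision points looped through 20 times give 5^20 execution paths. */
int main(void)
{
    uint64_t paths = 1;
    for (int i = 0; i < 20; i++) {
        paths *= 5;                     /* 5^20 = 95,367,431,640,625 */
    }

    /* Assume one path can be exercised every millisecond. */
    double seconds = (double)paths / 1000.0;
    double years   = seconds / (60.0 * 60.0 * 24.0 * 365.0);

    printf("paths: %llu, years to test: %.0f\n",
           (unsigned long long)paths, years);  /* roughly 3,000 years */
    return 0;
}
```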

Clearly, exhaustive testing of software is not possible, but how much testing is enough? The answer lies in the checks and measures that can be used to assess overall test effectiveness, starting at the software requirements definition phase.

Starting with requirements capture, higher levels of Software Safety Classification require a greater number of tasks and more detailed documentation. The need for every software component must be defined during the Software Development Planning phase and then traced through all of the subsequent stages of the development life cycle (a simple traceability sketch follows the list below), including:

  • Software requirements analysis
  • Software architectural design
  • Software detailed design
  • Software unit implementation and verification
  • Software integration and integration testing
  • Software system testing
  • Software release
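One lightweight way to picture the traceability the standard expects is a record that links each requirement to the design, implementation, and test artifacts that realize and verify it. The structure below is an illustrative sketch only; the field names and artifact identifiers are assumptions, not anything mandated by IEC 62304.

```c
/* Illustrative sketch of a traceability record linking a software
 * requirement to the artifacts produced at each life cycle stage.
 * Field names and identifiers are assumptions for this example only. */

typedef struct {
    const char *requirement_id;    /* e.g. "SRS-042"                  */
    const char *architecture_ref;  /* architectural design element    */
    const char *detailed_design;   /* detailed design section         */
    const char *source_unit;       /* implementing source file / unit */
    const char *unit_test;         /* unit verification test case     */
    const char *integration_test;  /* integration test case           */
    const char *system_test;       /* system test case                */
} trace_record_t;

/* Hypothetical example entry, echoing the radiotherapy scenario above. */
static const trace_record_t example_trace = {
    .requirement_id   = "SRS-042",
    .architecture_ref = "ARCH-7: dose-limit monitor",
    .detailed_design  = "DD-7.3",
    .source_unit      = "dose_monitor.c",
    .unit_test        = "UT-113",
    .integration_test = "IT-027",
    .system_test      = "ST-009",
};
```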

At each phase, the level of development rigor applied is defined by the Software Safety Classification. For example, coverage analysis is used in a variety of industries as a measure of test effectiveness. However, there are different levels of coverage analysis, from basic Statement Coverage (SC) to the in-depth Modified Condition/Decision Coverage (MC/DC). For Class A software systems, SC is likely appropriate, but for Class C software systems, MC/DC is probably the better choice; however, this level of rigor requires much more effort in defining, executing, and documenting the test process and results.
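The difference in effort is easiest to see on a small decision. In the hedged sketch below, a single test vector executes every statement in the function (100% statement coverage), whereas MC/DC requires a set of tests demonstrating that each condition independently affects the outcome of the decision. The function name and thresholds are invented for illustration.

```c
#include <stdbool.h>

/* Illustrative only: names and thresholds are invented for this sketch.
 * Any single test vector executes every statement below (100% SC), but
 * MC/DC of the decision (temp_ok && pressure_ok) requires tests showing
 * that each condition independently flips the outcome, e.g. the vectors
 * (true,true), (false,true) and (true,false). */
bool interlock_permits_exposure(int temperature_c, int pressure_kpa)
{
    bool temp_ok     = (temperature_c < 40);   /* invented threshold  */
    bool pressure_ok = (pressure_kpa  > 90);   /* invented threshold  */
    bool permit      = temp_ok && pressure_ok; /* decision under test */
    return permit;
}
```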

How Software of Unknown Pedigree (SOUP) is Addressed by the Standard
All of the software in a medical device must be developed with the same rigor. For in-house software, this is straightforward, but one of the major areas of concern for IEC 62304 is how to treat Software Of Unknown Pedigree, or SOUP. Typical examples of SOUP are the many Free Open Source Software (FOSS) components that are available, or a Commercial Off The Shelf (COTS) software component. For a variety of reasons, projects are often under pressure to include SOUP components in their high-integrity systems. The problem is that the rigor applied in developing a SOUP component cannot be guaranteed, and the component may have unknown or no safety-related properties.

Great care must be taken in choosing a SOUP component to ensure that its related risks are taken into account and the safety objectives of the system under development are met. At the most basic level, the safety-related parts of the system must be architecturally insulated from the SOUP and its potential undesirable effects. However, the best bet is to use SOUP components from vendors who are able to share their detailed processes and complete development histories to help substantiate claims about the quality of the software as a whole.
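One common way to realize that architectural insulation is a thin wrapper that validates everything crossing the boundary between the safety-related code and the SOUP component. The sketch below assumes a hypothetical SOUP routine, soup_compute_dose(), and an invented plausibility limit, purely for illustration.

```c
#include <stdbool.h>

/* Hypothetical SOUP interface, assumed for illustration only. */
extern double soup_compute_dose(double prescription_gy);

/* Illustrative wrapper insulating safety-related code from a SOUP
 * component: the SOUP result is range-checked before it is allowed to
 * influence the rest of the system, with a safe fallback otherwise.
 * The limit and names are assumptions made for this sketch. */
bool checked_compute_dose(double prescription_gy, double *dose_out)
{
    const double MAX_DOSE_GY = 2.0;   /* invented plausibility limit */

    double dose = soup_compute_dose(prescription_gy);

    if (dose < 0.0 || dose > MAX_DOSE_GY || dose != dose /* NaN */) {
        *dose_out = 0.0;              /* fail safe: deliver nothing  */
        return false;                 /* caller must raise an alarm  */
    }

    *dose_out = dose;
    return true;
}
```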
