Building Class III Medical Software apps on an Android Platform, Part 2: Developing an FDA-compliant framework

As noted in Part I of this series, many of our design decisions and the methodology adopted for developing an Android-based Class III medical software platform were based on the need to comply with the FDA guidance documents and external standards. Figure 13 shows the set of standards that drove the development methodology we used.

Figure 13: Standards driving the development process

Our development process for both production code and verification assets was based on the following essential elements:

Requirements. Every requirement is uniquely tagged.

Traceability. Every requirement must trace to at least one test case. Additional traceability between design elements and the requirements is desirable but not required.
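
A traceability rule like this lends itself to automated checking. The sketch below is illustrative only (the requirement tags, test-case IDs, and matrix format are invented, not the project's actual tooling); it shows how a script can flag any uniquely tagged requirement that lacks a covering test case:

```python
# Hypothetical traceability check: every tagged requirement must be
# covered by at least one test case. All IDs below are invented.
requirements = {"REQ-001", "REQ-002", "REQ-003"}

# Traceability matrix: test case ID -> set of requirement tags it verifies
trace_matrix = {
    "TC-010": {"REQ-001"},
    "TC-011": {"REQ-001", "REQ-002"},
    "TC-012": {"REQ-003"},
}

# Union of all requirements touched by any test case
covered = set().union(*trace_matrix.values())
untraced = requirements - covered

assert not untraced, f"Requirements without test coverage: {sorted(untraced)}"
print("All requirements trace to at least one test case")
```

Run as part of a build or release gate, such a check turns the traceability requirement from a documentation convention into an enforced invariant.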

Accountability. Reviews are documented and comments of the review meeting attendees are recorded so that how and when decisions were made and who made them can be traced.

Change management. All product modifications are recorded in the form of change requests or change orders that capture all aspects of that change’s life cycle, including independent verification of the change.

Risk-based development process. The degree of code verification coverage of a specific software component is determined by the component’s risk level. Standard risk analysis techniques such as FMEA and fault trees were applied.
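
In FMEA, each failure mode is typically scored on severity, occurrence, and detectability, and the product of the three gives a Risk Priority Number (RPN) that can gate verification depth. The sketch below is a hypothetical illustration of that idea; the 1-10 scales, thresholds, and coverage tiers are assumptions, not the project's actual criteria:

```python
# Hypothetical FMEA-style risk scoring that maps a component's risk
# to a required code-verification coverage level. Scales and
# thresholds are illustrative assumptions.
def rpn(severity, occurrence, detection):
    """Risk Priority Number: product of three 1-10 ratings."""
    return severity * occurrence * detection

def coverage_level(score):
    """Map an RPN to a verification coverage requirement (example tiers)."""
    if score >= 200:
        return "branch coverage + independent code review"
    if score >= 80:
        return "branch coverage"
    return "statement coverage"

# Example: a component rated high severity, moderate occurrence,
# and poor detectability
score = rpn(severity=9, occurrence=4, detection=6)
print(score, coverage_level(score))
```

The point of the tiering is the one the text makes: verification effort is concentrated where the risk analysis says failure would hurt most.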

Peer review. All work outputs (requirements, design, code, test procedures) are peer reviewed by at least one independent reviewer.

Figure 14 illustrates the specific methodology that was employed at every Agile sprint until all features were completed. The verification assets were developed in parallel with the feature code development such that the requirements could be refined as early as possible and the development could be tested as soon as a build was available.

Figure 14: Software development methodology. Note that each iteration involves all the activities mentioned.

Ripple effects analysis (REA). One of the biggest challenges most software projects face is last minute requirement modifications. These can cause unpredictable delays and defects. In order to accommodate requirement changes, we established a checklist-based formal REA process that involved evaluating the specific requirement change and all the impacts it could have on the development and verification assets. This was peer reviewed by all the stakeholders to ensure that the change was adequately verified with minimal likelihood of side-effects and without an overly burdensome verification plan.
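
A checklist-driven REA can be represented very simply in tooling. The sketch below is a hypothetical illustration (the checklist items and change-request ID are invented, not the project's actual checklist); it shows how a script can report which impact areas of a requirement change remain unreviewed:

```python
# Hypothetical checklist for ripple effects analysis of a requirement
# change. Items and data are invented for illustration.
REA_CHECKLIST = [
    "Requirements documents updated and re-tagged",
    "Design documents impacted",
    "Source files impacted",
    "Existing test cases invalidated",
    "New test cases required",
    "Risk analysis (FMEA/fault tree) revisited",
]

def open_rea_items(answers):
    """Return checklist items not yet confirmed for a given change."""
    return [item for item in REA_CHECKLIST if not answers.get(item, False)]

# Example change request with only two items confirmed so far
open_items = open_rea_items({
    "Requirements documents updated and re-tagged": True,
    "Source files impacted": True,
})
print(f"Open REA items: {len(open_items)}")
```

The peer review described above would then sign off only when the open-item list is empty or each remaining item has a documented rationale.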

Outsourcing strategy
One of the challenges we encountered was the lack of Android and Java experience within the team and the relatively short deadline for project completion. Given these constraints, outsourcing was the only viable option. Nevertheless, as a team, we recognized the criticality of being able to internally maintain and extend the software over a long period (an approximate eight-year software platform lifecycle) independently of the outsource partner. Hence one of the goals of the outsourcing contract was to ensure that the deliverables were maintainable and extensible beyond the outsourcing contract. As shown in Figure 14, the elements of our outsourcing strategy included generating a statement of work, creating an Agile development process to support it, and devising ways to integrate the various external outsourcing partners.

Statement of work (SOW). Since Class III medical software requires significant process control that the outsource (OS) partner did not have, the SOW was broken into three major phases. The first phase was a quick sprint of implementing a single low-complexity feature. The goal of this sprint was for the outsource partner to learn the specific process that the in-house (IH) partner wanted, as well as to derive a basis to estimate the work needed to complete all the deliverables necessary to complete the project.

The second phase was the development of all features and the development of the verification assets (automated and manual). This phase was broken up into eight sprints of one month's duration each, during which both the OS and IH teams collaborated to complete development and verification of all the features. The first sprint involved prototyping the architecture so that feature development could proceed concurrently. Figure 15 illustrates the collaborative process that was employed for each of these sprints. The deliverables coming out of this process were as follows:

  • Design document for each feature and architectural component
  • Unit tests report indicating code and branch coverage for all components
  • Verification test case, protocol, and automated test script (if automated)
  • Source code (including unit test code)
  • Verification test execution report indicating results as well as log files of the execution
  • Screenshots for all screens in all languages (this was for the multi-lingual verification)
  • Automated test execution infrastructure and all test protocols and test cases
  • Work instructions for all process elements ranging from verification to software installation
  • Peer review reports for code/test case reviews
  • Validated set of screenshots per user experience testing using Android GUI widgets

The final phase was integration, verification, and validation, where the focus was to put the developed application through the entire set of verification and validation tests described in Part 1 of this series. The SOW condition was that all tests in this phase must either pass or a deviation had to be formally documented as acceptable.


Figure 15: Development Process within Each Sprint between In-House (IH) and OutSource Partner (OS)

Agile development process. During the second phase, the IH-OS team used the Agile development process shown in Figure 16. The set of features and components was planned for each sprint and verified toward the end of the sprint. The verification was initially performed manually; once the product had been correctly verified manually, the verification procedure was automated.

Figure 16: Agile approach

The reasoning for this strategy was to minimize rework on the verification scripts, as the implementation and GUI are likely to change until the verification procedure passes manually. Bugs found by the verification team were documented in Bugzilla (a defect lifecycle management tool) and reviewed daily to be scheduled for root-cause analysis and resolution based on the bug's severity. Defects that affected requirements (GUI look and feel) were escalated to the corporate product change management system, where they were reviewed cross-functionally and dispositioned accordingly.
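
The daily triage rule can be sketched in a few lines. The severity values, field names, and routing rule below are illustrative assumptions, not Bugzilla fields from the actual project:

```python
# Hypothetical defect triage: requirement-affecting defects escalate to
# corporate change management; the rest are scheduled for root-cause
# analysis in ascending severity order (1 = most severe). Data invented.
from operator import itemgetter

defects = [
    {"id": 101, "severity": 3, "affects_requirement": False},
    {"id": 102, "severity": 1, "affects_requirement": True},  # GUI look & feel
    {"id": 103, "severity": 2, "affects_requirement": False},
]

escalated = [d for d in defects if d["affects_requirement"]]
scheduled = sorted(
    (d for d in defects if not d["affects_requirement"]),
    key=itemgetter("severity"),
)

print("escalated:", [d["id"] for d in escalated])
print("scheduled:", [d["id"] for d in scheduled])
```

Separating the two streams keeps routine bug fixing moving while requirement-level changes get the cross-functional review the process demands.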

Outsourcing challenges. This was the first substantial co-development outsource project for the IH partner and the first Class III medical device project from the perspective of the OS partner. Hence, this brought three challenges for both teams:

Cultural differences. The OS team was overseas, and there were cultural nuances that needed to be accounted for, such as differences in the degree of non-verbal communication.

New process and quality methodology. The development process for a Class III medical device is dramatically different from that of other domains; for example, attendance at every review meeting must be taken down on paper with signatures. This was foreign to the OS partner, and the importance of these seemingly administrative tasks needed to be reinforced. Nevertheless, these activities were critical to ensuring a successful FDA audit as well as providing a traceable approach to accountability for quality.

Lack of experience on the verification team. The verification team lacked experience in safety-critical real-time embedded systems verification and the attention to detail necessary to ensure repeatable verification results and a high-quality verification protocol.

In order to address these challenges, both the IH and OS teams had a closed-loop system in place where the process was continually enforced. Each lead engineer in the OS team had a counterpart in the IH team who was accountable for ensuring that the process was enforced and that all technical decisions received the appropriate review.

One or more IH team members were present onsite throughout the project. Domain experts on the legacy platform and verification assets were also brought in during the early stages in order to ramp up the learning for the OS team. This ensured that there was always a direct contact who would review the OS team's output and reinforce the quality control process at the OS site.

Both the OS and IH managers had weekly and sometimes daily meetings to ensure a tight communication loop. Additionally, there were weekly verification and development technical calls between the entire IH team and senior OS team members.

Lessons learned about team buildup
One of the key lessons learned was how to build the team over time. The thought was to maximize development and verification capacity by immediately onboarding the complete outsource team full-time. This was done prior to having a stable architecture and a proven development/verification process, resulting in some inefficiencies and the necessity for some rework.

Instead, an alternative approach would be to have a small set of principal engineers develop the architectural stack, the interfaces to the business logic features, and the development/verification process and tools. The rest of the team would ideally be brought onboard only when these elements are stabilized. This would ensure that the new team members have a consistent development and verification framework to work from.

The combination of development methodology, outsourcing strategy, software architecture, and automated verification framework enabled a successful outcome. We believe that these concepts may be applicable for other companies facing similar challenges in porting their legacy technology to modern mobile tablet- or smartphone-based platforms.

Read Part 1: Building Class III medical software apps on an Android platform; Part 1 – A case study

Sri Kanajan is a software engineering consultant with 11 years of experience with safety-critical embedded systems in both the automotive and medical device fields. He has 18 publications and three best paper awards in these categories. He has an MS in electrical engineering from the University of Michigan and an MS in software engineering from Carnegie Mellon. He can be contacted at

Shrirang Khare is currently working with Persistent Systems Ltd. as a software architect. He did his bachelors in science (electronics) followed by postgraduate studies in computer management. He has more than 12 years' experience working in the wireless embedded and telecom industry developing middleware and application software. He specializes in working across the entire software stack for mobile phones, tablets, and other embedded platforms and is experienced in working with tier-one handset manufacturers and leading ISVs.

Richard Jackson has been a software and systems engineer for over 20 years, specializing in real-time, mobile, and safety-critical systems. He has worked for high-profile companies such as Microsoft, IBM, and Boston Scientific, and has spoken at numerous industry conferences.


Thanks to the outstanding team at Boston Scientific and Persistent Systems that executed this project. Thanks also to Pankaj Chawhan and Shripad Agavekar from Persistent Systems, who managed the India team, and Ken Persen, who provided the strategic management and vision. Thanks also to the principal engineers at Boston Scientific, Steven Donnelley, Hai Huang, Raetta Towers, Oscar Kuo, Shobana Venkatachelam, and Weiguang Shao, who took the project from concept to implementation. Raghu Belegur from PanaInfoTech deserves special mention as an editor as well.
