Ensuring product quality is not accomplished solely through testing and verification activities. Testing is but a fraction of the techniques at an organization’s disposal to improve its development quality. Good planning of the product incarnations, that is, a phased and incremental delivery of the feature content, makes it possible for an organization to employ test, inspections, and evaluation as tools for competitive advantage. To really improve (and prove) product quality, a more comprehensive approach is required (Figure 1).
Figure 1: Key elements in supporting product design and delivery are the use of test, inspection and evaluation at every step of the process.
Described in this article is the Test, Inspection & Evaluation Master Plan Organized (TIEMPO), which extends the Test and Evaluation Master Plan (TEMP) specified in IEEE 1220 and MIL-STD-499B (Draft) to support product quality.
TIEMPO expands the concept of a Test and Evaluation Master Plan by focusing on staged deliveries, with each product/process release being a superset of the previous release. We can consider these iterations our development baselines, closely linked to our configuration management activities. Each package is well defined; likewise, the test, inspection and evaluation demands are well defined for all iterations. Ultimately, the planned product releases are coordinated with evaluation methods for each delivery.
Philosophy of the Master Plan
At its core, TIEMPO assists in coordinating our product’s functional growth. Each package has a defined set of contents, to which our entire battery of quality-safeguarding techniques will be applied. This approach of defined builds of moderate size under constant critique has a very agile character, allowing for readily available reviews of the product.
In addition, TIEMPO reduces risk by developing superset releases wherein each subset remains relatively untouched. Most defects will reside in the new portion, since the previously developed part of the product or process is now a subset and has already been proven defect-free.
Should the previous iteration contain unresolved defects, we will have had the opportunity between these iterations to correct these defects. Frequent critical reviews are utilized to guide design and to find faults.
The frequent testing facilitates quality growth and reliability growth, and gives us data from which we can assess product readiness (whether we should launch the product). Experience suggests the following benefits arise:
- Well planned functional growth in iterative software and hardware packages
- Ability to prepare for test (known build content), inspection and evaluation activities based upon clearly-identified packages
- Linking test, inspection and evaluations to design iterations (eliminate testing or inspecting items that are not there)
- Reduced risk
- Identification of all activities to safeguard the quality—even before material availability and testing can take place
- Ease of stakeholder assessment, including customer access for review of product evolution and appraisal activities
Experience also indicates that at least 15% of the time associated with downstream troubleshooting is wasted in unsuccessful searches for data, simply due to a lack of meaningful information associations with the developmental process baselines. The TIEMPO approach eliminates this waste by ferreting out issues earlier in the process, freeing more dollars for up-front product refinement.
How TIEMPO works
This article describes how each of these pieces fits together. TIEMPO need not be restricted to phase-oriented product development; any incremental and iterative approach, including entrepreneurial activities, will benefit from its constant critique.
System Definition

The TIEMPO document and processes begin with the system definition. Linked to the configuration management plan, it describes the iterative product development baselines that build up to the full feature content of the final product. In other words, we describe each incarnation of the product in each package delivery. We are essentially describing our functional growth as we move from little content to the final product. Each package will add incremental feature content and bug fixes to the previous iteration.
By defining this up front, we are able to link the testing, inspection and evaluation activities not only to an iteration but to specific attributes of that iteration, and to capture these links in an associative data map in our CM system.
In the case of testing, we know the specific test cases we will conduct by mapping the product instantiation to the specifications and ultimately to test cases. We do this through our configuration management activities. We end up with a planned road map of the product development that our team can follow. Of course, as things change, we will again update the TIEMPO document through our configuration management actions.
Test or Verification

Test or verification consists of those activities typically associated with determining whether the product meets the specification or original design criteria. If an incremental and iterative approach is applied, prototype parts are constantly compared against specifications. The parts will not be made entirely from production processes, but the level of production content will increase as the project progresses and we approach our production intent.
Though prototype parts may not represent production in terms of durability, they should represent some reasonable facsimile of the shape and feature content congruent with the final product. We use these parts to reduce the risk by not jumping from idea to final product without learning along the way.
We should learn something from this testing to use in weighing the future quality of the resultant product. It is obvious how testing fits into TIEMPO. However, there are some non-obvious opportunities to apply it as well. We could use inspection on the test plan itself: did we get the testing scope correct? We can also apply this inspection technique to our test cases, analyzing whether we will indeed stress the product in a way valuable to our organization and project; not to mention that we can inspect software long before it is even executable.
The feedback from this inspection process will allow us to refine the testing scope, test cases, or any proposed non-specification or exploratory-based testing. The testing relationships in a typical HW/SW release plan are shown below in Figure 2.
Reliability Testing

In the case of reliability testing, we assess the probable quality behavior of the product or process over some duration. Finding failures in the field is a costly proposition, with returned parts often costing the producer to absorb from profit five to ten times the sales price, not to mention the intangible cost of customer dissatisfaction. For reliability testing, small sample sizes are used when a baseline exists (and we combine Weibull and Bayesian analytical techniques), or larger sample sizes without a baseline. Physical models are used for accelerated testing in order to compute probable product life. Inferior models will hamper our progress, especially when a baseline does not exist.
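As a minimal illustration of the life-model arithmetic involved, the two-parameter Weibull reliability function can be evaluated directly. The shape and scale values below are assumed purely for the sketch, not drawn from any real test program:

```python
import math

def weibull_reliability(t: float, beta: float, eta: float) -> float:
    """Probability the product survives to time t under a two-parameter
    Weibull life model: R(t) = exp(-(t / eta) ** beta)."""
    return math.exp(-((t / eta) ** beta))

# Illustrative parameters only: beta > 1 models wear-out failures;
# eta is the characteristic life (about 63.2% of units fail by t = eta).
beta, eta = 2.0, 1000.0  # eta in hours, assumed for this sketch
print(weibull_reliability(500.0, beta, eta))  # survival probability at 500 h
```

With a baseline population, fitted parameters like these can be combined with Bayesian updating to justify the smaller sample sizes the article mentions.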
Our approach is specified in the TIEMPO document, along with the specific packages (hardware/software) used to perform this activity (Figure 3). Thus our development and reliability testing are linked together via our configuration management work.
So, when do we start testing?
Many may believe that it is not possible to test a product without some measure of hardware or software samples available. It is possible to test if we have developed simulators to allow us to explore the product possibilities in advance of the material or software. This requires accurate models as well as simulation capability.
To ensure accurate models, we will run tests between our model results and real-world results to determine the gap and make necessary adjustments to the models. We may even use these tools to develop our requirements, if sophisticated enough. These activities reduce the risk and cost of the end design because we have already performed some evaluation of the design proposal.
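The model-versus-bench comparison can be sketched as a simple gap metric. The measurements and acceptance tolerance below are hypothetical, chosen only to show the shape of the check:

```python
def model_gap(model_results: list[float], measured: list[float]) -> float:
    """Mean absolute error between simulator predictions and bench
    measurements; used to decide whether the model needs adjustment."""
    assert len(model_results) == len(measured)
    return sum(abs(m - r) for m, r in zip(model_results, measured)) / len(measured)

# Hypothetical data: predicted vs. measured output voltage (V)
predicted = [12.0, 11.8, 12.4]
measured = [12.1, 11.9, 12.2]

gap = model_gap(predicted, measured)
TOLERANCE = 0.25  # assumed acceptance threshold for this sketch
print(gap, gap <= TOLERANCE)
```

If the gap exceeds the tolerance, the model is adjusted and the comparison rerun before the simulator is trusted for further design evaluation.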
As prototype parts become available, testing on these parts is done alone or in concert with our simulators. If we have staged or planned our function packages delivered via TIEMPO, we will test an incrementally improving product.
When we get into the heavy lifting of the product or service testing, we have a variety of methods in our arsenal. At this stage we are trying to uncover any product maladies we wish neither ourselves nor our customer to be impacted by. We will use approaches such as:
- Compliance testing (testing to specifications)
- Extreme testing (what does it take to destroy the product, and how does it fail)
- Multi-stimuli or combinatorial testing
- Stochastic testing (randomized exploratory)
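For the multi-stimuli or combinatorial approach above, a full-factorial set of test points can be enumerated directly. The factors and levels here are assumed for illustration only:

```python
import itertools

# Hypothetical stimuli for a combinatorial test campaign: each tuple is
# one combination of temperature, supply voltage, and vibration state.
temperatures = [-40, 25, 85]   # degrees C
voltages = [9.0, 12.0, 16.0]   # volts
vibration = ["off", "on"]

combinations = list(itertools.product(temperatures, voltages, vibration))
print(len(combinations))  # full factorial: 3 * 3 * 2 = 18 test points
```

When the full factorial grows too large, pairwise (all-pairs) reduction is a common way to keep the combinatorial idea while shrinking the test matrix.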
Reviews and Inspections

Reviews are analogous to inspections. The goal of reviews is to find problems in our effort as early as we can. There may be plenty of assumptions that are not documented or voiced in the creation of these products. The act of reviewing can ferret out the erroneous or deleterious ones, allowing us to adjust. We can employ a variety of review techniques on our project and product, such as:
- Concept reviews
- Product requirements reviews
- Specification reviews
- System design reviews
- Software design reviews
- Hardware design reviews
- Bill of materials reviews
- Project and product pricing reviews
- Test plan reviews
- Test case reviews
- Prototype inspections
- Technical and user manual reviews
- Failure Mode Effects Analysis (see immediately below)
The Design Failure Mode Effects Analysis (DFMEA) and the Process Failure Mode Effects Analysis (PFMEA) employed by the automotive industry can be applied to any industry. These tools represent a formal and well-structured review of the product and the production processes. The method forces us to consider the failure mechanism and the impact. If we have a historical record, we can take advantage of that record or even previous FMEA exercises. There are two advantages, the first of which is the prioritization of risk. The risk is a calculated number known as the Risk Priority Number (RPN) and is the product of:
- Severity (ranked 1-10)
- Probability (ranked 1-10)
- Detectability (ranked 1-10)
The larger the resulting RPN, the higher the risk, so we prioritize addressing these concerns first. The second advantage fits with the testing portion of TIEMPO. The FMEA approach links testing to those identified areas of risk as well. We may alter our approach, or we may choose to explore via testing as an assessment of our prediction. For example, let’s say we have a design that we believe may allow water intrusion into a critical area. We may then elect to perform some sort of moisture exposure test to see if we are right about this event and the subsequent failure we predict.
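The RPN arithmetic and prioritization can be sketched in a few lines. The failure modes and rankings below are invented for the example (the water-intrusion line item echoes the scenario above):

```python
# Hypothetical FMEA line items: (description, severity, probability,
# detectability), each factor ranked 1-10 per the FMEA convention.
failure_modes = [
    ("water intrusion into critical area", 9, 4, 6),
    ("connector corrosion", 6, 3, 4),
    ("display flicker", 3, 5, 2),
]

def rpn(severity: int, probability: int, detectability: int) -> int:
    """Risk Priority Number: the product of the three 1-10 rankings."""
    return severity * probability * detectability

# Address the highest-RPN concerns first.
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for desc, s, p, d in ranked:
    print(f"RPN {rpn(s, p, d):3d}  {desc}")
```

Here the water-intrusion mode scores 9 × 4 × 6 = 216 and tops the list, which is exactly the item the article suggests probing with a moisture exposure test.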
Evaluation and Validation

Evaluation can be associated with validation. With these activities we are determining the suitability of our product to meet the customer’s need. We are using the product (likely a prototype) as our customer would. If the prototype part is of sufficient durability, and the risk or severity due to malfunction is low, we may supply some of our closest customers with the product for evaluation.
Their feedback is used to guide the remaining design elements. This facilitates our analysis of the proposed end product. There are other ways to employ evaluation. Consider a supplier or manufacturer providing a product to a customer for subsequent resale. We may perform a run-at-rate assessment on the manufacturing line to measure probable production performance under nearly realistic conditions. Now we are evaluating the line under the stresses that would be there during production.
We may use the pieces produced during the run-at-rate assessment to perform our testing. This approach is reasonable, since the resulting parts will be off the manufacturing line and built under stresses comparable to those in full production. In fact, we may choose to use these parts in the early customer evaluations mentioned above.
Some inspection caveats

By definition, an inspection is a form of quality containment, which means trapping potential escapes of defective products or processes. The function of inspection, then, is to capture substandard material and to stimulate meaningful remediation. Inspection for such items as specification fiascoes prevents the defects from rippling through impending development activities, where the correction is much more costly.
The containment consists of updating the specification and incrementing the revision number while capturing a “lesson learned.” Reviews take time, attention to detail, and an analytical assessment of whatever is under critique; anything less will likely result in a waste of time and resources.
Product development phase considerations
Usually there are industry-specific product development processes. One hypothetical, generic model for such a launch process might look like the following:
- System Level
- Preliminary Design
- Critical Design
- Test Readiness
- Production Readiness
- Product Launch
For the process outlined above, we could expect to see a test (T), inspection (I) and evaluation (E) per phase as indicated by the chart in Figure 4. The design aspects will apply to process design just as much as to product or service design.
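A per-phase activity map of this kind is easy to capture as data. The T/I/E assignments below are illustrative assumptions only; they do not reproduce the actual chart in Figure 4:

```python
# Hypothetical phase-to-activity map (T = test, I = inspection,
# E = evaluation); assignments are invented for this sketch.
phase_activities = {
    "System Level": {"I", "E"},
    "Preliminary Design": {"I"},
    "Critical Design": {"T", "I"},
    "Test Readiness": {"T", "I"},
    "Production Readiness": {"T", "I", "E"},
    "Product Launch": {"E"},
}

def phases_with(activity: str) -> list[str]:
    """List the phases in which a given activity letter is planned."""
    return [p for p, acts in phase_activities.items() if activity in acts]

print(phases_with("T"))
```

Keeping the map in the TIEMPO document (and under configuration management) makes it simple to audit that every phase has its planned safeguards.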
There is no one silver bullet. Test, inspection and evaluation are instrumental to the successful launch of a new product. This is also true for a major modification of a previously released product or service.
We need not limit these techniques to the product; we can also employ them on a new process or even a developing service. Both testing and inspection provide for verification and validation that the product and the process are functioning as desired. We learn as we progress through the development.
If all goes well, we can expect a successful launch. The automotive approach has been modified and used in the food and drug industry as the Hazard Analysis and Critical Control Point (HACCP) system. Critical control points are often inspections for temperature, cleanliness, and other industry-specific requirements.
Jon M. Quigley, PMP CTFL, is a principal and founding member of Value Transformation, a product development training and cost improvement organization established in 2009, as well as being Electrical/Electronic Process Manager at Volvo Trucks North America. Jon has an Engineering Degree from the University of North Carolina at Charlotte, and two Master’s Degrees from City University of Seattle. Jon has more than twenty years of product development experience, ranging from embedded hardware and software through verification and project management.
Kim Robertson started his first company at the age of 18 and has an extensive background spanning forty years in all aspects of business and aerospace. He is the author of over 100 discipline-specific training packages, three fiction books, and articles for various trade publications from industrial arts to configuration management. His interests in education and training development started in his teens. He is an NDIA-certified Configuration Manager with degrees from Westminster College in Mathematics and Physical Sciences and a Master’s degree from the University of Phoenix in Organizational Management with a subspecialty in Government Contracts.