Wedding simulation with automated testing allows test organizations to achieve benefits such as increases in testing speed (throughput), increases in test coverage for both hardware and software, and the ability to test before hardware becomes available. In this article, we will describe each type of approach in turn and then how they can work together synergistically.
Simulation generally refers to a model of a process or function; for example, we can simulate the general behavior of a manufacturing process, a motor vehicle, or any other object for which we have knowledge about inputs, outputs, and behavior.
Both simulation and testing have specific goals. For simulation, we want to facilitate requirements generation, uncover unknown design interactions and details, and reduce development cost by having fewer actual parts.
Much of this activity facilitates testing by quantifying the requirements, making testing more productive (Figure 1, below). For testing, we want to achieve defect containment, reduced product warranty cost, and some level of statistical indication of design readiness.
|Figure 1. Simulation Uses|
Automated testing involves the following components:
* The use of scripts to drive tests
* Hardware to support the scripts
* The use of other scripts to record results
* A theory or philosophy of testing
* The ability to detect faults
In short, automated testing is nearly always a hardware and software proposition.
Scripting. The automated test team can use scripting languages such as Ruby, Python, Perl, or other languages, so long as they have toolboxes to help drive the hardware side of the process. We have also used Visual Basic for Applications driving a spreadsheet and hooked to one of Scilab, Matlab, or Labview as a test and documentation driver.
The bottom line is that the driver must be sophisticated enough to run the tests unaided, and personnel must be appropriately skilled in order to design, test, and execute the code.
We can record the results of our testing using the script language we used to execute the tests. These results are recorded objectively by measuring pre-defined outputs and known failure conditions against requirements.
A sophisticated test would also account for previously unknown failure conditions by flagging any behavior outside an expected range as a fault. The script language also writes results to the test plan, which creates the report, and then the script publishes the report.
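The idea of a script that drives a test, checks each measured output against pre-defined limits, flags out-of-range behavior as a fault, and records the results can be sketched in a few lines of Python. All names here (`TestCase`, `run_test_case`, the stand-in `measure` function) are illustrative assumptions, not a real framework:

```python
# Hypothetical sketch of a script-driven test recorder: each test case
# carries an expected output range; anything outside it is flagged.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    stimulus: float
    low: float    # lower spec limit for the measured output
    high: float   # upper spec limit

def measure(stimulus: float) -> float:
    """Stand-in for the hardware measurement; here a simple gain model."""
    return 2.0 * stimulus

def run_test_case(tc: TestCase) -> dict:
    observed = measure(tc.stimulus)
    status = "PASS" if tc.low <= observed <= tc.high else "FAULT"
    return {"test": tc.name, "observed": observed, "status": status}

# The same script both executes the tests and records the results.
results = [run_test_case(tc) for tc in (
    TestCase("nominal", 1.0, 1.8, 2.2),
    TestCase("overdrive", 5.0, 1.8, 2.2),   # outside range -> flagged
)]
```

In a real rig, `measure` would call into the instrumentation toolbox, and the `results` list would be written into the test plan that generates the report.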
Hardware tools. While implementing automated testing, it is necessary to use a variety of tools, including:
* Cameras for visual indication
* Mass media storage for data link, analog and digital information
* Scopes and meters
* Real system hardware
* Actual product hardware
* Mechanical actuators
* Temperature/humidity test boxes
A shopping list like this can make the hardware portion of automated testing expensive. Hence, we need always to ensure that automated testing provides the untiring speed and correctness that we cannot achieve with human labor. Comparing the hours required to test against the material cost to test will provide some indication regarding the investment.
Testing theory. We believe it is wise to establish a testing theory or approach to unify the test plans and provide a rationale for the architecture of the test suites.
In general, we would expect to see an element of compliance testing, which is executed to written and derived requirements and consists of routine or expected actions.
An extension to this type of testing is combinatorial testing, wherein all inputs receive stimulation and the expected response values are known. When properly designed, combinatorial testing may also elicit failures from factor interactions.
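A minimal sketch of combinatorial stimulation: enumerate every combination of input factor levels (a full factorial) and check each response against its known expected value. The factor levels and the stand-in `response` function are assumptions for illustration only:

```python
# Full-factorial combinatorial test sketch: every combination of the
# three factors is exercised against a known expected response.
from itertools import product

voltages = [9.0, 12.0, 16.0]       # assumed supply-voltage levels
temperatures = [-40, 25, 85]       # assumed ambient temperatures (deg C)
modes = ["sleep", "run"]           # assumed operating modes

def response(v, t, mode):
    """Stand-in for the unit under test: sleep mode should output zero."""
    return 0 if mode == "sleep" else round(v * 0.1, 2)

cases = list(product(voltages, temperatures, modes))   # 3 * 3 * 2 = 18
failures = [(v, t, m) for v, t, m in cases
            if m == "sleep" and response(v, t, m) != 0]
```

For large factor spaces, a pairwise subset of `cases` is often substituted for the full product, since interaction faults most commonly involve only two factors.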
Finally, we expect to see destructive testing (in the lab, but not in production), where we will overstress the product beyond specification or design limits in order to characterize the design, failure, and destruction limits.
Detecting faults. Automated test equipment must be able to detect faults or deviations from the expected response. We can accomplish this through clear identification of individual pass/fail criteria.
In some cases, we may believe we have identified all failure modes; in other cases, anything that is not nominal should be flagged for review. Often, in order to detect faults, the automated tester may need the ability to do optical character reading, calibration/movement detection with spec limits, sensing of signal limits, and color and illumination detection, to name a few.
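The two fault-detection policies above (known failure modes versus "anything non-nominal gets flagged") can be combined in one classifier. The signal levels and limits below are hypothetical:

```python
# Sketch of a three-way fault classifier: known failure levels are FAULTs,
# anything else outside nominal +/- tolerance is flagged for review, and
# the rest passes. All numeric limits are illustrative assumptions.
def classify(signal_mV, nominal=3300, tolerance=50, known_faults=(0, 5000)):
    if signal_mV in known_faults:       # identified failure mode
        return "FAULT"
    if abs(signal_mV - nominal) > tolerance:
        return "REVIEW"                 # non-nominal but not a known fault
    return "PASS"
```

The "REVIEW" bucket is what lets an automated tester surface previously unknown failure conditions instead of silently passing them.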
Objectives of simulation
Simulations can be performed to analyze the behavior of a system. Not all simulations are used for automated testing. Regardless, general objectives consist of:
* Evaluation of various design concepts quickly without material investment
* Demonstration of system integration (peer reviews and customer feedback)
* Refinement of design concepts by predicting effects of performance parameters on system behavior
* Verification that the simulated object performs correctly across a wide range of nominal and fault scenarios
* Identification of key variables that impact the system and realization of the system implications, particularly with respect to potential interactions
* Reliability consequences
* Theoretical simulation results as reference for the practical verification
Simulation is not simply hardware done in software. Often, we have various mixes of hardware and software. For example, constructive simulators are purely computational, with all elements, including the hardware, simulated on a computer.
On the other hand, we can have virtual simulators, wherein part of the simulation runs in hardware and other parts of the system or systems are simulated in pure software. Finally, we might use live simulation, which provides live, contrived exercises and is often used to stress system limits (e.g., aircraft and vehicle dynamics simulators).
The military use of live fire is a form of simulation, and it has analogues in other test environments. It requires a set of artificially contrived demands upon the system, which start at nominal and become progressively severe.
An example applied to civilian use could be vehicle stability testing via interaction with other vehicles and obstacles. In most cases, we are trying to get close to real-life conditions.
To begin to develop a simulation, it is important to go through the following process flow:
* Identify the simulation goals for our specific project
* Prepare for simulation by:
* Identifying parameters
* Modeling parameters
* Run the prototype simulation
* Perform the test (physical test to compare actual performance to simulation results)
* Gather data
* Compare test results with model parameters
* Identify new parameters needed
* Re-run simulation if needed and gather data again
* Analyze data
* Update models
* Determine if additional parameters are necessary
* Review the range of parameter values as a sanity check
* Design updates
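The heart of the process flow above, comparing simulation output with physical test data and updating the model until they agree, can be sketched as a simple refinement loop. The single `gain` parameter and the proportional update rule are illustrative assumptions, not a prescribed method:

```python
# Hedged sketch of the model-refinement iteration: run the simulation,
# compare its prediction against the measured test result, and adjust a
# model parameter until the error falls inside an acceptance band.

def simulate(gain, stimulus):
    """One-parameter model of the system: output = gain * stimulus."""
    return gain * stimulus

def refine(measured, stimulus, gain=1.0, tol=0.01, max_iters=100):
    for i in range(max_iters):
        predicted = simulate(gain, stimulus)
        error = measured - predicted
        if abs(error) <= tol:
            return gain, i                   # model now matches test data
        gain += 0.5 * error / stimulus       # simple proportional update
    return gain, max_iters

# Physical test measured 24.0 at stimulus 10.0, so the true gain is 2.4.
gain, iters = refine(measured=24.0, stimulus=10.0)
```

Each pass through `refine` corresponds to one trip around the "re-run simulation, gather data, update models" loop; in practice the comparison would span many parameters and many test points.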
Clearly, we may need several iterations before developing a simulator/simulation that provides sufficient verisimilitude to be valuable, as illustrated in Figure 2 below.
|Figure 2. Simulation Activities|
In making a decision about simulation, it is necessary to evaluate the trade-offs among three kinds of simulation models:
* Discrete-event simulation
* Agent-based simulation
* Real-time simulation
Discrete-event simulation. With discrete-event simulation, events occur chronologically but not necessarily in real time. The simulator responds to events as if it were a state machine, with specific events triggering changes of state.
This kind of simulator is often used for accelerated analyses of factors in the simulation model. It is also commonly used for automated testing. Some examples of discrete-event simulators are the commercial manufacturing plant simulator ARENA and the open source tool SimPy, which is a Python-based simulator.
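The core mechanism of a discrete-event simulator is small enough to sketch without any library: events sit in a time-ordered queue, and the clock jumps from one event to the next rather than advancing in real time. The event names below are invented for illustration; SimPy provides the same idea with far more machinery:

```python
# Minimal discrete-event kernel: a priority queue of (time, name) events
# processed in chronological order, regardless of insertion order.
import heapq

events = []                         # the pending-event queue
def schedule(time, name):
    heapq.heappush(events, (time, name))

log = []
schedule(5.0, "part_arrives")       # events scheduled out of order...
schedule(1.0, "machine_ready")
schedule(9.5, "part_done")

clock = 0.0
while events:
    clock, name = heapq.heappop(events)   # ...but fired in time order
    log.append((clock, name))
```

Because the clock jumps directly between events, hours of simulated plant time can execute in milliseconds, which is what makes this style attractive for accelerated analyses and automated testing.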
Agent-based simulation. With agent-based simulation, the focus is less on events and more on the behavior of agents. The agents function autonomously or semi-autonomously. We can achieve complex behavior from simple rules, as well as emergent behavior from simple rules and relatively few components.
Examples of uses of agent-based simulation are ant colony optimization and swarm optimization. One open source tool for agent-based simulation is the NetLogo language.
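A toy example of emergent behavior from one simple local rule: each agent moves a fraction of the way toward the group's mean position, and from that rule alone the group converges (a one-dimensional cousin of the cohesion rule in swarm models). The positions and rate are arbitrary illustrative values:

```python
# Agent-based sketch: every agent applies the same local rule each step
# (move 20% of the way toward the group mean); clustering emerges.
def step(positions, rate=0.2):
    mean = sum(positions) / len(positions)
    return [p + rate * (mean - p) for p in positions]

agents = [0.0, 4.0, 10.0]           # initial positions on a line
for _ in range(20):
    agents = step(agents)
spread = max(agents) - min(agents)  # shrinks by 20% per step
```

No agent is told to cluster; the grouping is a property of the population, not of any individual rule, which is the defining trait of agent-based models.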
Real-time simulation. Real-time simulators are often used for training purposes; speeding them up can improve participant reflexes. A typical example is a flight simulator used to train or refresh pilots.
In a real-time simulator, events occur in correspondence with actual conditions. A simple example is the Microsoft Flight Simulator. Often, hardware-in-the-loop simulations attempt to come as close to real-time simulation as possible.
Simulation as test preparation
We can also use simulation as a tool for the preparation of all kinds of tests, including automated testing. This approach extends beyond requirement elicitation. The factors involved are the following:
* Set up test scenarios
* Set up test environment
* Identify key test measurables and instrumentation needs
* Human resource needs
* Material resource needs
* Test sequencing
* Identification of “passing” criteria
Conflict between simulation and test results
Because simulators are actually instantiations of models, we will occasionally see a conflict between the abstraction of the model and the reality of actual product testing.
On the testing side, we would review our test assets, our measurement methods, our tactics, and the operational environment. On the simulator side, we would review the model for accuracy and identify any missed parameters as well as the ranges of the previously identified parameters.
We divide simulation into the following categories of complexity and comprehensiveness:
* Scripting or programming to provide realistic stimuli for hardware/software
* Occasional special hardware
* Different levels
* Light simulation
* Medium simulation
* Heavy simulation
* Distributed simulation
Light simulation. What we call 'light' simulation occurs in software when the software engineer 'feeds' data to new routines through the argument list, builds a wrapper to provide simulated data to routines, and white-box testing is permitted.
White-box testing occurs when the tester knows the internals of the function under test. In this instance, we would monitor the impact of this simulated information within the various software routines.
In black-box testing, we only know the inputs and outputs, and we observe the behavioral changes of the outputs as the input values are changed. In this instance, the simulation's effects on the outputs are what we critique.
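The 'light' simulation pattern above can be sketched directly: a wrapper feeds canned data to the routine under test through its argument list, so no sensor hardware is needed. The routine, threshold, and sample values are illustrative assumptions:

```python
# 'Light' simulation sketch: simulated sensor readings are fed to the
# routine under test through its argument list via a small wrapper.
def overtemperature_alarm(temp_c, limit_c=90.0):
    """Routine under test: returns True when the temperature exceeds the limit."""
    return temp_c > limit_c

def simulated_feed(routine, samples):
    """Wrapper: drives the routine with canned data instead of a live sensor."""
    return [routine(s) for s in samples]

results = simulated_feed(overtemperature_alarm, [25.0, 91.5, 89.9])
```

In white-box use, we would also inspect state inside `overtemperature_alarm`; in black-box use, only the `results` list is critiqued.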
Medium simulation. With 'medium' simulation, we can have the hardware simulated or emulated. The stimuli will apparently come from outside the product code and/or hardware under test, and the interactions should be detectable. In some cases, we are using the actual hardware for part of the test or study activity and software for the remainder.
Heavy simulation. Under 'heavy' simulation, all potential input devices are simulated or in hardware; in some cases, all devices are simulated using software. All potential input ranges are exercised on the main system under study, and there will be no white-box testing.
Distributed simulation. Distributed simulators involve geographically separated components communicating across a network. In fact, some devices may not reside on the test bench or in a laboratory. We would expect to see multiple simulators that stimulate the unit under test running on different systems.
Most often, these kinds of simulator systems are used in Department of Defense scenarios. Some major commercial vehicle and automotive companies will use these kinds of systems to simulate multiple controllers or ECUs (electronic control units) on a data bus.
As with all test equipment, the simulator must be validated; that is, we must compare our model with reality to ensure adequate levels of verisimilitude. In the case of a supplier, it is wise to solicit customer input. We suggest that the behavior of existing subsystems must be known thoroughly.
The simulator must be good enough, but it does not have to be better than that; in short, it needs accurate signals and the ability to randomize or add noise to behavior.
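"Accurate signals plus controllable noise" is a one-liner to sketch. The signal value and noise level here are illustrative assumptions:

```python
# Sketch of a simulated signal source: the nominal value plus optional
# Gaussian noise, with a seedable generator for repeatable test runs.
import random

def simulated_signal(nominal, noise_sd=0.0, rng=None):
    rng = rng or random.Random()
    return nominal + rng.gauss(0.0, noise_sd)

rng = random.Random(42)                            # seeded for repeatability
clean = simulated_signal(3.3, 0.0, rng)            # exact signal
noisy = [simulated_signal(3.3, 0.05, rng) for _ in range(1000)]
mean = sum(noisy) / len(noisy)                     # should stay near 3.3
```

Seeding the generator matters for automated testing: a failing run can be replayed with identical "random" noise, keeping the simulator's behavior reproducible while still exercising off-nominal values.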
If we go beyond a certain point, the simulator ceases to simulate and becomes the actual device simulated, which misses the cost-effectiveness and malleability of using a simulator in the first place. This setup then becomes distributed systems testing.
Simulation and automated testing together
Finally, we use automated testers (predominantly hardware) and simulators (principally software) to enable automated testers to provide early warning of issues during product or service development (Figure 3, below).
Simulators can save money when the hardware is difficult to acquire or expensive. We believe that the use of automated testing and simulation in tandem is a good marriage.
|Figure 3. Simulation/Verification Over Project Duration|
As we have seen, automated testing, when used wisely, speeds up routine testing, confirms numerous design permutations, may be used with combinatorial testing to discover interactions, and can be executed full-time (24/7) because the machines don't tire.
Simulation allows for requirements identification, evokes unknown interactions, provides for testing before hardware delivery, can execute “what if” scenarios, and allows complete control of stimuli.
Together, automated testing and simulation provide a powerful tool for executing tests, eliciting problems, and characterizing new products.
Kim H. Pries is Director of Product Integrity and Reliability/Quality Management System at Stoneridge Electronics – North America, where he is responsible for all test and evaluation activities including laboratory, calibration, hardware-in-the-loop software testing, and automated test equipment. Jon Quigley is EE Systems and Verification Manager at Volvo 3P.