What you need to know about automated testing and simulation
Combining simulation with automated testing allows test organizations to achieve benefits such as increased testing speed (throughput), increased test coverage for both hardware and software, and the ability to test before hardware becomes available. In this article, we describe each approach in turn and then show how they can work together synergistically.
Simulation generally refers to a model of a process or function; for example, we can simulate the general behavior of a manufacturing process, a motor vehicle, or any other object for which we have knowledge about inputs, outputs, and behavior.
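To make this concrete, here is a minimal sketch of simulating one such object from known inputs, outputs, and behavior: a DC motor modeled as a first-order lag. The gain and time constant are illustrative values, not taken from any real motor datasheet.

```python
def simulate_motor(voltage, steps=1000, dt=0.001, gain=100.0, tau=0.05):
    """Return motor speed (rpm) after `steps` Euler steps at input `voltage`.

    Illustrative first-order model: d(speed)/dt = (gain*voltage - speed) / tau.
    """
    speed = 0.0
    for _ in range(steps):
        # Euler integration of the first-order response
        speed += dt * (gain * voltage - speed) / tau
    return speed

# One second of simulated time; the speed settles near gain * voltage.
final_speed = simulate_motor(12.0)
```

With this model, 12 V settles near 1,200 rpm; a test team can exercise such a model long before any motor hardware exists.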
Both simulation and testing have specific goals. For simulation, we want to facilitate requirements generation, uncover unknown design interactions and details, and reduce development cost by having fewer actual parts.
Much of this activity facilitates testing in quantifying the requirements, making testing more productive (Figure 1, below). For testing, we want to achieve defect containment, reduced product warranty cost, and some level of statistical indication of design readiness.
Figure 1: Simulation uses
Automated testing involves the following components:
* The use of scripts to drive tests
* Hardware to support the scripts
* The use of other scripts to record results
* A theory or philosophy of testing
* The ability to detect faults
In short, automated testing is nearly always a hardware and software proposition.
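The components above can be sketched in one small harness: a script drives each test, a hardware interface applies the stimulus, and results are recorded against pass/fail limits. The `FakeRig` class below is a hypothetical stand-in for real instrument drivers, and the limit values are invented for illustration.

```python
class FakeRig:
    """Hypothetical hardware interface; a real rig would wrap scopes, meters, etc."""

    def apply(self, stimulus):
        # Pretend the unit under test doubles its input.
        return stimulus * 2.0


def run_suite(rig, cases):
    """Drive each test case and record a verdict.

    cases: list of (stimulus, low_limit, high_limit) tuples.
    """
    results = []
    for stimulus, low, high in cases:
        measured = rig.apply(stimulus)
        verdict = "PASS" if low <= measured <= high else "FAIL"
        results.append({"stimulus": stimulus, "measured": measured, "verdict": verdict})
    return results


report = run_suite(FakeRig(), [(1.0, 1.9, 2.1), (2.0, 3.9, 4.1), (3.0, 7.0, 8.0)])
```

The third case fails because the measured value (6.0) falls below its lower limit, which is exactly the kind of deviation the script must catch unaided.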
Scripting. The automated test team can use scripting languages such as Ruby, Python, or Perl, or other languages, so long as the language has toolboxes to help drive the hardware side of the process. We have also used Visual Basic for Applications driving a spreadsheet and hooked to one of Scilab, MATLAB, or LabVIEW as a test and documentation driver.
The bottom line is that the driver must be sophisticated enough to run the tests unaided, and personnel must be appropriately skilled to design, test, and execute the code.
We can record the results of our testing using the script language we used to execute the tests. These results are recorded objectively by measuring pre-defined outputs and known failure conditions against requirements.
A sophisticated test would also account for previously unknown failure conditions by flagging any behavior outside an expected range as a fault. The scripting language also writes results to the test plan, which creates the report, and then the script publishes the report.
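A sketch of that recording step, under assumed limits: measurements are checked against a requirement range, anything outside it is flagged for review (covering previously unknown failure conditions), and the results are written out as a report. The 5 V rail and its ±5% limits are illustrative.

```python
import csv
import io


def write_report(measurements, low=4.75, high=5.25):
    """Check each measurement against limits and return a CSV report string.

    Values outside [low, high] are FLAGGED even when no known failure
    mode matches, so unexpected behavior still surfaces for review.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["measurement", "status"])
    for m in measurements:
        status = "OK" if low <= m <= high else "FLAGGED"
        writer.writerow([m, status])
    return buf.getvalue()


# 5.6 V falls outside the expected range and is flagged.
report_text = write_report([5.0, 5.1, 5.6])
```

In practice the published report would go to a file or test-management system rather than a string, but the objective comparison against pre-defined outputs is the same.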
Hardware tools. While implementing automated testing, it is necessary to use a variety of tools, including:
* Cameras for visual indication
* Mass media storage for data link, analog and digital information
* Scopes and meters
* Real system hardware
* Actual product hardware
* Mechanical actuators
* Temperature/humidity test boxes
A shopping list like this can make the hardware portion of automated testing expensive. Hence, we always need to ensure that automated testing provides the untiring speed and correctness that we cannot achieve with human labor. Comparing the labor hours required for manual testing with the material cost of automated testing will provide some indication of whether the investment is justified.
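A rough break-even calculation illustrates the comparison; every figure below is a made-up placeholder, not industry data.

```python
# Illustrative inputs to the hours-versus-material comparison.
labor_rate = 60.0            # $/hour for a manual tester (assumed)
manual_hours_per_run = 40.0  # hands-on hours for one full manual pass
automated_hours_per_run = 2.0  # attended hours per automated pass
automation_cost = 48000.0    # hardware plus script development (assumed)

# Labor saved each time the automated suite replaces a manual pass.
savings_per_run = labor_rate * (manual_hours_per_run - automated_hours_per_run)

# Number of test runs before the automation pays for itself.
runs_to_break_even = automation_cost / savings_per_run
```

With these numbers the rig pays for itself after roughly 21 runs; a product tested dozens of times per release clears that bar quickly, while a one-off test may not.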
Testing theory. We believe it is wise to establish a testing theory or approach to unify the test plans and provide a rationale for the architecture of the test suites.
In general, we would expect to see an element of compliance testing, which is executed against written and derived requirements and consists of routine or expected actions.
An extension to this type of testing is combinatorial testing, wherein all inputs receive stimulation and the expected response values are known. When properly designed, combinatorial testing may also elicit failures from factor interactions.
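Generating such a test matrix is straightforward in a scripting language: take the cross-product of every factor's levels so that each input receives stimulation at each level, and factor interactions get exercised. The factor names and levels below are illustrative.

```python
from itertools import product

# Hypothetical input factors and their stimulation levels.
factors = {
    "voltage": [9.0, 12.0, 16.0],
    "temperature_c": [-40, 25, 85],
    "ignition": ["off", "on"],
}

# Full-factorial test matrix: one test case per combination of levels.
test_matrix = [dict(zip(factors, combo)) for combo in product(*factors.values())]
```

Here 3 x 3 x 2 levels yield 18 cases. Full factorials grow quickly, which is why properly designed combinatorial (e.g., pairwise) subsets are often used once the factor count rises.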
Finally, we expect to see destructive testing (in the lab, but not in production), where we will overstress the product beyond specification or design limits in order to characterize the design, failure, and destruction limits.
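A common way to automate this characterization is step-stress: raise the stress in steps past the specification limit until the unit fails, then record the margin. The failure model below is invented purely for illustration; a real unit's behavior would come from the hardware itself.

```python
SPEC_LIMIT = 16.0  # V, maximum per the written requirement (assumed)


def unit_survives(voltage):
    """Stand-in for the real unit under test: pretend it fails above 22 V."""
    return voltage <= 22.0


def find_failure_limit(start=SPEC_LIMIT, step=1.0, ceiling=40.0):
    """Raise stress in steps until failure; return the first failing level."""
    v = start
    while v <= ceiling and unit_survives(v):
        v += step
    return v


failure_at = find_failure_limit()
margin = failure_at - SPEC_LIMIT  # design margin beyond the specification
```

With this invented model the unit fails at 23 V, a 7 V margin over specification; repeating the sweep across samples characterizes the design, failure, and destruction limits.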
Detecting faults. Automated test equipment must be able to detect faults or deviations from the expected response. We can accomplish this through clear identification of individual pass/fail criteria.
In some cases, we may believe we have identified all failure modes; in other cases, anything that is not nominal should be flagged for review. To detect faults, the automated tester may need capabilities such as optical character recognition, calibration/movement detection against specification limits, sensing of signal limits, and color and illumination detection, to name a few.
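The per-channel pass/fail idea can be sketched as follows: each measured channel carries its own limits, and any out-of-nominal value is flagged for review even when no known failure mode matches it. The channel names and limits are invented.

```python
# Hypothetical per-channel pass/fail criteria: channel -> (low, high).
LIMITS = {
    "rail_5v": (4.75, 5.25),
    "current_ma": (0.0, 150.0),
    "display_lux": (300.0, 700.0),
}


def detect_faults(sample):
    """Return the channels whose measured value falls outside its limits.

    sample: dict mapping channel name -> measured value.
    """
    return [
        channel
        for channel, value in sample.items()
        if not (LIMITS[channel][0] <= value <= LIMITS[channel][1])
    ]


# 180 mA exceeds the 150 mA limit, so only that channel is flagged.
faults = detect_faults({"rail_5v": 5.1, "current_ma": 180.0, "display_lux": 650.0})
```

More exotic detectors (optical character recognition, color sensing) ultimately reduce to the same pattern: convert the observation to a value and compare it against nominal limits.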