Building a software test and regression plan
Coming up with a test plan is one of the hardest jobs in a product company. Engineers like to build cool products; marketing likes to tout the feature list; and QA/Test lingers in the background with little investment and a tiny slot in the schedule. Yet productivity is only as good as the quality of your product.
This article discusses our experience at Mirabilis Design in developing a comprehensive test and support plan. The complexity of the software required that we split the testing into sections: graphical user interface, feature library, documentation, web links, simulator/analytics, and database. This split allowed some of the tests to be fully automated while others required visual inspection.
The purpose of the test plan is to maximize product quality, test the largest number of operating scenarios, ensure that models are upward compatible, and identify incorrect operations.
In addition, the test plan is designed to address several unique features. In the case of our software these were mixed-domain simulation and hierarchical considerations for memory, virtual connection, and virtual machine operations. In describing the plan we came up with, we will cover these aspects of software testing:
1. Background, Scope, Defects and Failures, Compatibility, Input Combinations and Preconditions, Static vs. Dynamic Testing, Software Verification and Validation, Software Testing Team, Software Quality Assurance (SQA)
2. Concept of Baseline Functionality
3. Baseline Libraries
4. New Libraries Based on Baseline Functionality
5. Regression Testing
6. Other Specific Block Issues to be Addressed
Mirabilis Design provides modeling and simulation solutions for exploring the performance and power consumption of applications running on complex embedded systems. Using the graphical environment, VisualSim, developers can create virtual environments to test their application performance and size the hardware platform based on metrics such as end-to-end response time, task deadline, scheduling schemes and reliability.
In our experience, as software evolves, the reemergence of faults is quite common. Sometimes it occurs because a fix gets lost through poor revision control practices (or simple human error in revision control). Often a fix for a problem is "fragile": the update fixes the problem in the narrow case where it was first observed, but not in more general cases that may arise over the lifetime of the software.
Finally, it has often been the case that when some feature is redesigned, the same mistakes will be made in the redesign that were made in the original implementation of the feature.
Therefore, in most software development situations it is considered good practice that when a bug is located and fixed, a test that exposes the bug is recorded and regularly retested after subsequent changes to the program.
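As an illustration of pinning a fix with a permanent regression test, consider the following sketch. The `average` function and its empty-input crash are hypothetical, not part of VisualSim; the point is that the test reproducing the original failure stays in the suite forever.

```python
# Hypothetical example: average() once crashed on an empty list
# (ZeroDivisionError). The guard below is the fix, and the regression
# test pins the behavior so the bug cannot silently reappear.

def average(values):
    """Return the arithmetic mean, or 0.0 for an empty sequence."""
    if not values:  # the original bug: this guard was missing
        return 0.0
    return sum(values) / len(values)

def test_average_regular():
    # Ordinary functional case.
    assert average([2, 4, 6]) == 4.0

def test_average_empty_list_regression():
    # Regression test for the empty-input crash; retained permanently.
    assert average([]) == 0.0

if __name__ == "__main__":
    test_average_regular()
    test_average_empty_list_regression()
    print("regression tests passed")
```

Recording the test alongside the fix means any later change that reintroduces the crash is caught on the next run, not by a customer.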
Although this may be done manually, it is often done using automated testing tools. Such a test suite contains software tools that allow the testing environment to execute all the regression test cases automatically; some projects even set up automated systems to re-run all regression tests at specified intervals and report any failures (which could imply a regression or an out-of-date test). Common strategies are to run such a system after every successful compile (for small projects), every night, or once a week.
Figure 1. Flow diagram of regression testing (Source: base77.com)
Regression testing (Figure 1, above) is an integral part of the extreme programming software development method. In this method, design documents are replaced by extensive, repeatable, and automated testing of the entire software package at every stage in the software development cycle.
Traditionally, in the corporate world, regression testing has been performed by a software quality assurance team after the development team has completed work. However, defects found at this stage are the most costly to fix. This problem is being addressed by the rise of developer testing.
Although developers have always written test cases as part of the development cycle, these test cases have generally been functional tests or unit tests that verify only intended outcomes. Developer testing compels a developer to focus on unit testing and to include both positive and negative test cases.
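To make the positive/negative distinction concrete, here is a sketch with a hypothetical input validator (not part of VisualSim): the positive case checks the intended outcome, while the negative cases assert that invalid input fails in the specified way rather than being silently accepted.

```python
import unittest

def parse_port(text):
    """Parse a TCP port number from a string; raise ValueError if invalid."""
    value = int(text)  # raises ValueError for non-numeric input
    if not 1 <= value <= 65535:
        raise ValueError(f"port out of range: {value}")
    return value

class PortTests(unittest.TestCase):
    # Positive case: verifies the intended outcome.
    def test_valid_port(self):
        self.assertEqual(parse_port("8080"), 8080)

    # Negative cases: verify that bad input fails as specified.
    def test_non_numeric_rejected(self):
        with self.assertRaises(ValueError):
            parse_port("http")

    def test_out_of_range_rejected(self):
        with self.assertRaises(ValueError):
            parse_port("70000")

if __name__ == "__main__":
    unittest.main()
```

A suite containing only the positive case would pass even if `parse_port` accepted port 70000; the negative cases are what catch that class of defect.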