How I test software
Waterfall vs. agile
There's a tension between the classical, waterfall-diagram approach to design (separate phases for requirements, design, code, and test) and the iterative, spiral-development approaches. I'm 100% convinced that the waterfall model doesn't work. Never did. It was a fiction that we presented to the customer, all the while doing something else behind the scenes.
What's more, I can tell you why it doesn't work: we're not smart enough.
The waterfall approach is based on the idea that we can know, at the outset, what the software (or the system) is supposed to do, and how. We write that stuff down in a series of specs, hold requirements reviews to refine those specs, and don't start coding until those early phases are complete and all the specs are signed off. No doubt some academics can point to cases where the waterfall approach actually did work. But I can point to as many cases where it couldn't possibly have, for the simple reason that, at the outset of the project, no one was all that sure what the problem even was, much less the solution.
One of my favorite examples was a program I wrote way back in the Apollo days. It began as a simulation program, generating trajectories to the Moon. Put in an initial position/velocity state, propagate it forward in time, and see where it goes.
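That core loop, feed in an initial state and march it forward in time, can be sketched in a few lines. This is modern Python, obviously not the original code; the gravitational parameter, the crude fixed-step Euler integrator, and the sample orbit are all just illustrative:

```python
import math

# Toy sketch of trajectory propagation: a single point-mass gravity field,
# integrated with fixed-step explicit Euler. Units are km and seconds.
MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def accel(pos):
    """Two-body gravitational acceleration at position pos."""
    r = math.sqrt(sum(c * c for c in pos))
    return [-MU * c / r**3 for c in pos]

def propagate(pos, vel, dt, steps):
    """March the state forward in time; return final position and velocity."""
    for _ in range(steps):
        a = accel(pos)
        pos = [p + v * dt for p, v in zip(pos, vel)]
        vel = [v + ai * dt for v, ai in zip(vel, a)]
    return pos, vel

# A roughly circular low-Earth orbit: put in a state, see where it goes.
pos, vel = propagate([7000.0, 0.0, 0.0], [0.0, 7.546, 0.0], dt=1.0, steps=3600)
```

A real trajectory program would use a higher-order integrator and a full force model, but the shape of the computation is the same: state in, state out.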
As soon as we got that part done, we realized it wasn't enough. We knew where we wanted the trajectory to go--to the Moon. But the simulated trajectory didn't go there, and we had no idea how to change the initial conditions (ICs) so it would.
We needed an intelligent scheme to generate the proper ICs. As it happens, the geometry of a lunar trajectory involves a lot of spherical trig, and I discovered that I could codify that trig in a preprocessor that would help us generate reasonable ICs or, at the very least, tell us which ICs would not work. I wrote a preprocessor to do that, which I imaginatively named the initial conditions program.
Using this program, we generated much better trajectories. Better, but not perfect. The problem was that, after all, a real spacecraft isn't so accommodating as to follow nice, simple spheres, planes, and circles. Once we'd hit in the general vicinity of the desired target, we still had to tweak the IC program to get closer.
A problem like this is called a two-point boundary value (TPBV) problem. It can only be solved by an iterative approach called differential correction. We needed a differential correction program. So we wrote one and wrapped it around the simulation program, which wrapped itself around the initial conditions program.
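Differential correction is essentially Newton's method applied to the end state: propagate, measure the miss at the target, estimate how the miss varies with the initial condition, and correct. Here's a minimal sketch on a toy one-dimensional problem; the dynamics, the target, and the function names are all made up for illustration:

```python
def miss(v0):
    """Propagate a toy trajectory from initial speed v0 and return the
    miss distance at the target. Stand-in for a full simulation run."""
    x, v = 0.0, v0
    for _ in range(100):          # 10 seconds of flight, dt = 0.1
        x += v * 0.1
        v -= 0.005 * v * abs(v)   # quadratic drag slows the vehicle
    return x - 50.0               # target sits at x = 50

def differential_correction(v0, tol=1e-6, max_iter=50):
    """Iteratively adjust the initial condition until the miss vanishes."""
    for _ in range(max_iter):
        m = miss(v0)
        if abs(m) < tol:
            break
        # Finite-difference sensitivity of the miss to the initial condition.
        dv = 1e-6
        slope = (miss(v0 + dv) - m) / dv
        v0 -= m / slope           # Newton-style correction step
    return v0

v0 = differential_correction(5.0)
```

The real TPBV problem corrects a vector of initial conditions against a vector of target conditions, so the scalar slope becomes a sensitivity matrix, but the iteration is the same idea.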
Just as we thought we were done, along came the propulsion folks, who didn't just want a trajectory that hit the target. They wanted one that minimized fuel requirements. So we wrapped an iterative optimizer around the whole shebang.
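The resulting structure is worth seeing in miniature: an outer optimizer whose cost function calls the inner solver on every evaluation. Everything below is a hypothetical stand-in (a closed-form "inner solve," an invented fuel model), sketched only to show the nesting:

```python
def inner_solve(p):
    """Stand-in for the differential-correction layer: given an outer
    design parameter p, return the initial condition that hits the target.
    (A toy closed form so the sketch stays short.)"""
    return 50.0 / (10.0 - p)

def cost(p):
    """Hypothetical fuel model, scored on the inner layer's converged answer."""
    ic = inner_solve(p)
    return ic * ic / 100.0 + (p - 2.0) ** 2

def golden_section(lo, hi, tol=1e-6):
    """The outer optimizer, wrapped around the whole shebang."""
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if cost(c) < cost(d):
            b = d
        else:
            a = c
    return (a + b) / 2

best = golden_section(0.0, 5.0)
```

Each outer evaluation pays for a full inner solve, which is why nesting solvers this way gets expensive fast, and why it was such an achievement on 1960s hardware.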
Then the mission planning guys needed to control the lighting conditions at the arrival point and the location of the splashdown point back at Earth. The radiation guys needed a trajectory that didn't go through the worst of the Van Allen belts. The thermal guys had constraints on solar heating of the spacecraft, and so forth. So we embedded more software into the iterators and solvers--software that would handle constraints.
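One common way to fold constraints like these into an existing optimizer (a sketch of the general technique, not necessarily what that 1962 program did) is a penalty method: add a large cost for each violated constraint, so the unconstrained optimizer steers itself back into the feasible region. The weight and the toy constraint below are illustrative:

```python
def penalized_cost(p, raw_cost, constraints, weight=1000.0):
    """Augment raw_cost with quadratic penalties for violated constraints.
    Each constraint g satisfies g(p) <= 0 when the trajectory is acceptable."""
    total = raw_cost(p)
    for g in constraints:
        total += weight * max(0.0, g(p)) ** 2
    return total

# Toy usage: minimize p^2 subject to p >= 3 (written as 3 - p <= 0),
# scanning a coarse grid in place of a real optimizer.
costs = {p / 10: penalized_cost(p / 10, lambda x: x * x, [lambda x: 3.0 - x])
         for p in range(0, 61)}
best = min(costs, key=costs.get)
```

Lighting, radiation, and thermal constraints would each contribute one more term to the sum, which is what lets new requirements be bolted on without restructuring the solvers around them.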
In the end, I had a computer program that needed almost no inputs from me. I only needed to tell it what month we wanted to launch. The nested collection of optimizers and solvers did the rest. It was a very cool solution, especially for 1962.
Here's the point. In those days, computer science was in its infancy. In no way could we have anticipated, from the outset, what we ended up with. We couldn't possibly have specified that final program, from the get-go. Our understanding of the problem was evolving right alongside the solutions.
And that's why we need iterative, spiral, or agile approaches.