
We Have to Run the Regression Tests!

This is a lightly edited version of a post I made on LinkedIn, which in turn was a follow-up to the previous post.

“We have to run a full regression test suite on every build!”

First: you don’t have to do anything. There is no law of nature, nor any human regulation, that says you must repeat any particular test. You choose to do things. (You don’t have to automate. You can choose to use tools in all kinds of ways… and that’s fine.)

In the Rapid Software Testing namespace, we say that regression testing is any testing motivated by change to a previously tested product.

When a product changes, risk is not evenly distributed. A brief pause for some analysis of where the changes might affect things can help focus your testing on plausible risk. You might choose to talk to developers, designers, managers, or customer service people. You might want to pull out an architecture diagram. You might choose to run a profiling tool, to monitor network traffic or interprocess communication, or to examine logs for clues. You might want to examine the git log, or look at the source code.
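As one illustration of that kind of change analysis — a minimal sketch, not a prescription — here is a small Python helper that asks git which files have changed since the last tested build, as a starting point for deciding where regression risk might be concentrated. The repository path and the reference name "last-tested-build" are hypothetical placeholders; adapt them to however your team labels tested builds.

    import subprocess

    def changed_files(repo_path, since_ref):
        """List files changed since a previously tested build (a tag or commit).

        repo_path and since_ref are hypothetical placeholders; substitute
        your own repository location and build-labelling scheme.
        """
        result = subprocess.run(
            ["git", "-C", repo_path, "diff", "--name-only", f"{since_ref}..HEAD"],
            capture_output=True, text=True, check=True,
        )
        return [line for line in result.stdout.splitlines() if line]

    # Print the areas the change actually touched, then use that list to focus
    # conversations, tool-assisted checks, and testing on plausible risk.
    for path in changed_files(".", "last-tested-build"):
        print(path)

A list like this doesn’t tell you where the risk is; it gives you something concrete to reason about and to discuss with the people who made the change.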

Then test — that is, challenge — the idea that a change had only the desired effects in the area where it was made, and didn’t introduce undesirable effects. Test things that might be connected to or influenced by that change. It might also make sense to do some testing in places where you believe risk is low — to reveal hidden risks.

Of course, you can also test a bit before you perform the analysis mentioned above. If you don’t have to do anything, you don’t have to do anything in any particular order, either.

If you want to find problems that matter, though, diversifying your techniques, tools, and tactics is essential. Rote repetition can limit you badly.

Obsession with looking for problems in exactly the same way as you’ve already looked displaces two things:

1) Your ability to find problems that were there all along, but that your testing has missed all along; and

2) Your ability to find new problems introduced by a change that your existing set of tests won’t cover.

Perform some tests that you’ve performed before, but don’t fixate on them. Consider the gap between what you thought you knew about the system before the change, and what you need to know about the product as it is now. It’s the latter that’s the most important bit, and your old tests might not be up to the task.

Need help convincing management of this? Let me know.
