I often do an exercise in the Rapid Software Testing class in which I ask people to catalog things that, for them, make testing harder or slower. Their lists fit a pattern I hear over and over from testers (you can see an example of the pattern in this recent question on Stack Exchange). Typical points include:
- I’m a tester working alone with several programmers (or one of a handful of testers working with many programmers).
- I’m under enormous time pressure. Builds are coming in continuously, and we’re organized on one- or two-week development cycles.
- The product(s) I’m testing is (are) very complex.
- There are many interdependencies between modules within the product, or between products.
- I’m seeing a consistent pattern of failures specifically related to those interdependencies; the tiniest change here can have devastating impact there—or anywhere.
- I believe that I have to run a complete regression test on every build to try to detect those failures.
- I’m trying to cope by using automated checks, but the complexity makes the automation difficult, the program’s testing hooks are minimal at best, and frequent product changes make the whole relationship brittle.
- The maintenance effort for the test automation is significant, at a cost to other testing I’d like to do.
- I’m feeling overwhelmed by all this, but I’m trying to cope.
On top of that,
- The organization in which I’m working calls itself Agile.
- Other than the two-week iterations, we’re actually using at most two other practices associated with Agile development (typically daily scrums or Kanban boards).
Oh, and for extra points,
- The builds that I’m getting are very unstable. The system falls over under the most basic of smoke tests. I have to do a lot of waiting or reconfiguring or both before I can even get started on the other stuff.
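As a hypothetical sketch of the automation brittleness in that list (all names and values here are invented for illustration, not taken from any real project): a check welded to the product’s display output fails on any cosmetic change, while a check that uses even a minimal testing hook survives them.

```python
def render_total(items):
    """Pretend product code: renders an order total for display.
    A trivial wording change (say, "Total" -> "Order total") alters this string."""
    total = sum(price for _, price in items)
    return f"Total: ${total:.2f}"

def compute_total(items):
    """Pretend testing hook: exposes the underlying value directly."""
    return sum(price for _, price in items)

order = [("widget", 19.99), ("gadget", 5.01)]

# Brittle check: breaks whenever wording, currency symbol, or spacing changes.
brittle_ok = render_total(order) == "Total: $25.00"

# More robust check: tied to the value the product actually computes,
# so cosmetic changes to the display don't break it.
robust_ok = abs(compute_total(order) - 25.00) < 0.005

print(brittle_ok, robust_ok)
```

Both checks pass today, but only the second survives a label tweak. When hooks like `compute_total` are minimal at best, testers are pushed toward the brittle kind of check, and every cosmetic product change becomes automation maintenance work.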
How might we consider these observations?
We could choose to interpret them as problems for testing, but we could think of them differently: as test results.
Test results don’t tell us whether something is good or bad, but they may inform a decision, or an evaluation, or more questions. People observe test results and decide whether there are problems, what the problems are, what further questions are warranted, and what decisions should be made. Doing that requires human judgement and wisdom, consideration of lots of factors, and a number of possible interpretations.
Just as for automated checks and other test results, it’s important to consider a variety of explanations and interpretations for testing meta-results—observations about testing. If we don’t do that, we risk missing important problems that threaten the quality of the testing effort, and the quality of the product, too.
As Jerry Weinberg points out in Perfect Software and Other Illusions About Testing, whatever else something might be, it’s information. If testing is, as Jerry says, gathering information with the intention of informing a decision, it seems a mistake to leave potentially valuable observations lying around on the floor.
We often run into problems when we test. But instead of thinking of them as problems for testing, we could also choose to think of them as symptoms of product or project problems—problems that testing can help to solve.
For example, when a tester feels outnumbered by programmers, or when a tester feels under time pressure, that’s a test result. The feeling often comes from the programmers generating more work and more complexity than the tester can handle without help.
Complexity, like quality, is a relationship between some person and something else. Complexity on its own isn’t necessarily a problem, but the way people react to it might be. When we observe the ways in which people react to perceived complexity and risk, we might learn a lot.
- Do we, as testers, help people to become conscious of the risks—especially the Black Swans—that typically accompany complexity?
- If people are conscious of risk, are they paying attention to it? Are they panicking over it? Or are they ignoring it and whistling past the graveyard? Or…
- Are people reacting calmly and pragmatically? Are they acknowledging and dealing with the complexity of the product?
- If they can’t make the product or the process that it models less complex, are they at least taking steps to make that product or process easier to understand?
- Might the programmers be generating or modifying code so quickly that they’re not taking the time to understand what’s really going on with it?
- If someone feels that more testers are needed, what’s behind that feeling? (I took a stab at an answer to that question a few years back.)
How might we figure out answers to those questions? One way might be to look at more of the test results and test meta-results.
- Does someone perceive testing to be difficult or time-consuming? Who?
- What’s the basis for that perception? What assumptions underlie it?
- Does the need to investigate and report bugs overwhelm the testers’ capacity to obtain good test coverage? (I wrote about that problem here.)
- Does testing reveal consistent patterns of failure?
- Are programmers consistently surprised by such failures and patterns?
- Do small changes in the code cause problems that are disproportionately large or hard to find?
- Do the programmers understand the product’s interdependencies clearly? Are those interdependencies necessary, or could they be eliminated?
- Are programmers taking steps to anticipate or prevent problems related to interfaces and interactions?
- If automated checks are difficult to develop and maintain, does that say something about the skill of the tester, the quality of the automation interfaces, or the scope of checks? Or about something else?
- Do unstable builds get in the way of deeper testing?
- Could we interpret “unstable builds” as a sign that the product has problems so numerous and serious that even shallow testing reveals them?
- When a “stable” build appears after a long series of unstable builds, how stable is it really?
Perhaps, with the answers to those questions, we could raise even more questions.
- What risks do those problems present for the success of the product, whether in the short term or the longer term?
- When testing consistently reveals patterns of failures and attendant risk, what does the product team do with that information?
- Are the programmers mandated to deliver code? Or are the programmers mandated to deliver code with a warrant that the code does what it should (and doesn’t do what it shouldn’t), to the best of their knowledge? Do the programmers adamantly prefer the latter mandate?
- Is someone pressuring the programmers to make schedule or scope commitments that they can’t really fulfill?
- Are the programmers and the testers empowered to push back on scope or schedule pressure when it adds to product or project risk?
- Do the business people listen to the development team’s concerns? Are they aware of the risks that testers and programmers bring to their attention? When the development team points out risks, do managers and business people deal with them congruently?
- Is the team working at a sustainable pace? Or are the product and the project being overwhelmed by complexity, interdependencies, fragility, and problems that lurk just beyond the reach of our development and testing effort?
- Is the development team really Agile, in the sense of the precepts of the Agile Manifesto? Or is “agility” being used in a cargo-cult way, with practices or artifacts deployed to paper over an incoherent project?
Testers often feel that their role is to find, investigate, and report on bugs in a running software product. That’s usually true, but it’s also a pretty limited view of what testers could test. A product can be anything that someone has produced: a program, a requirements document, a diagram, a specification, a flowchart, a prototype, a development process model, a development process, an idea. Testing can reveal information about all of those things, if we pay attention.
When seen one way, the problems that appear at the top of this article look like serious problems for testing. They may be, but they’re more than that too. When we remember Jerry’s definition of testing as “gathering information with the intention of informing a decision”, then everything that we notice or discover during testing is a test result.
This post was edited in small ways, for clarity, on 2017-03-11.