I often do an exercise in the Rapid Software Testing class in which I ask people to catalog things that, for them, make testing harder or slower. Their lists fit a pattern I hear over and over from testers (you can see an example of the pattern in this recent question on Stack Exchange). Typical points include:
- I’m a tester working alone with several programmers (or one of a handful of testers working with many programmers).
- I’m under enormous time pressure. Builds are coming in continuously, and we’re organized on one- or two-week development cycles.
- The product(s) I’m testing is (are) very complex.
- There are many interdependencies between modules within the product, or between products.
- I’m seeing a consistent pattern of failures specifically related to those interdependencies; the tiniest change here can have devastating impact there—or anywhere.
- I believe that I have to run a complete regression test on every build to try to detect those failures.
- I’m trying to cope by using automated checks, but the complexity makes the automation difficult, the program’s testing hooks are minimal at best, and frequent product changes make the whole relationship brittle.
- The maintenance effort for the test automation is significant, at a cost to other testing I’d like to do.
- I’m feeling overwhelmed by all this, but I’m trying to cope.
On top of that,
- The organization in which I’m working calls itself Agile.
- Other than the two-week iterations, we’re actually using at most two other practices associated with Agile development: typically, daily scrums or Kanban boards.
Oh, and for extra points,
- The builds that I’m getting are very unstable. The system falls over under the most basic of smoke tests. I have to do a lot of waiting or reconfiguring or both before I can even get started on the other stuff.
How might we consider these observations?
We could choose to interpret them as problems for testing, but we could think of them differently: as test results.
Test results don’t tell us whether something is good or bad, but they may inform a decision, or an evaluation, or more questions. People observe test results and decide whether there are problems, what the problems are, what further questions are warranted, and what decisions should be made. Doing that requires human judgement and wisdom, consideration of lots of factors, and a number of possible interpretations.
Just as for automated checks and other test results, it’s important to consider a variety of explanations and interpretations for testing meta-results—observations about testing. If we don’t do that, we risk missing important problems that threaten the quality of the testing effort, and the quality of the product, too.
As Jerry Weinberg points out in Perfect Software and Other Illusions About Testing, whatever else something might be, it’s information. If testing is, as Jerry says, gathering information with the intention of informing a decision, it seems a mistake to leave potentially valuable observations lying around on the floor.
We often run into problems when we test. But instead of thinking of them as problems for testing, we could also choose to think of them as symptoms of product or project problems—problems that testing can help to solve.
For example, when a tester feels outnumbered by programmers, or when a tester feels under time pressure, that’s a test result. The feeling often comes from the programmers generating more work and more complexity than the tester can handle without help.
Complexity, like quality, is a relationship between some person and something else. Complexity on its own isn’t necessarily a problem, but the way people react to it might be. When we observe the ways in which people react to perceived complexity and risk, we might learn a lot.
- Do we, as testers, help people to become conscious of the risks—especially the Black Swans—that typically accompany complexity?
- If people are conscious of risk, are they paying attention to it? Are they panicking over it? Or are they ignoring it and whistling past the graveyard? Or…
- Are people reacting calmly and pragmatically? Are they acknowledging and dealing with the complexity of the product?
- If they can’t make the product or the process that it models less complex, are they at least taking steps to make that product or process easier to understand?
- Might the programmers be generating or modifying code so quickly that they’re not taking the time to understand what’s really going on with it?
- If someone feels that more testers are needed, what’s behind that feeling? (I took a stab at an answer to that question a few years back.)
How might we figure out answers to those questions? One way might be to look at more of the test results and test meta-results.
- Does someone perceive testing to be difficult or time-consuming? Who?
- What’s the basis for that perception? What assumptions underlie it?
- Does the need to investigate and report bugs overwhelm the testers’ capacity to obtain good test coverage? (I wrote about that problem here.)
- Does testing consistently reveal the same patterns of failure?
- Are programmers consistently surprised by such failures and patterns?
- Do small changes in the code cause problems that are disproportionately large or hard to find?
- Do the programmers understand the product’s interdependencies clearly? Are those interdependencies necessary, or could they be eliminated?
- Are programmers taking steps to anticipate or prevent problems related to interfaces and interactions?
- If automated checks are difficult to develop and maintain, does that say something about the skill of the tester, the quality of the automation interfaces, or the scope of checks? Or about something else?
- Do unstable builds get in the way of deeper testing?
- Could we interpret “unstable builds” as a sign that the product has problems so numerous and serious that even shallow testing reveals them?
- When a “stable” build appears after a long series of unstable builds, how stable is it really?
Perhaps, with the answers to those questions, we could raise even more questions.
- What risks do those problems present for the success of the product, whether in the short term or the longer term?
- When testing consistently reveals patterns of failures and attendant risk, what does the product team do with that information?
- Are the programmers mandated to deliver code? Or are the programmers mandated to deliver code with a warrant that the code does what it should (and doesn’t do what it shouldn’t), to the best of their knowledge? Do the programmers adamantly prefer the latter mandate?
- Is someone pressuring the programmers to make schedule or scope commitments that they can’t really fulfill?
- Are the programmers and the testers empowered to push back on scope or schedule pressure when it adds to product or project risk?
- Do the business people listen to the development team’s concerns? Are they aware of the risks that testers and programmers bring to their attention? When the development team points out risks, do managers and business people deal with them congruently?
- Is the team working at a sustainable pace? Or are the product and the project being overwhelmed by complexity, interdependencies, fragility, and problems that lurk just beyond the reach of our development and testing effort?
- Is the development team really Agile, in the sense of the precepts of the Agile Manifesto? Or is “agility” being used in a cargo-cult way, with practices or artifacts papering over an incoherent project?
Testers often feel that their role is to find, investigate, and report on bugs in a running software product. That’s usually true, but it’s also a pretty limited view of what testers could test. A product can be anything that someone has produced: a program, a requirements document, a diagram, a specification, a flowchart, a prototype, a development process model, a development process, an idea. Testing can reveal information about all of those things, if we pay attention.
When seen one way, the problems that appear at the top of this article look like serious problems for testing. They may be, but they’re more than that, too. When we remember Jerry’s definition of testing as “gathering information with the intention of informing a decision”, everything that we notice or discover during testing is a test result.
Here’s a follow-up to this post. (See also this discussion for an example of looking beyond the test result for possible product and project risks.)
This post was edited in small ways, for clarity, on 2017-03-11.
Great! As always MB.
Are we crossing into the realm of the ‘people’s definition’ of quality assurance?
Michael replies: I don’t know what you mean by “people’s definition”. I do know that quality assurance depends on having the power to assure it, which means either authorship or authority. Testers can’t claim either one, but we can help those who can.
Keep writing, coz I’ll keep reading.
Thank you!
Nice… I think I’m drawn to asking a question just to read your answer!
I guess I meant the ‘standard’ definition of QA. You know, the assurance throughout the lifecycle blurb.
However, you’re correct. Also thx for pointing to another great read.
That’s a great way to think of things. A tester’s job is to provide information, not just to answer dichotomous questions. Or, as I like to put it, to boldly go where no user or programmer has gone before…
Hi,
Just to answer your question “Does someone perceive testing to be difficult or time-consuming?”: yes, everyone. I can’t think of a single team member I have managed who doesn’t think that testing is time-consuming, and they’d all rather do something else.
Michael replies: Thank you for your reply. It was too important to leave buried in comments, so I responded here.
There are probably many problems behind these test results, some of which are clearly management’s fault for allowing such a situation to develop. It seems pretty clear that the system has no architecture to speak of, with all of the tight coupling between different parts of it. It is clear that they are not really doing Agile. Clearly, management and the developers believe that quality is the responsibility of QA/testers, and that they have no responsibility to deliver software to test with a certain minimal level of quality to allow testing to take place with any reasonable level of efficiency and effectiveness. The project needs to take a few days’ “timeout period” in which to address these issues. They need a new process with new responsibilities levied on all parties, not just testers. They also probably need to spend time rearchitecting the current system, because it does not seem to be maintainable and extensible in its current state. The list could go on and on, but you get the idea. I would say that if they (especially management) can’t get their act together, it’s time to consider a new employer. There is no reason why you can’t develop a system you are proud of, and life is too short to kill yourself developing crap.
Hello Michael,
Isn’t a problem for testers or testing, especially the ones you mention*, really a problem for the team, or the project, or the program, or the product architecture in which the tester or the testing is working? I mean, isn’t it a product development problem if a tester is put in such a situation?**
Michael replies: Yes it is. And I hoped that that would be apparent from this post.
If I am the responsible manager who sets up a team that saddles a tester (or the testing) with unbalanced or unreasonable or unmanageable expectations, then isn’t the problem a team result – or a sprint or iteration result?
Michael replies: A test result is the result of a test, an experiment. When we see a bug in a product, we call that a test result, not a product result. What I’m referring to here is the result of this experiment: an attempt to build a product, or to organize a team, or to observe an iteration over some period of time.
I think casting the problem that a tester (or testing) has as a product development (or team or program or management problem) would highlight the severity of the situation much better – the guys that control the money sometimes listen more when a problem is cast in terms of the whole product, program, etc.
Just my 2 cents.
/H
*
I’m a tester working alone with several programmers (or one of a handful of testers working with many programmers). —> A team problem? A team result?
I’m under enormous time pressure. Builds are coming in continuously, and we’re organized on one- or two-week development cycles. —> Unbalanced development effort? A team result?
The product(s) I’m testing is (are) very complex. —> More effort needed to help testing or the product architecture? A team or architectural result?
There are many interdependencies between modules within the product, or between products. —> An architectural problem? An architectural result?
I’m seeing a consistent pattern of failures specifically related to those interdependencies; the tiniest change here can have devastating impact there—or anywhere. —> An architectural / product problem? A product architecture result?
I believe that I have to run a complete regression test on every build to try to detect those failures. —> An architectural problem or unbalanced development effort? A product or team result?
I’m trying to cope by using automated checks, but the complexity makes the automation difficult, the program’s testing hooks are minimal at best, and frequent product changes make the whole relationship brittle. —> An architectural problem? An architectural result?
The maintenance effort for the test automation is significant, at a cost to other testing I’d like to do. —> An architectural result or team/product prioritisation result?
I’m feeling overwhelmed by all this, but I’m trying to cope. —> A team composition result?
**Might be due to product architecture, project set-up, team set-up, team or project philosophy on product development (or any combination of these), etc.