In Rapid Software Testing, James Bach, our colleagues, and I advocate an approach that puts the skill set and the mindset of the individual tester—rather than some document or tool or test case or process model—at the centre of testing. We advocate an exploratory approach to testing so that we find not only the problems that people have anticipated, but also the problems they didn’t anticipate. We challenge the value of elaborately detailed test scripts that are expensive to create and maintain. We note that inappropriate formality can drive testers into an overly focused mode that undermines their ability to spot important problems in the product. We don’t talk about “automated testing”, because testing requires study, learning, critical thinking, and qualitative evaluation that cannot be automated. We talk instead about automated checking, and we also talk about using tools (especially inexpensive, lightweight tools) to extend our human capabilities.
We advocate stripping documentation back to the leanest possible form that still completely supports and fulfills the mission of testing. We advocate that people take measurement seriously by studying measurement theory and statistics, and by resisting or rejecting metrics that are based on invalid models. We’re here to help our development teams and their managers, not mislead them.
All this appeals to thinking, curious, engaged testers who are serious about helping our clients to identify, evaluate, and stamp out risk. But every now and then, someone objects. “Michael, I’d love to adopt all this stuff. Really I would. But my bosses would never let me apply it. We have to do stuff like counting test cases and defect escape ratios, because in the real world…” And then he doesn’t finish the sentence.
I have at least one explanation for why the sentence dangles: it’s because the speaker is going through cognitive dissonance. The speaker realizes that what he is referring to is not his sense of the real world, but a fantasy world that some people try to construct, often with the goal of avoiding the panic they feel when they confront complex, messy, unstable, human reality.
Maybe my friend shares my view of the world.
That’s what I’m hoping. My world is one in which I have to find important problems quickly for my clients, without wasting my client’s time or money. In my world, I have to develop an understanding of what I’m testing, and I have to do it quickly.
I’ve learned that specifications are rarely reliable, consistent, or up to date, and that the development of the product has often raced ahead of people’s capacity to document it. In my world, I learn most rapidly about products and tools by interacting with them.
I might learn something about the product by reading about it, but I learn more about the product by talking with people about it, asking lots of questions about it, sketching maps of it, trying to describe it. I learn even more about it by testing it, and by describing what I’ve found and how I’ve tested. I don’t structure my testing around test cases, since my focus is on discovering important problems in the product—problems that may not be reflected in scripted procedures that are expensive to prepare and maintain. At best, test cases and documents might help, but in my world, I find problems mostly because of what I think and what I do with the product.
In my world, it would be fantasy to think that a process model or a document—rather than the tester—is central to excellent testing (just as only in my fantasy world could a document—rather than the manager—be central to excellent management). In my world, people are at the centre of the things that people do.
In my version of a fantasy world, one could believe that conservative confirmatory testing is enough to show that the product fulfills its goals without significant problems for the end user. In my world, we must explore and investigate to discover unanticipated problems and risks.
If I wanted to do fake testing, I would foster the appearance of productivity by creating elaborate, highly polished documentation that doesn’t help us to do work more effectively and more efficiently. But I don’t want to do that. In my world, doing excellent testing takes precedence over writing about how we intend—or pretend—to do testing.
In my version of a dystopic fantasy world, it would be okay to accept numbers without question or challenge, even if the numbers had little or no relationship to what was supposedly being measured. In my world, quantitative models allow us to see some things more clearly while concealing other things that might be important. So in my world, it’s a good idea to evaluate our numbers and our models—and our feelings about them—critically to reduce the chance that we’ll mislead ourselves and others.
I could fantasize about a world in which it would be obvious that numbers should drive decisions. In what looks like the real world to me, it’s safer to use numbers to help question our feelings and our reasoning.
Are you studying testing and building your skills? Are you learning about and using approaches that qualitative researchers use to construct and describe their work? If you’re using statistics to describe the product or the project, are you considering the validity of your constructs—and threats to validity?
Are you considering how you know what you know? Are you building and refining a story of the product, the work you’re doing, and the quality of that work? If you’re creating test tools, are you studying programming and using the approaches that expert programmers use? Are you considering the cost and value of your activities and the risks that might affect them? Are you looking only at the functional aspects of the product, or are you learning about how people actually use the product to get real work done?
Real-world people doing real-world jobs—research scientists, statisticians, journalists, philosophers, programmers, managers, subject matter experts—do these things. I believe I can learn to do them, and I’m betting you could learn them too. It’s a big job to learn all this stuff, but learning—for ourselves and for others—is the business that I think we testers are in, in my real world. Really.
Is your testing bringing people closer to what you would consider a real understanding of the real product—especially real problems and real risks in the product—so that they can make informed decisions about it? Or is it helping people to sustain ideas that you would consider fantasies?