Last time out, I was responding to a coaching client, a tester who was working in an organization fixated on test cases. Here, I’ll call her Frieda. She had some more questions about how to respond to her managers.
“What if they want another tester to do your tests if you are not available?”
“‘Your tests’, or ‘your testing’?” I asked.
“From what I’ve heard, your tests. I don’t agree with this, but I’m trying to see it from their point of view,” said Frieda.
I wonder what would happen if we asked them “What happens when you want another manager to do your managing if you are not available?” Or “What happens when you want another programmer to do your programming if you are not available?” It seems to me that the last thing they would suggest would be a set of management cases, or programming cases. So why the fixation on test cases?
Fixation is excessive, obsessive focus on something to the exclusion of all else. Fixation on test cases displaces people’s attention from other important things: understanding of how the testing maps to the mission; whether the testers have sufficient skill to understand and perform the testing; how the learning that comes from testing feeds back into more testing; whether formalization is premature or even necessary…
A big problem, as I suggested last time, is a lack of managers’ awareness of alternatives to test cases. That lack of awareness feeds into a lack of imagination, and then loops back into a lack of awareness. What’s worse is that many testers suffer from the same problem, and therefore can’t help to break the loop. Why do managers keep asking for test cases? Because testers keep providing them. Why do testers keep providing them? Because managers keep asking for them, because testers keep providing them…, and the cycle continues.
That cycle also continues because there’s an attractive, even seductive, aspect to test cases: they can make testing appear legible. Legibility, as Venkatesh Rao puts it beautifully here, “quells the anxieties evoked by apparent chaos”.
Test cases help to make the messy, complex, volatile landscape of development and testing seem legible, readable, comprehensible, quantifiable. A test case either fails (problem!) or passes (no problem!). A test case makes the tester’s behaviours seem predictable and clear, so clear that the tester could even be replaced by a machine. At the beginning of the project, we develop 782 test cases. When we’ve completed 527 of them, the testing is 67.39% done!
Many people see testing as rote, step-by-step, repetitive, mechanical keypressing to demonstrate that the product can work. That gets emphasized by the domain we’re in: one that values the writing of programs. If you think keypressing is all there is to it, it makes a certain kind of sense to write programs for a human to follow so that you can control the testing.
Those programs become “your tests”. We would call those “your checks”, where checking is the mechanistic process of applying decision rules to observations of the software.
On the other hand, if you are willing to recognize and accept testing as a complex, cognitive investigation of products, problems, and risks, your testing is a performance. No one else can do it just as you do. No one can do again just what you’ve done before. You yourself will never do it the same way twice. If managers want people to do “your testing” when you’re not available, it might be more practical and powerful to think of it as “performing their own investigation of something you’ve been investigating”.
Investigation is structured and can be guided, but good investigation can’t be scripted. That’s because in the course of a real investigation, you can’t be sure of what you’re going to find, nor of how you’ll respond to it. Checking can be algorithmic; the testing that surrounds and contains checking cannot.
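To make that distinction concrete, here is a minimal sketch of a check in Python. The product function and the expected value are hypothetical, invented purely for illustration; the point is only that the decision rule gets applied algorithmically.

```python
# A check: a decision rule applied algorithmically to an observation
# of the product. Both the function under check and the expected value
# are hypothetical stand-ins, for illustration only.

def calculate_total(prices, tax_rate):
    """Stand-in for some product behaviour we want to observe."""
    return round(sum(prices) * (1 + tax_rate), 2)

def test_total_includes_tax():
    observed = calculate_total([10.00, 20.00], tax_rate=0.10)  # observation
    assert observed == 33.00                                   # decision rule
```

Everything around that assertion (choosing what to check, noticing what the check doesn’t cover, interpreting a surprising result) is testing, and it can’t be reduced to the check itself.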
Investigation can be influenced or guided by plenty of things that are alternatives to test cases:
- checklists
- coverage outlines
- data tables
- mind maps
- risk lists
- activity heuristics
- user stories and narratives
- existing test notes or session sheets
- source code for automated checks
- session charters
- scenario playbooks
- prior test reports
- bug reports from the field
Last time out, I mentioned almost all of these as things that testers could develop while learning about the product or feature. That’s not a coincidence. Testing happens in tangled loops and spirals of learning, analysis, exploration, experimentation, discovery, and investigation, all feeding back into each other. As testing proceeds, these artifacts and—more importantly—the learning they represent can be further developed, expanded, refined, overproduced, put aside, abandoned, recovered, revisited…
Testers can use artifacts of these kinds as evidence of testing that has been done, problems that have been found, and learning that has happened. Testers can include these artifacts in test reports, too.
“But what if you’re in an environment where you have to produce test cases for auditors or regulators?”
Good question. We’ll talk about that next time.
As you rightly said: “Why do managers keep asking for test cases? Because testers keep providing them. Why do testers keep providing them? Because managers keep asking for them, because testers keep providing them… and the cycle continues.”
We were able to break this cycle by earning our managers’ trust.
That said, the trust didn’t come for free. We built it by continuously covering more areas, building and enhancing our automation framework, and finding tons of defects, new and existing (some dating back to the creation of the system). And it wasn’t just execution: we helped identify gaps in the requirements, helped our BAs with functional designs whenever needed, and suggested possible solutions to our developers for problems. All this was possible because we loved our jobs and got deep into our systems, unafraid of exploring all their areas. Having a deep understanding of our systems and a passion for our work was critical to our ability to perform.
As you rightly said, “performance” is the key, and I think true passion for our work is what makes such a performance possible.
Michael replies: The craft of software development seems to have a problem with learning from history. Even learning from last week, via review and retrospectives, seems all too rare.
Thanks (again!) for the story.