I used to speak at conferences. For the HUSTEF 2020 conference, I had intended to present a talk called “What’s Wrong with Manual Testing?” In the age of COVID, we’ve all had to turn into movie makers, so instead of delivering a speech, I delivered a video.
After I had proposed the talk, and it was accepted, I went through a lot of reflection on what the big deal really was. People have been talking about “manual testing” and “automated testing” for years. What’s the problem? What’s the point?
I mulled this over, and the video contains some explanations of why I think it’s an important issue. I got some people — a talented musician, an important sociologist, a perceptive journalist and systems thinker, a respected editor and poet, and some testers — to help me out.
In the video, I offer some positive alternatives to “manual testing” that are much less ambiguous, more precise, and more descriptive of what people might be talking about:
- experiential testing (which we could contrast with “instrumented testing”);
- exploratory testing (which we have already contrasted with “scripted testing”);
- interactive testing (which we could contrast with “unattended testing”) (Update, 2022/12/5: James Bach and I called this “unattended” for a while, but that runs afoul of describing something in terms of what it isn’t, rather than in terms of what it is. We believe “interactive” is sharper here.)
There are some others. A more thorough discussion is available here.
In the video, I also proposed how it came to be that important parts of testing — the rich cognitive, intellectual, social process of evaluating a product by learning about it through experiencing, exploring, and experimenting — came to be diminished and pushed aside by an obsessive, compulsive fascination with automated checking.
But there’s a much bigger problem that I didn’t discuss in the video.
You see, a few days before I had to deliver the video, I was visiting an online testing forum. I read a question from a test manager who wanted to interview and qualify “manual testers”. I wanted to provide a helpful reply, and as part of that, I asked him what he meant by “manual testing”. (Some people don’t like that question because they think I’m being fussy. What really makes me fussy is not being clear on what someone is talking about.)
His reply was that he wanted to identify candidates who don’t use “automated testing” as part of their tool set, but who were to be given the job of “creating and executing manually scripted human-language tests and performing all the critical thinking skills that both approaches require”.
Let’s unpack that. I suspect “candidates who don’t use ‘automated testing’ as part of their tool set” means “testers who don’t write automated output checks”. I suspect that he wasn’t thinking precisely or explicitly in terms of experiential, exploratory, or interactive testing; when people don’t have language to express those things, their ideas about “manual” tend to be vague and hand-wavey. His description suggests a notion of “manual testing” based on procedurally structured, formally scripted activities, with the scripts rendered in something other than program code.
Never mind the fact that testing can’t be automated. Never mind that creating a test script is not what testing is all about. Never mind that no one even considers the idea of scripting programmers or management. Never mind all that. Wait for what comes next.
Then he said that “the position does not pay as much as the positions that primarily target automated test creation and execution, but it does require deeper engagement with product owners”. He went on to say that he didn’t want to get into the debate about “manual and automated testing”; he said that he didn’t like “holy wars”.
And there we have it, ladies and gentlemen; that’s the problem. Money talks. And here, the money—the fact that these testers are going to be paid less—is implicitly suggesting that talking to machines is more valuable, more important, than deeper engagement with people.
The money is further suggesting that skills stereotypically associated with men (who are over-represented in the ranks of programmers) are worth more than skills stereotypically associated with women (who are not only under-represented but also underpaid and also pushed out of the ranks of programmers by chauvinism and technochauvinism). (Notice, by the way, that I said “stereotypically” and not “justifiably”; there’s no justification available for this.)
Of course, money doesn’t really talk. It’s not the money that’s doing the talking. It’s our society, and people within it, who are saying these things. As so often happens, people are using money to say things they dare not speak out loud.
This isn’t a “holy war” about some abstract, obscure point of religious dogma. This is a class struggle that affects very real people and their very real salaries. It’s a struggle about what we value. It’s a humanist struggle. And the test manager’s statement shows that the struggle is very, very real.