
Alternatives to “Manual Testing”: Experiential, Interactive, Exploratory

This is an extension of a long Twitter thread from a while back that made its way to LinkedIn, but not to my blog.

Update, 2022/12/07: James Bach and I have recently changed “attended” to “interactive”, to emphasize the tester’s direct interaction with the product that happens during so-called “manual” testing. “Interactive” is more evocative of what’s going on than “attended” is, but it’s also true that the tester’s interaction might be limited to watching what’s going on in the moment. I’ve changed this post (and its title) to reflect that.

Testers who take testing seriously have a problem with getting people to understand testing work.

The problem is a special case of the insider/outsider problem that surrounds any aspect of human experience: most of the time, those on the outside of a social group—a community; a culture; a group of people with certain expertise; a country; a fan club—don’t understand the insider’s perspective. The insiders don’t understand the outsiders’ perspective either.

We don’t know what we don’t know. That should be obvious, of course, but when we don’t know something, we have no idea of how little we comprehend it, and our experience and our lack of experience can lead us astray. “Driving is easy! You just put the car in gear and off you go!” That probably works really well in whatever your current context happens to be. Now I invite you to get behind the wheel in Bangalore.

How does this relate to testing? Here’s how:

No one ever sits in front of a computer and accidentally compiles a working program, so people know—intuitively and correctly—that programming must be hard.

By contrast, almost anyone can sit in front of a computer and stumble over bugs, so people believe—intuitively and incorrectly—that testing must be easy!

In our world of software development, there is a kind of fantasy that if everyone is of good will, and if everyone tries really, really hard, then everything will turn out all right. If we believe that fantasy, we don’t need to look for deep, hidden, rare, subtle, intermittent, emergent problems; people’s virtue will magically make them impossible. That is, to put it mildly, a very optimistic approach to risk. It’s okay for products that don’t matter much. But if our products matter, it behooves us to look for problems. And to find deep problems intentionally, it helps a lot to have skilled testers.

Yet the role of the tester is not always welcome. The trouble is that to produce a novel, complex product, you need an enormous amount of optimism; a can-do attitude. But as my friend Fiona Charles once said to me—paraphrasing Tom DeMarco and Tim Lister—“in a can-do environment, risk management is criminalized.” I’d go further: in a can-do environment, even risk acknowledgement is criminalized.

In Waltzing With Bears, DeMarco and Lister say “The direct result of can-do is to put a damper on any kind of analysis that suggests ‘can’t-do’…When you put a structure of risk management in place, you authorize people to think negatively, at least part of the time. Companies that do this understand that negative thinking is the only way to avoid being blindsided by risk as the project proceeds.”

Risk denial plays out in a terrific documentary, General Magic, about a development shop of the same name. In the early 1990s(!!), General Magic was working on a device that — in terms of capability, design, and ambition — was virtually indistinguishable from the iPhone that was released about 15 years later.

The documentary is well worth watching. In one segment, Marc Porat, the project’s leader, talks in retrospect about why General Magic flamed out without ever getting anywhere near the launchpad. He says, “There was a fearlessness and a sense of correctness; no questioning of ‘Could I be wrong?’. None. … that’s what you need to break out of Earth’s gravity. You need an enormous amount of momentum … that comes from suppressing introspection about the possibility of failure.”

That line of thinking persists all over software development, to this day. As a craft, software development systematically resists thinking critically about problems and risk. Alas for testers, that’s the domain that we inhabit.

Developers have great skill, expertise, and tacit knowledge in linking the world of people and the world of machines. What they tend not to have—and almost everyone is like this, not just programmers—is an inclination to find problems. The developer is interested in making people’s troubles go away. Testers have the socially challenging job of finding and reporting on trouble wherever they look. Unlike anyone else on the project, testers focus on revealing problems that are unsolved, or problems introduced by our proposed solution. That’s a focus which the builders, by nature, tend to resist.

Resistance to thinking about problems plays out in many unhelpful and false ideas. Some people believe that the only kind of bug is a coding error. Some think that the only thing that matters is meeting the builders’ intentions for the product. Some are sure that we can find all the important problems in a product by writing mechanistic checks of the build. Those ideas reflect the natural biases of the builder—the optimist. Those ideas make it possible to imagine that testing can be automated.

The false and unhelpful idea that testing can be automated prompts the division of testing into “manual testing” and “automated testing”.

Listen: no other aspect of software development (or indeed of any human social, cognitive, intellectual, critical, analytical, or investigative work) is divided that way. There are no “manual programmers”. There is no “automated research”. Managers don’t manage projects manually, and there is no “automated management”. Doctors may use very powerful and sophisticated tools, but there are no “automated doctors”, nor are there “manual doctors”, and no doctor would accept for one minute being categorized that way.

Testing cannot be automated. Period. Certain tasks within and around testing can benefit a lot from tools, but having machinery punch virtual keys and compare product output to specified output is no more “automated testing” than spell-checking is “automated editing”. Enough of all that, please.

It’s unhelpful to lump all non-mechanistic tasks in testing together under “manual testing”. Doing so is like referring to craft, social, cultural, aesthetic, chemical, nutritional, or economic aspects of cooking as “manual” cooking. No one who provides food with care and concern for human beings—or even for animals—would suggest that all that matters in cooking is the food processors and the microwave ovens and the blenders. Please.

Sharper Terms for “Manual Testing”

If you care about understanding the status of your product, you’ll probably care about testing it. You’ll want testing to find out if the product you’ve got is the product you want. If you care about that, you need to understand some important things about testing.

If you want to understand important things about testing, you’ll want to consider some things that commonly get swept under a carpet with the words “manual testing” repeatedly printed on it. Considering those things might require naming some aspects of testing that you haven’t named before.

Experiential Testing

Think about experiential testing, in which the tester’s encounter with the product, and the actions that the tester performs, are indistinguishable from those of the contemplated user. After all, a product is not just its code, and not just virtual objects on a screen. A software product is the experience that we provide for people, as those people try to accomplish a task, fulfill a desire, enjoy a game, make money, converse with people, obtain a mortgage, learn new things, get out of prison…

Contrast experiential testing with instrumented testing. Instrumented testing is testing wherein some medium (some tool, technology, or mechanism) gets in between the tester and the naturalistic encounter with and experience of the product. Instrumentation alters, or accelerates, or reframes, or distorts; in some ways helpfully, in other ways less so. We must remain aware of the effects, both desirable and undesirable, that instrumentation brings to our testing.

Interactive Testing

Are you saying “manual testing”? You might be referring to the interactive, attended or engaged aspects of testing. In that mode, the tester is directly and immediately observing and analyzing aspects of the product and its behaviour in the moment that the behaviour happens. The tester may be controlling the product, either directly through the user interface, or indirectly through some kind of tool.

Having noticed that there are times when you observe and interact with the product in the moment, you might want to contrast that with testing work that doesn’t require your immediate presence or control. Some tasks can be handled by the algorithmic, mechanistic things that machines can do unattended. Some of these are things that some people label “automated testing”—except that testing cannot be automated. To make something a test requires the design before the automated behaviour, and the interpretation afterwards. Those parts of the test, which depend upon human social competence to make a judgement, cannot be automated.
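
As a concrete sketch of that distinction, here is a minimal, hypothetical example in Python (the product function, the data, and the expected value are all invented for illustration). The machine performs only the comparison; choosing what to check beforehand, and judging what a green or red result means afterwards, remain human work.

# A hypothetical product function and a single unattended check against it.
# The machine performs only the comparison; deciding what to check, and
# interpreting a pass or a fail, happen outside this code.

def checkout_total(cart):
    """Stand-in for the product under test (hypothetical)."""
    return sum(item["price"] * item["qty"] for item in cart)

def test_checkout_total_matches_specified_output():
    cart = [{"price": 250, "qty": 2}, {"price": 100, "qty": 1}]
    assert checkout_total(cart) == 600   # the only part the machine performs

Run under a tool like pytest, the check above executes in milliseconds; everything that made it a test happened before and after.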

Exploratory Testing

Did you say “manual”? You might be referring to exploratory work, which is interestingly distinct from experiential work as described above. Exploratory—in the Rapid Software Testing namespace at least—refers to agency; who or what is in charge of making choices about the testing, from moment to moment. There’s much more to read about that.

Wait… how are experiential and exploratory testing not the same?

You could be exploring—making unscripted choices—in a way entirely unlike the user’s normal encounter with the product. You could be generating mounds of data and interacting with the product to stress it out; or you could be exploring while attempting to starve the product of resources. You could be performing an action and then analyzing the data produced by the product to find problems, at each moment remaining in charge of your choices, without control by a formal, procedural script.

That is, you could be exploring while encountering the product to investigate it. That’s a great thing, but it’s encountering the product like a tester, rather than like a user. It might be a really good idea to be aware of the differences between those two encounters, and take advantage of them, and not mix those up.

You could be doing experiential testing in a highly scripted, much-less-exploratory kind of way. For instance, you could be following a user-targeted tutorial and walking through each of its steps to observe inconsistencies between the tutorial and the product’s behaviour. To an outsider, your encounter would look pretty much like a user’s encounter; the outsider would see you interacting with the product in a naturalistic way, for the most part—except for the moments where you’re recording observations, bugs, issues, risks, and test ideas. But most observers outside of testing’s form of life won’t notice those moments.

Of course, there’s overlap between those two kinds of encounters. A key difference is that the tester, upon encountering a problem, will investigate and report it. A user is much less likely to do so. (I noticed this phenomenon while trying to enter a link in LinkedIn’s Articles editor; the “apply” button isn’t visible, and hides off the right-hand side of the popup. I found this while interacting with LinkedIn experientially. I’d like to hope that I would have found that problem when testing intentionally, in an exploratory way, too.)

Transformative Testing

Are you saying “manual”? You might be referring to testing activity that’s transformative, wherein something about performing the test changes the tester in some sense. Transformative testing is about learning, fostering epiphanies, identifying risk, or triggering test design ideas. In early stages of testing, especially, it might be very important to relax or suspend our focus on finding problems, and pay more attention to building our mental models of the product. That paves the way for deeper testing later on.

Contrast that mode of testing with testing in a highly procedural way: transactional testing that amounts to rote, routine, box-checking. Most transactional things can be done mechanically. Machines aren’t really affected by what happens, and they don’t learn in any meaningful sense. Humans do, and we need to learn the product deeply if we want to find the deep bugs.

There are doubtless other dimensions of “manual testing”. For a while, we considered “speculative testing” (testing that asks “what if?”) as something that people might mean when they spoke of “manual testing”. We contrasted that with “demonstrative” testing—but then we reckoned that demonstration is not really a test at all. Not intended to be, at least. For an action to be testing, we would hold that it must be mostly speculative by nature.

“Manual Testing” is… Testing

Here’s the main thing: if you’re a “manual tester”, you’re doing testing work all the time. If you’re an “automated tester”, or an “automation engineer”, you are also doing enormous amounts of “manual testing” all the time — unless you’re simply writing automated checks to some specification, running them, and completely ignoring the results.

Testers are fed bullshit on a regular basis. One serving of the bullshit comes on a plate that says that “automated” testing is somehow “better” than “manual” testing. But you can’t claim that you’re doing testing unless you are experiencing, exploring, and interacting with a product via experiments. (The “product” here might be running software, or some component of it. But it might also be a mockup, a design, a drawing, a description of an idea; those things can be tested by experiencing, exploring, and experimenting with them in your mind.) Tools can provide immense support for that kind of work, but to characterize testing as “automated” is to ignore everything that doesn’t involve the machinery.

“Manual testing”, goes the claim, is “slow and error prone”—as though people don’t make mistakes when they apply automation to checks. They do, and the automation enables those errors at a much larger and faster scale.

Sure, automated checks run quickly; they have low execution cost. But they can have enormous development cost; enormous maintenance cost; very high interpretation cost (figuring out what went wrong can take a lot of work); high transfer cost (explaining them to non-authors).

There’s another cost, related to these others, that is very well hidden and rarely reckoned: we might call it analysis cost, or review cost. A sufficiently large suite of automated checks is impenetrable; it can’t be comprehended without very costly review. Do those checks that are always running green even do anything? Who knows… unless you pop into a “manual” process and evaluate what you’re seeing.

Checks that run red get frequent attention, but a lot of them are, you know, “flaky”; they’re running red when they should be running green. Of the thousands that are running green, how many should actually be running red? It’s cognitively costly to know that—so people routinely ignore it.
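
To illustrate how a check can run green forever without telling us much, here is a hedged, hypothetical sketch (not drawn from any real suite): the assertion below can never fail, so the check reports success no matter how unreasonable the product’s behaviour becomes. Only a person reviewing the check would notice.

# A hypothetical check that always runs green without checking anything useful.

def apply_discount(price, rate):
    """Stand-in for the product under test; note there is no validation of rate."""
    return price - price * rate

def test_discount_returns_a_number():
    result = apply_discount(100, 1.5)   # a 150% "discount" yields -50.0
    assert isinstance(result, float)    # true no matter how absurd the result is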

And all of these costs represent another hidden cost: opportunity cost; the cost of doing something such that it prevents us from doing other equally or more valuable things. That cost is immense, because it takes so much time and effort to automate behaviour in GUIs when we could be interacting with the damned product directly.

And something even weirder is going on: instead of teaching non-technical testers to code and get naturalistic experience with APIs, we put such testers in front of GUI-ish front-ends to APIs. So we have skilled coders trying to automate GUIs, and at the same time, we have non-programming testers using Cypress to de-experientialize API use! The tester’s experience of an API through Cypress is enormously different from the programmer’s experience of trying to use the API.
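
For contrast, here is a small sketch of the kind of direct, naturalistic API interaction described above, using Python’s requests library against a hypothetical endpoint. The tester composes the request, sends it, and sees exactly what a client programmer would see, with no tool mediating the encounter.

# Direct, unmediated interaction with a (hypothetical) HTTP API.
import requests

response = requests.get(
    "https://api.example.com/orders/12345",    # hypothetical endpoint
    headers={"Accept": "application/json"},
    timeout=5,
)
print(response.status_code)                    # the raw status, not a tool's summary of it
print(response.headers.get("Content-Type"))
print(response.json())                         # raises if the body isn't JSON; that, too, is informative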

And none of these testers are encouraged to analyse the cost and value of the approaches they’re taking. Technochauvinism (great word; read Meredith Broussard’s book Artificial Unintelligence) reinforces the illusion that testing software is a routine, factory-like, mechanistic task, just waiting to be programmed away. This is a falsehood. Testing can benefit from tools, but testing cannot be mechanized.

Testing has to be focused on finding problems that hurt people or make them unhappy. Why? Because optimists who are building a product tend to be unaware of problems, and those problems can lurk in the product. When the builders are aware of those problems, they can address them. In doing so, they make themselves look good, make money, and help people have better lives.

Testing must be seen as a social (and socially challenging), cognitive, risk-focused, critical (in several senses), analytical, investigative, skilled, technical, exploratory, experiential, experimental, scientific, revelatory, honourable craft. Not “manual” or “automated”. Let us urge that misleading distinction to take a long vacation on a deserted island until it dies of neglect.

Thanks to Alexander Simic for his sharp-eyed and perceptive review of this post.

5 replies to “Alternatives to “Manual Testing”: Experiential, Interactive, Exploratory”

  1. This is a very interesting post, and it’s generated a lot of ideas for my next testing retrospective with my team, so thank you.

    Michael replies: You’re welcome.

    I was a bit surprised to read “[…] how are experiential and exploratory testing not the same?” and that exploratory testing and experiential testing simply “overlap”.

    As you’ve written before, all testing *is* exploratory testing. Testing means exploration.

    That’s true; testing is fundamentally exploratory. But as I’ve also written before, all testing is to some degree scripted by your overall mission, your specific charter, your experiences, or your biases, and so forth. Testing can be experiential (such that the tester is performing naturalistic actions) but those actions might be strongly influenced or guided by explicit and specific test cases.

    Even the user conducting experiential testing is exploring the product. Whether it’s their first time encountering it and thus wading through the swamp of the unknown, or whether they are performing the same activity on the same machine that they’ve done every day for the last 10 years (thus exploring what happens upon doing the same thing ten million times in a row).

    Also true from one perspective; and yet doing the same thing over and over again by rote is somewhat exploratory (from the exact perspective you cite), but not very exploratory at all from most other perspectives.

    I wonder, then, if experiential testing activities are just another subset of exploratory testing, and a more accurate differential may be ‘Conscious’ and ‘Unconscious’ testing.

    That’s not how we’d describe it, but you’re welcome to work out your own notions of this. “Conscious” and “unconscious” wouldn’t work very well for us, since if it’s actually testing, it had better be conscious.

    The overlap you mention can be framed as a state transition between these two types of testing.

    Any user has the capacity to spot a problem during unconscious testing, and transition into consciously testing that problem by investigating and reporting it.

    The difference is that a user may spend a short time investigating poorly, perhaps simply retrying what they did and seeing if it happens again, and reporting it to their friend or colleague who happens to be standing next to them. “Isn’t this weird?” they say, before the problem vanishes into the ether and is never seen or spoken of again.

    A serious and responsible tester would make a contextual judgement on how much investigation to perform and how best to report their findings.

    Bug investigation is more exploratory (the tester is in control of his or her actions) and almost certainly less experiential and more instrumented (the tester is behaving less like a regular user, and more likely using tools that mediate the experience).

    Very tasty food for thought; I’m looking forward to your next post. Thanks again.

    I could make the same statement about your reply. I’m glad you’re reflecting on this, and sharing your ideas. Thank you.

  2. Very helpful blog it is, you also make it amazing and an easy-to-read blog for the readers by adding proper information. It really helped me a lot in the field of Manual vs. Automation Testing

    Michael replies: I think you might want to look at the links below, and reflect on what’s on the page that you link to in your comment.

    http://www.satisfice.com/blog/archives/856
    https://www.developsense.com/blog/2017/11/the-end-of-manual-testing/
    https://www.developsense.com/blog/2021/08/alternatives-to-manual-testing-experiential-attended-exploratory/

