People often make a distinction between “automated” and “exploratory” testing. This is like the distinction between “red” cars and “family” cars. That is, “red” (colour) and “family” (some notion of purpose) are in orthogonal categories. A car can be one colour or another irrespective of its purpose, and a car can be used for a particular purpose irrespective of its colour. Testing, whether exploratory or not, can make heavy or light use of tools. Testing, whether it entails the use of tools or not, can be highly scripted or highly exploratory.
“Exploratory” testing is not “manual” testing. “Manual” isn’t a useful word for describing software testing in any case. When you’re testing, it’s not the hands that do the testing, any more than when you’re riding a pedal bike it’s the feet that do the bike-riding. The brain does the testing; the hands, at best, provide one means of input and interaction with the thing we’re testing. And not even “manual” testing is manual in the sense of being tool- or machinery-free. You do use a computer when you’re testing, don’t you?
(Well, mostly, but not always. If you’re reviewing requirements, specifications, code, or documentation, you might be looking at paper, but you’re still testing. A thought experiment or a conversation about a product is a kind of test; you’re questioning something in order to evaluate it, pitting ideas against other ideas in an unscripted way. While you’re reviewing, are you using a pen to annotate the paper you’re reading? A notepad to record your observations? Sticky tabs to mark important places in the text? Then you’re using tools, low-tech as they might be.)
Some people think of test automation in terms of a robot that pounds on virtual keys more quickly, more reliably, and more deterministically than a human could. That’s certainly one potential notion of test automation, but it’s very limiting. That traditional view of test automation focuses on performing checks, but that’s not the only way in which automation can help testing.
In the Rapid Software Testing class, James Bach and I suggest a more expansive view of test automation: any use of (software- or hardware-based) tools to support testing. This helps keep us open to the idea that machines can help us with almost any of the mimeomorphic, non-sapient aspects of testing, so that we can focus on and add power to the polimorphic, sapient aspects. Exploration is polimorphic activity, but it can include and be supported by mimeomorphic actions. Cem Kaner and Doug Hoffman take a similar tack: exploratory test automation is “computer-assisted testing that supports learning of new information about the quality of the software under test.” Learning new information is one of the hallmarks of exploratory testing, which usually points towards emphasizing variation rather than repetition.
That said, there can be a role for mechanized repetition, even when you’re using a highly exploratory approach: when repeating aspects of the test are intended to support discovery of something new or surprising. The key is not whether you’re mechanizing the activity. The key is what happens at the end of the activity. The less the results of one activity are permitted to inform the next, the more scripted the approach. If the repetition is part of a learning loop—a cycle of probing, discovering, investigating, and interpreting—that feeds back on itself immediately, then the approach is exploratory. James has also posted a number of motivations for repeating tests. Each one can (with the possible exception of “avoidance or indifference”) be entirely consistent with and supportive of exploration.
There are some actions that tools can perform better than humans, as long as the action doesn’t require human judgment or wisdom. Humanity can even get in the way of some desirable outcome. For example, when your exploration of some aspect of a product is based on statistical analysis, and randomization is part of the test design, it’s important to remember that people are downright lousy at generating randomized data. Even when people believe that they’re choosing numbers at random, there are underlying (and usually quite unconscious) patterns and biases that inform their choices. If you want random numbers, tools can help.
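A minimal sketch of that idea in Python (the currency codes, amount ranges, and field names here are invented for illustration, not taken from any particular product): a seeded pseudo-random generator produces varied test inputs free of the unconscious patterns a human would introduce, while the fixed seed means that any surprising result can be reproduced exactly.

```python
import random

# A fixed seed makes the "random" run repeatable, so a surprise can be re-examined.
rng = random.Random(20130314)

# Hypothetical domain data for an exploratory session on a currency-handling product.
CURRENCIES = ["CAD", "USD", "EUR", "JPY", "GBP"]

def random_transaction():
    """Generate one randomized transaction as exploratory test input."""
    return {
        "amount": round(rng.uniform(0.01, 1_000_000.00), 2),
        "from_currency": rng.choice(CURRENCIES),
        "to_currency": rng.choice(CURRENCIES),
    }

# A thousand varied inputs in an instant; a tester choosing values by hand
# would produce far fewer, and far less varied, cases in the same time.
transactions = [random_transaction() for _ in range(1000)]
```

Note the design choice: randomization for variation, a recorded seed for repetition on demand, which is exactly the mix of variation and mechanized repeatability discussed above.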
Tools can support exploration in plenty of other ways: data generation; system configuration; simulation; logging and video capture; probes that examine the internal state of the system; oracles that detect certain kinds of error conditions in a product or generate plausible results for comparison; visualization of data sets, key elements to observe, relationships, or timing; recording and reporting of test activity.
A few years back, I was doing testing of a teller workstation application at a bank (I’ve written about this in How to Reduce the Cost of Software Testing). The other testers, working on domestic transactions, were working from scripts that contained painfully detailed and explicit steps and observations. (Part of the pain came from the fact that the scripts were supplemented with screen shots, and the text and the images didn’t always agree.) My testing assignment involved foreign exchange, and the testing tasks I had been given were unscripted and, to a large degree, self-determined. In order to learn the application quickly, I had to explore, but this in no way meant that I didn’t use tools. On the contrary, in fact. In that context, Excel was the most readily available and powerful tool on hand. I used it (and its embedded Visual Basic for Applications) to:
- maintain and update (at a key stroke) enormous tables of currencies, rates, and transaction types
- access appropriate entries from the table via regular expression parsing
- model the business rules of the application under test
- display the intended flow of money through a transaction
- add visual emphasis to the salient outcomes of tests and test scenarios
- provide, using a comparable algorithm, clear results to which the product’s results could be compared
- help in performing extremely rapid evaluation of a test idea
- create tables of customer data so that I could perform a test using a variety of personas
- accelerate my understanding of the product and the test space
- enhance my learning about Boolean algebra and how it could be used in algorithms
- record my work and illustrate outcomes for my clients
- perform quick calculations when necessary
- help me find more actual problems than the other four testers combined
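The “comparable algorithm” idea from that list can be sketched outside of Excel too. Here is a hypothetical Python version (the rate table, the rounding rule, and every function name are invented for illustration; the bank’s actual rules were different and more elaborate): an independently written implementation of the conversion serves as an oracle against which the product’s results can be compared.

```python
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical rate table; in the real engagement this lived in a spreadsheet
# and could be updated at a keystroke.
RATES = {
    ("CAD", "USD"): Decimal("0.7300"),
    ("USD", "CAD"): Decimal("1.3699"),
}

def expected_conversion(amount, src, dst):
    """Oracle: independently compute the converted amount, rounded to cents."""
    rate = RATES[(src, dst)]
    return (Decimal(amount) * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

def check(product_result, amount, src, dst):
    """Compare the product's answer against the oracle; flag any disagreement."""
    expected = expected_conversion(amount, src, dst)
    if Decimal(product_result) == expected:
        return "PASS"
    return f"FAIL: expected {expected}, got {product_result}"
```

The point is not the code itself but the relationship: the oracle encodes the tester’s current understanding of the business rules, and every disagreement between oracle and product is an invitation to investigate, which might reveal a bug in the product, or a gap in the tester’s model.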
All of this activity happened in a highly exploratory way; each of the activities interacted with the others. I used very rapid cycles of looking at what I needed to learn next about the application, experimenting with and performing tests, programming, asking questions of subject matter experts and programmers and managers, reporting, reading reference documentation, debugging, and learning. Tight loops of activities happening in parallel are what characterize exploratory processes. Yet this was not tool-free work; tools were absolutely central to my exploration of the product, to my learning about it, and to the mission of finding bugs. Indeed, without the tools, I would have had much more limited ideas about what could be tested, and how it could be tested.
The explorers of old used tools: compasses and astrolabes, maps and charts, ropes and pulleys, ships and wagons. These days, software testers explore applications by using mind-mapping software and text editors; spreadsheets and calculators; data generation tools and search engines; scripting tools and automation frameworks. The concept that characterizes exploratory testing is not the input mechanism, which can be fingers on a keyboard, tables of data pumped into the program via API calls, bits delivered through the network, or signals from a variable voltage controller. Exploratory testing is about the way you work, and the extent to which test design, test execution, and learning support and reinforce each other. Tools are often a critical part of that process.
Next in the series: What Exploratory Testing Is Not (Part 4): Quick Tests
And, of course, in the face of all these instances of what exploratory testing is not, you might want to know our current take on what exploratory testing is.
Greetings,
I really like para 6 (the one that begins “That said, there can be a role for mechanized repetition…”) because it really hits the nail on the head for me. I like framing from the perspective of “What do we do next?”.
I’m wondering though, in the past I’ve seen highly prescriptive test cases (manual or automated) created such that the state of the system after one test is the pre-requisite of the next. In that sense, the outcome of the first test (sort of?) informs the next. What do you think?
Michael replies: Try relaxing the idea of “the test” as a unit of measurement and the focus of your observation. Instead, try thinking about the overall flow, and what happens from one moment to the next. The degree to which the tester—rather than scripts, circumstance, a manager or some other agency—is in charge of the process is the degree to which the process is exploratory.
Thanks for the post!
You’re welcome!
Vern
Thanks for your post. By the way, I came across Mockito Mock Forum (http://www.mockito-mock.com) and found it is exclusively for Unit Testing, Automated Testing, and Testing frameworks. This will be of great help to people around here!
Michael replies: It might be, if it were still there. Maybe it’s here: https://groups.google.com/forum/#!forum/mockito
Thanks for your post. It is really good. If you can tell about the Mockito and PowerMockito testing frameworks in detail, that will really help. There is also a forum, Mockito Mock (http://www.mockito-mock.com) (well, there used to be —MB), which is exclusive to unit testing frameworks. You can also check it out.
[…] What Exploratory Testing is Not (Part 3): Tool-Free Testing […]
[…] to work in that conception. This is the difference between exploration and tools. Exploratory testing is not a tool-free process. When we think of manual testing, there is a danger that we have under-planned the […]