Can Exploratory Testing Be Automated?

In a comment on the previous post, Rahul asks,

One doubt which has been lingering in my mind for quite some time now: “Can exploratory testing be automated?”

There are (at least) two ways to interpret and answer that question. Let’s look first at answering the literal version of the question, by looking at Cem Kaner’s definition of exploratory testing:

Exploratory software testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.

If we take this definition of exploratory testing, we see that it’s not a thing that a person does, so much as a way that a person does it. An exploratory approach emphasizes the individual tester, and his/her freedom and responsibility. The definition identifies design, interpretation, and learning as key elements of an exploratory approach. None of these are things that we associate with machines or automation, except in terms of automation as a medium in the McLuhan sense: an extension (or enablement, or enhancement, or acceleration, or intensification) of human capabilities. The machine to a great degree handles the execution part, but the work in getting the machine to do it is governed by exploratory—not scripted—work.

Which brings us to the second way of looking at the question: can an exploratory approach include automation? The answer there is absolutely Yes.

Some people might have a problem with the idea, because of a parsimonious view of what test automation is, or does. To some, test automation is “getting the machine to perform the test”. I call that checking. I prefer to think of test automation in terms of what we say in the Rapid Software Testing course: test automation is any use of tools to support testing.

If yes, then to what extent? While I do exploration (investigation) on a product, I do whatever comes to my mind by thinking in the reverse direction: how would this piece of functionality break? I am not sure if my approach is correct, but so far it’s been working for me.

That’s certainly one way of applying the idea. Note that when you think in a reverse direction, you’re not following a script. “Thinking backwards” isn’t an algorithm; it’s a heuristic approach that you apply and that you interact with. Yet there’s more to test automation than breaking. I like your use of “investigation”, which to me suggests that you can use automation in any way to assist learning something about the program.

I read somewhere on Shrini Kulkarni’s blog that automating exploratory testing is an oxymoron, is it so?

In the first sense of the question, Yes, it is an oxymoron. Machines can do checking, but they can’t do testing, because they’re missing the ability to evaluate. Here, I don’t mean “evaluation” in the sense of performing a calculation and setting a bit. I mean evaluation in the sense of making a determination about what people value; what they might choose or prefer.

In the second way of interpreting the question, automating exploratory testing is impossible—but using automation as part of an exploratory process is entirely possible. Moreover, it can be exceedingly powerful, about which more below.

I see a general perception among junior testers (and even among ignorant seniors) that in exploratory testing there are no scripts (read: test cases) to follow. But the first version of the definition, i.e. “simultaneous test design, test execution, and learning”, talks about test design too. I have been following it by writing basic test cases, building my understanding, and then observing the application’s behavior; once that is done, I move back to update the test cases, and this continues until the stakeholders agree with the state of the application.

Please guide if it is what you call exploratory testing or my understanding of exploratory testing needs modifications.

That is an exploratory process, isn’t it? Let’s use the rubric of Kaner’s definition: it’s a style of working; it emphasizes your freedom and responsibility; it’s focused on optimizing the quality of your work; it treats design, execution, interpretation, and learning in a mutually supportive way; and it continues throughout the project. Yet it seems that the focus of what you’re trying to get to is a set of checks. Automation-assisted exploration can be very good for that, but it can be good for so much more besides.

So, modification? No, probably not much, so it seems. Expansion, maybe. Let me give you an example.

A while ago, I developed a program to be used in our testing classes. I developed that program test-first, creating some examples of input that it should accept and process, and input that it should reject. That was an exploratory process, in that I designed, executed, and interpreted unit checks, and I learned. It was also an automated process, to the degree that the execution of the checks and the aggregating and reporting of results was handled by the test framework. I used the result of each test, each set of checks, to inform both my design of the next check and the design of the program. So let me state this clearly:

Test-driven development is an exploratory process.

The running of the checks is not an exploratory process; that’s entirely scripted. But the design of the checks, the interpretation of the checks, the learning derived from the checks, the looping back into more design or coding of either program code or test code, or of interactive tests that don’t rely on automation so much: that’s all exploratory stuff.
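To make that split concrete, here is a minimal, hypothetical sketch in Python. The function and its acceptance rule are invented for illustration (they are not the actual puzzle program): the running of the checks below is entirely scripted, but each check embodies a design decision a human made after interpreting the previous run.

```python
def accepts(line):
    """Hypothetical input validator: accept short alphanumeric lines only."""
    return line.isalnum() and len(line) <= 8

# Each check below is scripted -- a machine runs it mechanically -- but each
# one was *designed* after interpreting the previous run, and that
# design/interpretation/learning loop is the exploratory part.
def check_accepts_simple_token():
    assert accepts("abc123")

def check_rejects_empty_input():
    assert not accepts("")

def check_rejects_overlong_input():
    assert not accepts("a" * 9)

for check in (check_accepts_simple_token,
              check_rejects_empty_input,
              check_rejects_overlong_input):
    check()
    print(f"{check.__name__}: pass")
```

In a real project a framework would handle the running, aggregating, and reporting; the point is only that what the framework automates is the execution, not the design.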

The program that I wrote is a kind of puzzle that requires class participants to test and reverse-engineer what the program does. That’s an exploratory process; there aren’t scripted approaches to reverse engineering something, because the first unexpected piece of information derails the script.

In workshopping this program with colleagues, one in particular—James Lyndsay—got curious about something that he saw. Curiosity can’t be automated. He decided to generate some test values to refine what he had discovered in earlier exploration. Sapient decisions can’t be automated. He used Excel, which is a powerful test automation tool, when you use it to support testing. He invented a couple of formulas. Invention can’t be automated. The formulas allowed Excel to generate a great big table. The actual generation of the data can be automated. He took that data from Excel, and used the Windows clipboard to throw the data against the input mechanism of the puzzle. Sending the output of one program to the input of another can be automated. The puzzle, as I wrote it, generates a log file automatically. Output logging can be automated. James noticed the logs without me telling him about them. Noticing can’t be automated. Since the program had just put out 256 lines of output, James scanned it with his eyes, looking for patterns in the output. Looking for specific patterns and noticing them can’t be automated unless and until you know what to look for, BUT automation can help to reveal hitherto unnoticed patterns by changing the context of your observation. James decided that the output he was observing was very interesting. Deciding whether something is interesting can’t be automated. James could have filtered the output by grepping for other instances of that pattern. Searching for a pattern, using regular expressions, is something that can be automated. James instead decided that a visual scan was fast enough and valuable enough for the task at hand. Evaluation of cost and value, and making decisions about them, can’t be automated.

He discovered the answer to the puzzle that I had expressed in the program… and he identified results that blew my mind—ways in which the program was interpreting data in a way that was entirely correct, but far beyond my model of what I thought the program did.
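A rough sketch of that workflow, with Python standing in for Excel and the clipboard. Everything here is invented for illustration—the stand-in `puzzle` function, its log format, and the input alphabet are not the real program: the machine generates the inputs, collects the log, and filters for a pattern, but a human had to decide that the pattern was worth filtering for in the first place.

```python
import itertools
import re

def puzzle(text):
    """Hypothetical stand-in for the puzzle program: one log line per input."""
    return f"IN={text} LEN={len(text)}"

# Generate a big table of inputs mechanically (the Excel-formula part).
inputs = ["".join(p) for p in itertools.product("ab01", repeat=2)]

# Throw them all at the program and collect the log (the clipboard part).
log = [puzzle(s) for s in inputs]

# Once a human has noticed an interesting pattern, a regex can hunt for
# more instances of it -- but deciding it was interesting came first.
digits_only = [line for line in log if re.search(r"IN=\d+ ", line)]

for line in digits_only:
    print(line)
```

The generation, the piping, and the grepping are all automatable; the curiosity, the noticing, and the judgment about cost and value are not.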

Learning can’t be automated. Yet there is no way that we would have learned this so quickly without automation. The automation didn’t do the exploration on its own; instead, automation super-charged our exploration. There were no automated checks in the testing that we did, so no automation in the record-and-playback sense, no automation in the expected/predicted result sense. Since then, I’ve done much more investigation of that seemingly simple puzzle, in which I’ve fed back what I’ve learned into more testing, using variations on James’ technique to explore the input and output space a lot more. And I’ve discovered that the program is far more complex than I could have imagined.

So: is that automating exploratory testing? I don’t think so. Is that using automation to assist an exploratory process? Absolutely.

For a more thorough treatment of exploratory approaches to automation, see

Investment Modeling as an Exemplar of Exploratory Test Automation (Cem Kaner)

Boost Your Testing Superpowers (James Bach)

Man and Machine: Combining the Power of the Human Mind with Automation Tools (Jonathan Kohl)

“Agile Automation” an Oxymoron? Resolved and Testing as a Creative Endeavor (Karen Wysopal)

…and those are just a few.

Thank you, Rahul, for the question.

15 replies to “Can Exploratory Testing Be Automated?”

  1. Exploratory testing is a learn-as-you-work type of testing activity, where a tester can at least learn more and understand the software even if he/she was not able to reveal any potential bug. Exploratory testing, even though disliked by many, helps testers learn new methods and test strategies, think outside the box, and attain more and more creativity.

  2. Hi Michael,

    I really enjoyed your previous post on ET and I think you make a really important point about automation assisting the overall test effort. Specifically I think there is a lot of misunderstanding out there related to what automation can and cannot achieve.

    You made the point that automation can help to reveal hitherto unnoticed patterns in the data being analysed but you have to know what you are looking for to see the pattern emerging. Very often when I am examining data I find other patterns and in an instant this gives me other ideas. Those ideas have not come from the tool – they have come from my brain – but the tool helped bring those ideas to the forefront of my thinking.

    To me, the set up, programming and/or configuration of the tool is the test design activity: you are hypothesising. As soon as you click ‘Run’ you are checking that the hypothesis holds true.

    Thanks for keeping me thinking about the craft through your blog postings.


  3. I enjoyed the story at the end of this post – I think it really illustrates that automated testing has its uses, but that you must understand what it can and can’t do for you.

  4. Michael,

    Wherever you mentioned that something cannot be automated, I read it as “cannot make a computer program do what a skilled human tester does”; as in, you cannot make a computer program get curious or frustrated or react spontaneously in one way or the other.

    I think this is key to any statement that ends with, or contains, the phrase “cannot be automated”. When we refer to automated ET as an oxymoron, we refer to those aspects of ET that a computer program cannot bring about.

    If we make a laundry list of things that a human tester does when testing, we can check, for each task, whether it can be automated.

    For example:

    “modeling the test object/subject”

    Can this be automated? Meaning, can a computer program do it? The answer can be tricky… A computer program can create a model if a human tester feeds data to it. A certain program could claim to read from a file and create a model. But can it replicate the thinking that goes into making a model?

    I am thinking …


  5. Great post as always!

    Michael replies: Thank you, Arjan!

    On the other hand will we be able to Automate Exploratory Testing in the Future?

    If we were able to do so, would the automaton be doing what we call testing? This gets us into a philosophical realm. You might want to consider John Searle’s arguments, for example, that you wouldn’t be able to program in all of the background information and experience required to make a machine truly human. But that doesn’t matter much to me, because I don’t need a machine to be truly human. I’d prefer to use machines to extend human capabilities, not to duplicate them. We’ve got plenty of true humans available.

    Deep Blue can beat a human at chess. Cool. (Note that this statement really means that a small army of programmers and computer engineers, plus some chess experts, all armed with an unspeakably powerful machine, can beat Garry Kasparov at chess in at least one tournament.) But does Deep Blue understand chess? It certainly knows how to deal with and manipulate the rules of chess, very powerfully and rapidly, but does Deep Blue understand the social purposes of chess? The psychological aspects of chess? The culture of chess? Does Deep Blue know that there is more to chess than the rules?

    (By the way, I have no hesitation in recommending Philosophy of Mind: Brains, Consciousness, and Thinking Machines, which is a wonderful and very accessible set of audio lectures from The Teaching Company. As usual with Teaching Company courses, it goes on sale from time to time; order it then.)

    For the same reason, I don’t see exploratory testing being automated until we have some radical advances in automation and a corresponding redefinition of what it is to be conscious and autonomous. Automatons aren’t autonomous, and autonomous things can’t be automated without sacrificing their autonomy. (Sorry; couldn’t resist that sentence.)

    I believe that what goes on in our head is just the result of neurons and electrons.

    I agree with that, in the same way that what goes on on Planet Earth is just the result of atoms and molecules. Note the word “just” in both of our sentences. It’s word magic to make the complexity vanish. But in exactly the same way a magician makes something vanish, it’s all still there in reality.

    You, Michael, or the reader, may believe differently. I guess that belief can also be part of your religion, so let’s not go in that direction. But I believe that ultimately it will be possible to copy any process that goes on in our heads, including exploratory testing.

    If you can copy it, is it exploratory?

    Will it be profitable? Probably not. It will not be easy, and will therefore be a huge investment in time and/or money. And if the testing can be automated, the programming probably can be as well. So there may not be a big need for it either. But we’re nowhere near that future and, as I said, you may believe differently.

    Why I’m writing this is that we can make tiny steps towards more Automation Assisted Testing.

    I disagree with you here too. We can make major steps towards that. 🙂

    E.g., by automating the tasks that we now see as administrative overhead. The fact that we perceive them as overhead can be an indication that they are easy to automate.

    At the same time, the fact that we perceive them as overhead can be an indication that they’re not worth doing at all. So we should keep that in mind as we’re considering automating them.

    And we should make sure that these tools work better together. I try to do the latter by defining the interfaces between tools and providing libraries for those interfaces. I hope to lower the threshold for people to create a tool: to get more tools to automate the simple tasks, so we testers can focus on the more intelligent tasks.

    For more info, or if you want to contribute, contact me via my site. Let me know what tasks you’d like to see automated. And if you believe differently, tell me as well.

    (Sorry for the shameless plugging of my site)

    No problem with that. Unlike a spammer, you’re contributing to the conversation.

  6. Like all the best insights, it’s obvious once someone points it out:

    “Test-driven development is an exploratory process.”

    Thanks, I love the worked example with James pointing out what can and cannot be automated and where tools play a part.


  7. Thank you for your detailed explanation about performance testing. Quality is absence of defects. Quality is conformance to specifications. Quality means no complaints from customers. Software Testing is very much necessary to enhance the performance of your software product.

    Michael replies: Thank you, but this is a little odd, since I didn’t mention performance testing. In addition, I would not agree that quality is the absence of defects, nor would I agree that quality is conformance to specifications. I prefer Jerry Weinberg’s definition, “Quality is value to some person.” Quality doesn’t mean no complaints from customers, either. For example, I’ve written a program that does nothing at all, and I haven’t sent it to anyone. I have received no complaints from customers, but that doesn’t make it a quality product. For another example, customers might try a product, find it buggy or useless, stop using it, and not complain.

    Testing is not necessary to enhance the performance of your software product; development is. Testing is a way of evaluating a product by learning about it through exploration and experimentation. You could do development without testing (although without testing, it would be pretty difficult to know for sure whether you had enhanced the performance of your software product). However, testing on its own doesn’t enhance anything except, perhaps, your understanding of the product.

  8. Thank you for sharing this useful information on exploratory testing. Anybody who wants to know more about this can also go through

    “Exploratory Testing, A Guide Towards Better Test Coverage”.

