Coding QA Podcast on Exploratory Testing

Several months back, James Bach did an interview with the CodingQA guys, Matthew Osborn and Federico Silva Armas. In the interview, James talks about the skills of exploratory testing, sex education (now do I have your attention?) and how to use session-based test management with minimal overhead and maximum credibility.

I’m surprised at how few people have heard about the podcast, so I’m drawing attention to it here. It runs around an hour. There’s a lot of content. Inspired by Adam Goucher, I’ve written up summary notes for those who would prefer to listen later. This first post is about exploratory testing generally. In a later post, I’ll summarize the discussion of session-based test management.

On Exploratory Testing Generally

  • Think of testing as a martial art. Seek to be a master; study the arts and weapons; share the passion.
  • It’s important to field-test our processes before we claim “this is the way things should be done”. Even if you have experience with your process and you think you’ve described it well, you’re not likely to be able to impart the process successfully to other people without revising and refining it, and without training them in it.
  • Exploratory testing is not just a fancy term for “fooling around with the computer”.
  • Exploratory testing is not a technique. It is an approach.
  • A technique is a way of doing something. Approaches are broader notions; they’re like additives that are applied to techniques.
  • The opposite of exploratory testing is scripted testing, but…
  • Even though scripted and exploratory are opposites, they’re not mutually exclusive. “Hot” and “cold” are opposites, but we can mix hot and cold water together to get warm water. Similarly we can mix exploratory and scripted approaches together to get testing that is partially scripted and partially exploratory.
  • An exploratory approach can be applied to any test technique (or any other approach). For example, you can do boundary testing in a scripted way or in an exploratory way. Automation provides another approach from which you can test, so you can do scripted automated testing or exploratory automated testing.
  • Exploratory testing is three activities that are done in parallel in a mutually supporting way: learning, designing your tests, and executing your tests. You’re doing exploratory testing to the degree that those activities are not separated. The thing that distinguishes exploratory testing from scripted testing is the interaction between learning, design, and execution. As soon as you start to separate them, you’re starting to take a scripted approach, and the more you separate them, the more scripted your approach.
  • People think that exploratory testing means undocumented testing, but exploratory testing can be very well documented. It doesn’t mean unrigorous testing; you might be quite rigorous in your exploration. People think that exploratory testing means unstructured testing, but exploratory testing is structured, and it might be very explicitly structured. (The linked paper is an evolving list of the constituent skills of excellent (exploratory) testing.)
  • In exploratory testing, you always have loops. As soon as you put a loop into a scripted test, you’ve just gone exploratory. If you learn something in the course of a scripted test and you go back and investigate it, that has now become an exploratory test.
  • Exploratory testing is like sex: it went on before for a long time before people started talking about it and started to provide education about it. There would still be lots of sex going on even if we didn’t talk about it. The purpose of sex education is not the continuation of the human species; that’s going to happen anyway. We provide sex education because we want people to be able to make better, more informed choices about sex.
  • People do exploratory testing and don’t realize that they’re doing it, or don’t admit that they’re doing it, or pretend that they’re not doing it. If you’ve ever run into a problem with a script and done something about it rather than just sitting there, you’ve done exploratory testing; if you’ve ever investigated a bug, that’s exploratory testing; if you’ve ever worked with the product and learned about it just prior to writing a script, that’s exploratory testing.
  • What we’re talking about is learning to do exploratory testing like a pro.
  • Exploratory testing is like chess: learning how to play it takes very little time. Learning how to play it well is a much more significant proposition.
  • When people say that exploratory testing is like ad hoc testing, ask: “So what are the skills of ad hoc testing?” They won’t have an answer, because they’ve never thought about it.
  • Many testers can’t explain how they recognize a bug; it’s “sort of abstract”. But skilled exploratory testers who have studied the craft can describe how they recognize a bug, such that the listener can very quickly understand it, learn how to do it, and explain it to others. When we’re specific about our patterns of observing and reporting problems, we don’t have to invoke unhelpful, vague, and personal terms like “intuition” or “magic”; we can actually explain how to do our work in a skillful way.
  • As an example, consider the HICCUPPS(F) heuristic (History, Image, Comparable Products, Claims, User Expectations, Product, Purpose, and Standards, with the F standing for Familiar problems). That set of consistency heuristics was discovered by observing and interviewing testers over time.
  • The consistency heuristics can be used in a generative way, to help find bugs; or they can be used in a retrospective way, to help frame the explanation after a bug has been found.
  • Unless your work is under scrutiny by someone skilled (e.g., a manager or a test lead), you won’t have the feedback necessary to become better at it and to sharpen it.
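The consistency heuristics above lend themselves to a simple checklist. As an illustrative sketch only (the names, questions, and structure here are my own, not from the podcast), here is how a tester might record which oracles a suspected bug appears to violate, and use them retrospectively to frame the report:

```python
# Illustrative sketch: the HICCUPPS consistency oracles as a checklist.
# The phrasing of each question is a hypothetical paraphrase, not a quotation.

ORACLES = {
    "History": "Is the behaviour consistent with past versions of the product?",
    "Image": "Is it consistent with the image the company wants to project?",
    "Comparable Products": "Is it consistent with similar products?",
    "Claims": "Is it consistent with what documentation and marketing claim?",
    "User Expectations": "Is it consistent with what reasonable users want?",
    "Product": "Is it internally consistent with the rest of the product?",
    "Purpose": "Is it consistent with the product's apparent purpose?",
    "Standards": "Is it consistent with applicable standards?",
}

def frame_bug(observation: str, violated: list[str]) -> str:
    """Frame a bug report retrospectively: state the observation and the
    consistency oracles it appears to violate."""
    unknown = [name for name in violated if name not in ORACLES]
    if unknown:
        raise ValueError(f"Unknown oracle(s): {unknown}")
    reasons = "; ".join(f"{name} ({ORACLES[name]})" for name in violated)
    return f"{observation} -- appears inconsistent with: {reasons}"

# Example: framing an observed problem against two oracles.
print(frame_bug(
    "Save dialog silently discards filenames over 64 characters",
    ["Claims", "User Expectations"],
))
```

Used generatively, a tester might instead walk the `ORACLES` questions one by one against the product to provoke test ideas; the same list serves both directions.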

That covers the first twenty minutes or so of the conversation. The second part is summarized here. You can of course also listen to the podcast itself.

3 replies to “Coding QA Podcast on Exploratory Testing”

  1. Hello Michael,

    Thanks for the review. Even though it was a while ago, I still remember that interview as the time when the importance of observation and description of test behaviour really clicked for me.

    James was awesome; he stuck around to teach us some test games that I still practice to this day.

    Now that I am a test lead in my org I’ve been trying to move my team at MS towards different test approaches. A lot comes from Bach, Kaner and yourself. We stumble, we get up, and we try it some more.

    Thank you.

  2. Great podcast, listened to it a couple of weeks ago. Many points he makes are not crazy innovative, such as “If you don’t care to learn your craft, you won’t be promoted and won’t advance within the organization.”

    This seems so rudimentary and basic, yet it is not at all a prevalent theme across the QA orgs with which I have worked.

    In my experience, some of the best testers we have had were fresh talent recently promoted up from tech support. Those guys knew the context of the product better than anybody else.

    Culturally, these guys were not respected because they were regarded as the “new guys”. They found the most high-value bugs and they kept interacting with their former peers in support, and for their trouble they were the lowest-paid and least-respected members of the team. In my opinion, they were the most valuable members of the QA organization.

    True story: A “Senior QA Engineer” at this org once said “He’s over there learning Perl, I have no interest in learning that crap.” The same guy working at a company that analyzes web log data once said in a meeting “Reading those web server logs is impossible. It’s like trying to learn an entire programming language and I’m not interested.”

    That guy eventually got laid off years later, but at the time he was at the highest-paid non-management level of the QA organization. Technically he was a lead Sr. QA engineer.

    I fully approve of making learning a requisite part of reviews: the review form should have a line item that says “What did you learn in the last year?” and another that says “What do you plan to learn in the coming year to add value to the team?”

    Anyway, great podcast.

    Adam Yuret
    Sr. QA Engineer.
