The PDF file linked here is a transcript of a conversation over Skype, New Year’s Eve (December 31), 2010.
The conversation was prompted by a Twitter exchange on exploratory testing (ET) started by Andy Glover, who observed that “When developing scripts you need to explore. But this tends to be exploring without the s/w so I would say it’s not ET.” I disagree; developing scripts is test design, and test design is certainly part of testing. Since the process of developing test scripts is an exploratory (unscripted) process, I would contend that script development is both exploratory and testing, and therefore exploratory testing. To get around Twitter’s limitations, I proposed an impromptu online chat. Anna Baik, Ajay Balamurugudas, Tony Bruce, Anne-Marie Charrett, Albert Gareev, Mohinder Khosla, Michel Kraaij, and Erkan Yilmaz joined the conversation. Alas, Andy had other commitments and couldn’t be with us.
14 replies to “Exploratory Testing or Scripted Testing: Which Comes First?”
Thanks for organising the debate and writing up the transcript. I thought I was missing out from not taking part in the Skype conversation, but after reading the transcript I don’t feel so bad.
Michael replies: I’m glad that the discussion was so productive; thank you for initiating it. It could only have been improved by your presence. 🙂
I like the analogy of driving the school bus with your eyes closed!
Credit where credit is due department: that image comes to me from Nassim Nicholas Taleb.
There is still one point I’m not sure about. You explained ET in the transcript as design, execution, interpretation and learning which happen together. On occasions, when designing (and writing) test scripts, the software is not available to use/test, so there is no execution, interpretation or learning on the software itself. So if ET is happening during test design for scripted tests, does this mean the tester is:
Exploring the available documentation, trying to learn from it and reveal new information from it?
Attending meetings and chatting with developers and users, asking questions and raising issues?
Yes to everything. 🙂 We used to say that exploratory testing was “simultaneous test design, test execution, and learning”. Cem Kaner’s updated definition of exploratory testing, apart from being more explicit, uses the phrase “in parallel throughout the project” to address issues with the word “simultaneous”. Yes, the activities are happening all the time. But yes, some turn into background threads, or are blocked or paused. The more they are blocked or paused, the more they’re separated by time or by person, and the more the development or management model imposes the separation, the more likely you are to be involved in a scripted process.
Some may believe that you need the software working in some kind of state in order to perform testing. (I used to think that myself.) I’d like to take a more expansive view: that the software is one of many artifacts that can be tested throughout a product’s development. Testing is “questioning a product in order to evaluate it” or “gathering information with the intention of informing a decision” or “an empirical, technical investigation, done on behalf of stakeholders, with the intention of revealing quality-related information of the kind that they seek” or “an investigation of code, systems, people, and the relationships between them.” By any of those definitions, reviews, inspections, and walkthroughs are all testing activities. They are ways of “executing”—interacting with; questioning; probing—non-software artifacts like requirements. These activities tend to be highly exploratory. You could say that static analysis tools are entirely scripted approaches to code review, but the interpretation of results and the learning from such tools lean back toward exploratory approaches.
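As an illustration of that split, here is a minimal sketch of a scripted static check (the particular pattern flagged, bare `except:` clauses, is my own choice for the example, not anything from the conversation). The tool’s verdict is mechanical; deciding whether each hit actually matters is left to a human, which is where the exploratory work lives.

```python
import ast

def find_bare_excepts(source: str):
    """Scripted static check: flag bare 'except:' clauses in Python source.

    The flagging is entirely scripted; interpreting whether each hit is a
    real problem remains an exploratory, human judgment.
    """
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        # A bare 'except:' is an ExceptHandler with no exception type.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            hits.append(node.lineno)
    return hits

code = """
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(code))  # line numbers of bare 'except:' clauses
```

The check runs the same way every time; the conversation about what to do with its output does not.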
BTW, I’m subscribed to your blog through Google Reader, but your posts are never listed on it. I only find out about the posts through twitter and links from other sites.
That sounds like a problem. Can someone help me figure out how to fix it?
Again, Andy, thanks for the inspiration and the comment.
So eventually – there was no definite answer to “Which Comes First?”
Michael replies: I’ll make it easier than that, even. If you take the perspective that neither scripted testing nor exploratory testing is an activity, but that “scripted” and “exploratory” are approaches to performing any activity, then the question as phrased isn’t a valid question, and therefore neither produces nor warrants a definite answer.
So that draws us back to Context-Driven Testing – which no one can claim is wrong (and get away with it 🙂 ), since it all depends on the context…
Context #1 – So if you have the time to write some test scripts before you get the software – Please do (scripted testing),
And as you get the software, you keep on running scripted tests as well as exploratory testing, based on the scripts or on additional charters you didn’t get to write during scripted test planning (or hadn’t thought of at the time).
Context #2 – And if you get the software before writing any scripted tests, tough luck: you are left with less time to prepare, so you write short charters, test by them (mainly exploratory testing), and as usual improve along the way.
(If you can, you log some of the items worth repeating, for next phases.)
My personal preference, in general, would be to dampen the degree to which testing is scripted. Scripted checks can be valuable. Some attributes of good checks might include:
While the execution of checks is done in a highly scripted way, practically by definition, it seems to me that the design, development, and result interpretation of the scripts that implement them would be exploratory, done largely in parallel with development of the code and the product.
Bottom Line – What matters is to Keep on being Open minded, and Improve along the way.
Now – as a control freak, I wonder: How do I control this process best? What do I need from the ALM tools to support both contexts?
If you’re really a control freak, what you probably need is management training—or therapy, perhaps.
If you’re a manager, though, I’d start with some questions for you. What is it that you want to observe? What is it that you want to record? How and when will that suit your purposes? Who else is going to be a supplier or a consumer of information? What do they want to know? What are your deliverables to your clients? How do we foster and emphasize conversation while retaining corporate memory? To what degree do you want to control the process? Would you be okay with steering it?
Later, I might ask “What assumptions steer you towards the belief that you’ll need application lifecycle management tools—tools specifically within that class?” Another question I might ask is “To what degree will the tool constrain or impact your team and your management choices?” But I’m hoping that once you’ve answered the earlier questions, you’ll be able to put less emphasis on these two.
Thanks for writing, Kobi.
That was fun to read 🙂
I was thinking about your response to Mohinder on Page 5, regarding not gathering information before going out for a walk. That seemed a bit odd to me – if I go out for a walk, I’m likely to at least glance out the window to see what the general weather conditions are, so I can dress somewhat appropriately. Wouldn’t that constitute “gathering information”?
Michael replies: Mohinder didn’t say “gathering information”. He said “get all the information available to you,” and I was responding to that. Mind you, I said “without gathering any information at all“, and as you point out that’s not right either. Speaking precisely can be a challenge! It’s something that testers need to practice (and I offer my own imprecision as evidence for that).
I’m not saying that detailed research is required, just some very basic observations about the environment before exploration begins.
I agree. Thanks for pointing it out.
if you have the time to write some test scripts before you get the SW – Please do (ST)
Actually, I thought exactly like that for a while.
* creating scripts is exploratory (investigation and analysis) activity
* instead of creating test scripts you may focus on figuring out and enlisting risks
* instead of creating test scripts based on requirements you may map claims (by using checklists or a mind-mapping tool)
For automated build acceptance checks, define rejectance criteria (yes, it sounds contradictory, but that’s what they are for).
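One way to read “rejectance criteria” (my sketch below, not Albert’s implementation; the criteria and field names are hypothetical): instead of enumerating everything a good build must do, list the few conditions any one of which should reject the build outright. A build is then never “proven good”, only “not rejected”.

```python
# Hypothetical build-acceptance sketch: a build is rejected if ANY
# rejectance criterion fires. The build record and criteria below are
# invented for illustration.

def reject_reasons(build):
    """Return the names of all rejectance criteria that fire for a build."""
    criteria = [
        ("does not start", lambda b: not b["starts"]),
        ("login broken", lambda b: not b["login_ok"]),
        ("smoke errors in log", lambda b: b["error_count"] > 0),
    ]
    return [name for name, fires in criteria if fires(build)]

build = {"starts": True, "login_ok": False, "error_count": 3}
print(reject_reasons(build))  # ['login broken', 'smoke errors in log']
```

Framing the automation around rejection keeps the check list short and its verdict honest: an empty list means “no reason to reject”, not “the build is good”.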
some very basic observations about the environment before exploration begins
Observation might be part of either testing or checking. If it’s only checking, then this is how people fall down an elevator shaft: they stepped in once the doors opened (check passed). I believe, though, that you meant observation in an exploratory way.
Before I read the transcript, which could strongly influence my view, let me state what I think the real order would be.
If somebody wrote some code and I don’t have a spec for it, I would need to investigate (exploratory testing) before I could create scripts.
If somebody wrote code and the spec was incomplete or inaccurate, which is always, I would need to investigate (exploratory testing) before I could write a script.
If nobody wrote code yet and I wanted to write the script for it, I would talk to them and ask them what it does (exploratory testing) before writing the script.
If I were going to write code and created a test before I wrote the code, I would write the test first. That is the only case that wouldn’t involve exploratory testing before scripting.
p.s. I will read it now.
Firstly a very happy new year to you.
Michael replies: Thank you; and the same to you.
Thank you for publishing the transcript of your Skype conversation; it was very interesting to read the views that came out.
For myself I consider ET and ST to be test approaches that can be applied to the various activities that take place during a project’s lifecycle and can thus be used to different degrees depending on context. Things I consider to be an influence here are the stage we are at in the project, how much we know about the software already, how long it will take to run a script (assuming one is in existence) vs. the length of time we have and what the risks are of running/not running the pre-defined script.
Try to remember cost vs. value, too. And test your choice of approach by framing it.
I like your list of things that a good set of checks might include in your answer to Kobi but can you explain a bit more about staying close to the code? Do you mean that when writing the checks we should be doing some form of static analysis on the code delivered by the programmers?
Not so much. By “staying close to the code”, I mean rather that checking is an activity that is strongly related to the development of the code itself. Test-driven development and other forms of checking at the unit level seem to have clear value to many programmers; it’s an essential part of their development process. Checking at higher levels seems to be a good deal more controversial (I think of Arlo Belshee, James Shore, and J.B. Rainsberger’s objections to elaborate, brittle, and slow integration checks, for example). I’ve used FitNesse to reasonable effect for mid-level stuff; I’ve used Watir more as a tool to foster rapid navigation through a workflow, and less for checking (since, in the circumstance where I used it heavily, the FitNesse stuff mostly did that). Your mileage will, of course, vary. Elisabeth Hendrickson and Adam Goucher are the two people I identify most strongly as proponents of higher-level automated checking done well.
Also: test automation is not the same thing to me as checking. I define a check here, and discuss ideas about checking at length in this series (alas, WordPress returns the search in reverse date order by default). As we define it in the Rapid Software Testing class, test automation is any use of tools to support testing. That means that test automation can be used in a scripted or an exploratory way. Cem Kaner discusses exploratory approaches to test automation here.
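A toy illustration of that distinction (the function and its inputs are hypothetical, invented for the example): the same bit of automation can drive a scripted check or support exploration, depending on whether its output is compared against a predefined expectation or handed to a human to evaluate.

```python
import random

def price_with_discount(price: float, percent: float) -> float:
    """A hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)

# Scripted use: a fixed input compared against a predefined expected result.
assert price_with_discount(100.0, 10) == 90.0

# Exploratory use: the same automation generates many inputs rapidly, and a
# human scans the output for anything surprising -- say, a 110% "discount"
# producing a negative price.
random.seed(1)
for _ in range(5):
    price = round(random.uniform(0, 1000), 2)
    percent = random.choice([0, 10, 50, 100, 110])
    print(price, percent, "->", price_with_discount(price, percent))
```

The tool is identical in both halves; what makes the activity scripted or exploratory is who (or what) evaluates the result, and when.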
In addition, in a post from 2008, Brian Marick presents some interesting ideas about the exploratory/scripted dimensions and the usefulness of scripting high-level functions here.
Thanks also for the reminder on the other resources on Transpection from James’ site in the Skype chat. I thought my ears were burning on New Year’s Eve! 😉
I thoroughly enjoyed reading the transcript, then the comments and responses here. Thanks so much to all who participated and to Michael for “cleaning it up” so it could be read so easily. Good stuff. (Now to point the boss to this…)
I just explored an aspect of this site where using Firefox 4 beta 8 on Fedora 14, it sucks big time trying to enter name/mail/website details above, got there in the end though.
Could you do me a favour? Although I have no reason to believe that there’s a problem specific to this site, I have to consider the possibility. So could you describe “sucks big time” a little more clearly? What Bad Thing happens? What Good Thing fails to happen? Do you have this problem with, say, other WordPress sites while using the same configuration?
The term ET doesn’t gel for me, either you are exploring (I wonder what happens if…?), or you are testing (I do this, and expect to see that…)
Michael replies: To me, “I wonder what happens if…” is of the essence of testing. “I do this and expect to see that” isn’t testing, to me. That’s checking; confirmation, verification, validation, assuring conformance to expectations.
When I took the rapid testing course I asked James why it was so named, and he told me that it sells better than calling it ‘Slow Software Testing’. Shows that even the best-intentioned tester has to wonder about how the food gets on the table.
I’m thinking there’s a joke in there that you might have interpreted literally. Yes, “Rapid Software Testing” is likely to sell better than calling it “Slow Software Testing”, but I think there’s a chance that James was speaking ironically. Perhaps he’ll have more to say about this.
It seems we are back to ‘what does it all mean?’ I think the kicker here is that businesses know what they mean by success. It could be market share, turnover or profits but at least the business strategy is defined and clear to the decision makers in the business. The problem with software testing is that it is rarely linked to a fundamental business goal.
With respect to the clarity of the business strategy: which businesses are you referring to? What you’re describing hasn’t been my experience in general, and certainly not across the board. In many organizations, that clarity evolves with the business. Indeed, some of the most successful businesses that we know didn’t start with clarity about business goals; many arrived at their strategies and their measurements of success in a quite exploratory way. Apple and Google come to mind as examples.
With respect to software testing being “rarely linked to a business goal”, again, I’m not sure what you mean. Rarely linked to a business goal by whom? By managers? By testers who don’t exercise the skill of test framing? For example, I believe that I can link testing to a fundamental business goal: learning about the product that the organization is producing and the relevant risks. That learning informs decisions that help to prevent or mitigate problems that threaten the value of the product, the project, or the business.
Get that right and the balance between scripted tests and ET (which I’d rather call something like Business Intelligence Testing ‘BIT’, to get people more interested!) should become more clear to the guys with the hands on the purse strings!
I’m not convinced that you’ve read the conversation; or perhaps you’ve missed the salient part of my reply to Kobi, above. For your convenience, I’ll repeat: “If you take the perspective that neither scripted testing nor exploratory testing is an activity, but that ‘scripted’ and ‘exploratory’ are approaches to performing any activity, then the question (‘Which comes first?’) as phrased isn’t a valid question, and therefore neither produces nor warrants a definite answer.” That is, I think you’re continuing to take scripted testing (or checks) and exploratory testing as activities, when it might be more helpful to think of “scripted” and “exploratory” as approaches, as adjectives that modify “testing”. Or to take things from a different angle: is there any testing worth its salt that isn’t business intelligence testing?
[…] This post was mentioned on Twitter by Michael Bolton, Stephen Hill. Stephen Hill said: @michaelbolton Thank you for clarification in your reply to me on http://bit.ly/g2p2hS; that's helpful. […]
In reply to Mark Ferguson:
Mark, we call exploratory testing exploratory because that’s exactly what it is. Look up the definition of exploration, then the definition of testing, then put them together. That’s exploratory testing. Exploratory testing is literally an exploratory approach to testing. It harks back to an earlier thing called “Exploratory Data Analysis,” which was created by John Tukey. Cem was referring to EDA when he coined ET.
I’m sorry you don’t like the term, but it makes perfect sense to me and to enough other people that I think we’ll stick with it.
The question you asked me was not about ET but rather about “Rapid Testing.” I often give my half-serious answer (that it’s better than “slow testing”) when I’m asked that. My fully serious answer I probably also gave to you at the time. Perhaps you forgot. It is this: Rapid Testing is named that way because it is a methodology that focuses on testing rapidly. (I bet you could have seen that coming.)
If you took my class then you know that I take this craft very seriously. I think carefully about my terminology, and I hope you will too.
If all I wanted was to make money I could establish a certification program. I hope you notice I don’t play such games.
Michael: Thanks for catching my imprecision there too 🙂
Albert: In this context, the observation I was talking about was a state check. In Michael’s chat log, step 1 of the script says to restart the system; an observation one might want to make before executing that step is to check that the system is currently active.
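As a sketch of that kind of precondition check (the host, port, and probe are hypothetical; a real script would probe whatever “active” means for its system): before executing a “restart the system” step, confirm that the system is currently up at all.

```python
import socket

def system_is_active(host: str, port: int, timeout: float = 2.0) -> bool:
    """State check: probe a TCP port before executing a 'restart' step,
    so the script doesn't blindly restart a system that is already down."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Step 0 of the script, before "restart the system":
if not system_is_active("app.example.test", 8080):
    print("Precondition failed: system not active; investigate before restarting.")
```

The value of the check is in what it stops you from doing on autopilot; if the precondition fails, the script hands control back to a human.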
Happy new year Michael!
And a great read!
[…] the Context-Driven Testing community, the testing craft is a living, growing thing. This dialog, led by my partner in Rapid Testing, Michael Bolton, is a prime example of the life among us. Read […]