After a presentation on exploratory approaches and on testing vs. checking yesterday, a correspondent and old friend writes:
Although the presentation made good arguments for exploratory testing, I am not sure a small QA department can spare the resources unless a majority of regression checking can be moved to automation. Particularly in situations with short QA cycles.
(Notice that he and I are using “testing” and “checking” in this specific way.)
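For anyone new to that distinction: a check is, roughly, a specific observation paired with a decision rule that can be applied algorithmically, by a machine or by a person who isn't required to think about it; testing is the broader human activity of exploring, experimenting, and evaluating, within which checks may be embedded. Here is a minimal sketch of a check, with invented names, just to make the idea concrete; none of it comes from any real product.

```python
# A minimal, hypothetical sketch of a "check": a specific observation tied to an
# algorithmic decision rule, so a machine can report pass/fail without human
# judgment. ShoppingCart and its methods are invented for illustration only.

class ShoppingCart:
    def __init__(self):
        self._items = []

    def add_item(self, name, unit_price, quantity):
        self._items.append((name, unit_price, quantity))

    def total(self):
        return sum(price * qty for _, price, qty in self._items)


def check_cart_total():
    cart = ShoppingCart()
    cart.add_item("widget", 5.00, 3)
    # The decision rule is fixed in advance; the outcome is a bare pass/fail.
    assert cart.total() == 15.00


if __name__ == "__main__":
    check_cart_total()
    print("check passed")

# Testing, by contrast, includes everything around such a check: choosing what
# is worth observing, noticing what the check cannot see (rounding, negative
# quantities, a confusing error message), and evaluating whether any of it
# matters to the people who depend on the product.
```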
Any time someone makes an observation about what is or isn’t possible, irrespective of the kind of testing (or checking) that they’re doing, it suggests some questions for the testing, programming, and management teams. I’d ask my old friend:
1) How much checking do you need to do?
2) What, specifically, suggests that checking needs to be done? What happens when you do it? What doesn’t happen when you do it? What happens when you don’t do it? What doesn’t happen when you don’t do it?
3) What, specifically, might suggest that the testers are the best people to do the checking? What, specifically, might suggest that they aren’t the best people to do it?
4) Where do your testers spend their time? When you speak with the people who are actually testing, do they feel the time that they’re spending on checking is worthwhile? Do they have things to say about what slows down testing (or checking)?
5) What are the risks that checking addresses well? What risks are not addressed well by checking?
These are open questions that all teams can ask, regardless of the approach they’re using now. Feel free to replace the word “checking” with “testing”, and vice versa, wherever you like.
I encourage and, when asked, help people to ask and answer these questions, and others like them. I have no specific answers from the outset; I don’t know you, and I don’t know your context. But you do. Maybe the questions can be helpful to you. I hope so.
See more on testing vs. checking.
Related: James Bach on Sapience and Blowing People’s Minds
In addition, I would raise the question of who defines the amount of testing and the amount of checking necessary for the project at hand. Is it decided by the customer? By the project manager? By the technical leader? By the QA leader? By the CEO? By God? By consensus among a (sub-)set of these? By you?
For 1), this could lead to splitting the question into "How much testing/checking do you need to do?" and "How much testing/checking do you think you need to do?"
Interesting response. I was having the same thought as your old friend. Irrespective of the amount of checking that I think I need to do, other team members see checking as equal to testing, so once all the checking is done, they consider all the testing done. Once again this may come back to a need to educate other team members, but I find it difficult to explain the value of exploratory testing as equal to or greater than that of checking – it is always seen as a "nice to have". This is probably because it is more difficult to quantify (no nice green pie charts, etc.).
I find that I end up doing exploratory testing while I am checking anyway, because I am easily distracted. And in some situations, it seems like the easiest way to "sneak in" some exploratory testing while still marking off the checklist.