After a presentation on exploratory approaches and on testing vs. checking yesterday, a correspondent and old friend writes:
Although the presentation made good arguments for exploratory testing, I am not sure a small QA department can spare the resources unless a majority of regression checking can be moved to automation. Particularly in situations with short QA cycles.
(Notice that he and I are using “testing” and “checking” in this specific way.)
Any time someone makes an observation about what is or isn’t possible, irrespective of the kind of testing (or checking) they’re doing, it suggests some questions for the testing, programming, and management teams. I’d ask my old friend:
1) How much checking do you need to do?
2) What, specifically, suggests that checking needs to be done? What happens when you do it? What doesn’t happen when you do it? What happens when you don’t do it? What doesn’t happen when you don’t do it?
3) What, specifically, might suggest that the testers are the best people to do the checking? What, specifically, might suggest that they aren’t the best people to do it?
4) Where do your testers spend their time? When you speak with the people who are actually testing, do they feel the time that they’re spending on checking is worthwhile? Do they have things to say about what slows down testing (or checking)?
5) What risks does checking address well? What risks does checking address poorly, or not at all?
These are open questions that all teams can ask, regardless of the approach they’re using now. Feel free to replace the word “checking” with “testing”, and vice versa, wherever you like.
I encourage and, when asked, help people to ask and answer these questions, and others like them. I have no specific answers from the outset; I don’t know you, and I don’t know your context. But you do. Maybe the questions can be helpful to you. I hope so.