Excellent testers recognize that excellent testing is not merely a process of confirmation, verification, and validation. Excellent testing is a process of exploration, discovery, investigation, and learning.
A correspondent that I consider to be an excellent tester (let’s call him Al) works in an environment where he is obliged by his managers to execute overly structured, highly confirmatory scripted tests. Al wrote to me recently, saying that he now realizes why that’s frustrating for him: every time he runs through a scripted test, he gets five new ideas that he wants to act upon. I think that’s a wonderful thing, but when he acts on those ideas and fulfills his implicit mission (finding important problems in the product), it diverts him from his explicit mission (to complete some number of scripted tests per day), and he gets heat from his manager about that. At the end of a couple of days, the manager wants to know why Al is behind schedule—even if Al has revealed important problems along the way—because the manager is focused on test effort in terms of test cases completed, rather than test ideas explored.
I suggested to Al (as I suggest to you, if you’re in that kind of situation) a workaround: don’t act on the new test ideas, but do note them. Jot them down in handwritten notes or a text file, and especially note your motivation for them—ideas about risk, coverage, oracles, strategies, and the like. Tell your test manager or test lead that you didn’t run tests associated with those ideas, and then ask, “Are you okay with us NOT running them?”
In addition, check in with your manager more often than once every two days. Deliver a report, including new ideas, at one- to two-hour intervals. If direct personal contact isn’t available, try instant messages or email. If those don’t work, batch them, but note the time at which you started and/or stopped a burst of testing activity.
Al was excited about that. “Wow!” he said. “That also means defects arising from the new ideas are noted down. Currently, my management is under the impression that test cases are the things that reveal problems, but it’s my acting on my test ideas that really reveals the problems.” He also noted, “There’s another bad thing that comes from that. If the test cases don’t reveal problems, we take the problems that we’ve found and create a test case for them so that those problems aren’t missed next time.” I’ve seen that happen a lot, too. On the face of it, it doesn’t sound like a bad idea—except that specific problems that are fixed and verified tend to remain fixed. Repeating those tests is an opportunity cost against new tests that would reveal previously undiscovered problems.
So: the idea here is to make certain aspects of our work visible. Scripted test cases often reveal problems as those cases are developed. When those problems get fixed, the script loses much of its power. Thus it is variation on the script, rather than rigorous adherence to it, that tends to reveal the actual problems. However, unless we’re clear that this is happening, managers will mistakenly give credit to the wrong thing—that is, the script—rather than to the mindset and the skill set of the tester.
I've seen a similar situation. When I identified new risk areas and test ideas for the test lead, I was told that the test matrix had already been approved and that there was a set of test cases that should be run before release. So, we were not going to change the test scope no matter what? I think one of the reasons was that earlier that day progress had been reported based on those test cases, and management was worried about the lack of progress (on the explicit test cases).
I think progress is tightly connected to this dilemma, as is the way progress is reported today. I am sure management was actually interested in new risk areas, and that they expect us to be pragmatic.
Solid advice, man. So much of the test literature is about the way the world should be (even the good stuff)—instead of what we can do when it's not.
Good post, grounded, practical. Thank you.
I totally agree with your thoughts in the article, and I think there are far too few organisations that appreciate the skill set a good tester brings to the table (instead of treating the tester as just a "click-monkey").
There is however one more activity I would like to add to the process of testing:
"Excellent testing is a process of exploration, discovery, investigation, learning, and communicating that information to the stakeholders."

The communication skills of a tester are vital if the tester is to be of the most benefit to the project.
Michael & Kristoffer Nordström: Your good points are now mentioned on
http://www.eurostarconferences.com/blog/2010/9/6/close-the-tool-and-start-exploring—jesper-ottosen.aspx