A tester recently asked “If you’re asked to write a ‘test plan’ for a new feature before development starts, what type of thing do you produce?”
I answered that I would produce a reply: “I’d be happy to do that. What would you like to see in this test plan?”
The manager’s reply was, apparently, “test cases covering all edge cases we’ll need to test”.
That’s a pretty naïve request. Here’s my answer:
“Making sure the product handles edge cases properly is definitely an important task. If I were to take your request literally—test cases covering all edge cases we’d need to test—it could take a lot of time for me to prepare, and a long time for you to review and figure out all the things I might have left out.
“And there’s another issue: I don’t know in advance what all the edge cases are, or even what they might be—and neither do the developers, and neither do you. No one does. But that’s okay! We can start right now by learning about possible edge cases through testing. We can’t perform testing on a running product yet, obviously, but we can perform some thought experiments and test people’s ideas about the product.
“So how about I give you a short summary—a list or a mind map—of some of the broad risk areas we can start considering right away? We can share the list with the developers to help them anticipate problems, defend against them, and check their work. That will greatly reduce the need to test edge cases later, when the product has been built and the problems are harder to find.
“We can add to that risk list as we develop the product—and we can take things off it as we address those risks. That will help focus the testing work. When we start working with builds of the product, I’ll explore it with an eye to finding edge cases that we didn’t anticipate. And I’ll keep the quick summaries coming whenever you like. You can review those and give me feedback, so that we’re both on top of things all the way along.”
The software business, alas, still runs on folklore and mythodology about testing. Too few managers understand testing. Many managers—and many testers, too—don’t realize that testing isn’t about test cases, yet they remain addicted to them. When we provide responsible answers to naïve questions, we can help to address that problem.
I’m presenting Rapid Software Testing Explored Online November 9-12, timed for North American days and European/UK evenings. You can find more information on the class, and you can register for it.
James Bach teaches in European daytimes December 8-11. Rapid Software Testing Managed is coming too. Find scheduling information for all of our classes.
I’d say that, in reality, this might develop in a few more directions.
Michael replies: I’d offer “realities”; there are lots of different realities, and lots of different perspectives on them.
Yes, it might be a naïve comment on management’s side, yet we’re not allowed to tell them that; it wouldn’t be polite.
Also, in some cultures it might be very dangerous to make such comments. And very often this naïveté comes from testers who, when asked to produce documentation, can offer only test cases, because they don’t know of any better way to do it.
Typically, telling someone that they’re naïve doesn’t go over very well. Telling them that isn’t the point here, though, so we can keep that part between ourselves. As for the testers who don’t know of alternatives to test cases, well… that’s why we’re here, isn’t it? Here’s a series for people who seek alternatives.
I agree with you: in an ideal world, testing would be the responsibility of the tester, and reporting would be based on information valuable to management. Yet this is hardly the case in companies. How would you deal with a micromanaging boss who wants control over everything, just to make sure “you are doing it right”? Now add their bias, one they share with every other human being: test cases give them a false sense of certainty, because they find them transparent and reachable. I guess it’s clear that we can’t simply tell them, “Go mind your management business and let me mind my testing business, man!”
Those are problems that some people find hard, for sure. We offer a number of ways around them. One thing to avoid, certainly, is a statement like “Go mind your management business and let me mind my testing business, man!”
The tester role is built on five mentalities: a tester is an agent; a craftsperson; a service provider; an analyst; and a critic.
Agency is at the foundation. As an agent, I accept and require responsibility for my work and my decisions. If someone—like a micromanaging boss—tries to take agency from me, I take steps to recover it, or I step away from the role. That doesn’t mean that I flat-out refuse to be managed or to provide service (the service provider mentality requires that I do that). It does mean that I decline to have my mission and my tasks handed to me; it means that I negotiate my mission.
As a craftsperson, I take responsibility for the practice of my craft and the quality of my work. If my client wants to know how I’m testing, that’s fine. If my client wants to discuss coverage, that’s fine. If my client wants specific conditions to be checked, in specific ways, that can be fine too. The key is that what the client wants and what I am prepared to provide must fit together. If a client wants hairdressing, that’s fine, but it’s not a service that I offer; the client should go elsewhere, and I’m not going to cut his hair. If a client wants me to use ultrasound to diagnose a problem with her iPhone, I’m not the right person for that client. I don’t have expertise in performing ultrasound scans, but what little expertise I have suggests that ultrasound wouldn’t be a productive approach. I’m not an ultrasound technician.
I am a tester, though. If a client wants a machine that generates formally scripted procedural test cases, that’s okay, but that’s someone else’s client, not mine. However, I might even produce some formally scripted test cases as part of an experiment; see below.
So, to me this is a problem with a few components:
1. Don’t wait for management to ask for the output of your work; be proactive in delivering it, rather than waiting for a request, by which time you’ll have to battle bias, power games, and so on.
2. Make your reports clear and transparent to management. They don’t give a damn about testing unless one of their own managers comes to them with a problem; then they jump into “problem-solving/micromanaging” mode, and you’ll have to face bias and power and decisions being made for you by another entity.
3. Often test cases are used as a last-resort solution by testers who don’t know any other meaningful way to document their testing. Jumping in and saying “ditch test cases and do whatever I do” will not work out of the box; it will require more effort and might be met with serious resistance. So developing testing expertise and knowledge in ourselves and our colleagues helps, but that’s a long-term goal, achieved through lots of hard work.
4. Also, at this point I’m not aware of an easy-to-digest argument or hard empirical evidence that would prove to a person who doesn’t understand testing that test cases don’t work. If we want to persuade them of that, we have to switch to lecture mode for quite long periods of time, and hope very hard that they’re willing to listen. And I think managers don’t have the time to listen for too long.
I agree with these points, except for the lecture mode in Point 4. Lecture probably won’t help in any case, but there are other ways to get the point across: do the work and analyze its cost and value; perform experiments to see what happens when things change; and negotiate.
It usually doesn’t take very long, or a very sophisticated or formal experiment, or very refined analysis, to understand the time and effort associated with various activities. For testers who feel trapped in excessively formal test cases, I suggest this: do a quick analysis for every bug that you report. Score each one based on the role the test case played in finding it; something like 5 for “the test case was absolutely essential”; 4 for “the test case was very helpful”; 3 for “the test case wasn’t particularly helpful”; 2 for “the test case wasn’t at all helpful”; 1 for “the test case cost more time than it was worth”; and 0 for “the test case actively inhibited our ability to find this bug”.
Do a quick review of the test cases, too, scoring them based on their helpfulness in finding problems, their helpfulness in developing an understanding of the product, their cost, and their value.
Then, later, return to the bug reports and the test cases. Don’t use the scores to decide whether you’re doing a good job or not; use the scores to point you towards things that could be interesting. Review the bug reports and test cases more deeply, and look at the relationships between them. Make assessments; report your findings; and discuss them with your clients. It seems to me that that’s the responsible thing to do.
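If it helps to keep that bookkeeping lightweight, here’s a minimal sketch of the tallying step in Python. It assumes nothing about your tools: the BugRecord fields, the summarize function, and the sample data are all hypothetical, and the rubric is simply the scoring scheme described above. Adapt it to however you actually record bugs and test cases.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

# The rubric from the text: how much did the test case contribute to finding the bug?
RUBRIC = {
    5: "the test case was absolutely essential",
    4: "the test case was very helpful",
    3: "the test case wasn't particularly helpful",
    2: "the test case wasn't at all helpful",
    1: "the test case cost more time than it was worth",
    0: "the test case actively inhibited our ability to find this bug",
}

@dataclass
class BugRecord:
    """A hypothetical record of one reported bug and its after-the-fact score."""
    bug_id: str
    test_case_id: Optional[str]  # None when no formal test case was involved
    score: int                   # 0..5, per the rubric above

def summarize(records):
    """Tally the per-bug scores; the distribution points at things worth a closer look."""
    tally = Counter(r.score for r in records)
    for score in sorted(RUBRIC, reverse=True):
        print(f"{score} ({RUBRIC[score]}): {tally.get(score, 0)} bug(s)")

# Three made-up bug reports, scored after the fact.
summarize([
    BugRecord("BUG-101", "TC-17", 4),
    BugRecord("BUG-102", None, 2),
    BugRecord("BUG-103", "TC-03", 0),
])
```

Note that the sketch only prints a distribution; it deliberately stops short of computing any overall grade, in keeping with the point above: the scores are pointers to things worth investigating, not a verdict on whether you’re doing a good job.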
It’s a nice topic; I’d like to discuss it more.