This post is adapted from a recent blast of tweets. You may find answers to some of your questions in the links; as usual, questions and comments are welcome.
Update, 2017-01-07: In response to a couple of people asking, here’s how I’m thinking of “test case” for the purposes of this post: Test cases are formally structured, specific, proceduralized, explicit, documented, and largely confirmatory test ideas. And, often, excessively so. My concern here is directly proportional to the degree to which a given test case or a given test strategy emphasizes these things.
I had a fun chat with a client/colleague yesterday. He proposed—and I agreed—that test cases are like crutches. I added that the crutches are regularly foisted on people who weren’t limping to start with. It’s as though before the soccer game begins, we hand all the players a crutch. The crutches then hobble them.
We also agreed that test cases often lead to goal displacement. Instead of a thorough investigation of the product, the goal morphs into “finish the test cases!” Managers are inclined to ask “How’s the testing going?” But they usually don’t mean that. Instead, they almost certainly mean “How’s the product doing?” But, it seems to me, testers often interpret “How’s the testing going?” as “Are you done those test cases?”, which ramps up the goal displacement.
Of course, “How’s the testing going?” is an important part of the three-part testing story, especially if problems in the product or project are preventing us from learning more deeply about the product. But most of the time, that’s probably not the part of the story we want to lead with. In my experience, both as a program manager and as a tester, managers want to know one thing above all:
Are there problems that threaten the on-time, successful completion of the project?
The most successful and respected testers—in my experience—are the ones that answer that question by actively investigating the product and telling the story of what they’ve found. The testers that overfocus on test cases distract themselves AND their teams and managers from that investigation, and from the problems investigation would reveal.
For a tester, there’s nothing wrong with checking quickly to see that the product can do something—but there’s not much right—or interesting—about it either. Checking seems to me to be a reasonably good thing to work into your programming practice; checks can be excellent alerts to unwanted low-level changes. But demonstration—showing that the product can work—is different from testing—investigating and experimenting to find out how it does (or doesn’t) work in a variety of circumstances and conditions.
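To make that distinction concrete, here’s a minimal sketch of a low-level check (in Python, pytest-style; the apply_discount function and its expected values are hypothetical, not from any real product). It asks one specific question with one anticipated answer, which is exactly why it can alert us quickly to an unwanted change, and exactly why it neither investigates nor experiments.

    # A low-level automated check (hypothetical example, pytest-style).
    # It confirms one anticipated output; it does not explore or experiment.

    def apply_discount(price, rate):
        # Hypothetical product code under check.
        return round(price * (1 - rate), 2)

    def test_ten_percent_discount():
        # The check: one specific question with an anticipated answer.
        assert apply_discount(100.00, 0.10) == 90.00

Running such checks on every build is the “alert to unwanted low-level changes” described above; finding out how the product behaves with negative rates, enormous prices, or odd rounding is where testing, in the sense of investigation, begins.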
Sometimes people object, saying that they have to confirm that the product works and that they don’t have time to investigate. To me, that’s getting things backwards. If you actively, vigorously look for problems and don’t find them, you’ll get that confirmation you crave, as a happy side effect.
No matter what, you must prepare yourself to realize this:
Nobody can be relied upon to anticipate all of the problems that can beset a non-trivial product.
We call it “development” for a reason. The product and everything around it, including the requirements and the test strategy, do not arrive fully-formed. We continuously refine what we know about the product, and how to test it, and what the requirements really are, and all of those things feed back into each other. Things are revealed to us as we go, not as a cascade of boxes on a process diagram, but more like a fractal.
The idea that we could know entirely what the requirements are before we’ve discussed and decided we’re done seems like total hubris to me. We humans have a poor track record in understanding and expressing exactly what we want. We’re no better at predicting the future. Deciding today what will make us happy ten months—or even days—from now combines both of those weaknesses and multiplies them.
For that reason, it seems to me that any hard or overly specific “Definition of Done” is antithetical to real agility. Let’s embrace unpredictability, learning, and change, and treat “Definition of Done” as a very unreliable heuristic. Better yet, consider a Definition of Not Done Yet: “we’re probably not done until at least These Things are done”. The “at least” part of DoNDY affords the possibility that we may recognize or discover important requirements along the way. And who knows?—we may at any time decide that we’re okay with dropping something from our DoNDY too. Maybe the only thing we can really depend upon is The Unsettling Rule.
Test cases—almost always prepared in advance of an actual test—are highly vulnerable to a constantly shifting landscape. They get old. And they pile up. There usually isn’t a lot of time to revisit them. But there’s typically little need to revisit many of them either. Many test cases lose relevance as the product changes or as it stabilizes.
Many people seem prone to say “We have to run a bunch of old test cases because we don’t know how changes to the code are affecting our product!” If you have lost your capacity to comprehend the product, why believe that you still comprehend those test cases? Why believe that they’re still relevant?
Therefore: just as you (appropriately) remain skeptical about the product, remain skeptical of your test ideas—especially test cases. Since requirements, products, and test ideas are subject to both gradual and explosive change, don’t overformalize or otherwise constrain your testing to stuff that you’ve already anticipated. You WILL learn as you go.
Instead of overfocusing on test cases and worrying about completing them, focus on risk. Ask “How might some person suffer loss, harm, annoyance, or diminished value?” Then learn about the product, the technologies, and the people around it. Map those things out. Don’t feel obliged to be overly or prematurely specific; recognize that your map won’t perfectly match the territory, and that that’s okay—and it might even be a Good Thing. Seek coverage of risks and interesting conditions. Design your test ideas and prepare to test in ways that allow you to embrace change and adapt to it. Explain what you’ve learned.
Do all that, and you’ll find yourself throwing away the crutches that you never needed anyway. You’ll provide a more valuable service to your client and to your team. You and your testing will remain relevant.
Happy New Year.
Further reading:
Testing By Percentages
Very Short Blog Posts (11): Passing Test Cases
A Test is a Performance
“Test Cases Are Not Testing: Toward a Culture of Test Performance” by James Bach & Aaron Hodder (in http://www.testingcircus.com/documents/TestingTrapeze-2014-February.pdf#page=31)
Michael, brilliant post! Thank you for giving testers another sharp tool for their testing toolbox.
I think when you say test cases, you mean the step-by-step instructions based on a requirement document. That is, test cases that are written in a confirmatory fashion.
Michael replies: That’s a good point, so I’ve added an update to be more specific. Thanks.
In Exploring Requirements Part-2, Jerry says that exploring the system from a stimulus-response point of view is a way of describing the black box (the requirements).
Hence, if people /have to/ use crutches for some reason, it might be a better idea to modify them as “what-if” heuristics instead of writing them as step-by-step instructions.
What is your opinion?
My opinion is a) let’s be careful of “have to” (just as Jerry suggests) and b) my degree of concern lessens when the rote procedure is de-emphasized, and the freedom and responsibility of the tester is emphasized. This includes the freedom to design the procedure and modify it; and the responsibility to perform the work skillfully, and to keep good professional records, where needed.
Cheers
Rajesh
Hi Michael,
Interesting article. You spoke a bit about this in the StarCanada session that I attended. I definitely have felt the pain of spending time creating test cases and putting them in all kinds of awful “test case management tools”, from Excel to Access databases to MTM – groan. And while creating these awesome test cases, the amount of actual testing is negatively impacted. I like the idea of using some tools to create high-level test ideas. However, aside from the awful test case management tools, one of my biggest concerns is this: let’s just say a defect is released and someone comes back to me to ask “Did you test this?” Of course, by then I’ve moved on to who knows what. My test cases help me to be able to say “Yes, I did” or “Oh wow, no I didn’t.” They also help, when enhancements are made, to see what someone else may have tested.
Michael replies: Do your test cases do that? Or do your records do that? Are there other kinds of records that you could keep? More concise? More detailed?
While we’re at it, the motivation for the question “did you test this?” is potentially interesting. What are they really asking? What if the answer were “No, I didn’t.”? What if the answer were “Yes, I did.”? What would the response be to the different answers? And in any event, what do test cases have to do with the answer?
This article is definite food for thought. Thank you.
You’re welcome. Thanks for commenting.
Michael,
Thanks for this post. I have read other questions and posts on “the death of the test case”, and I thought I would share my views on this.
I agree & understand that relying on test cases is not worthwhile, but at the same time, hasn’t the time come to redefine test cases? If we can talk about reinventing testing these days, then why discard such a commonly used term as “test case” in the software testing industry?
Michael replies: In physics, there used to be a substance called “phlogiston”. This was like an element, whose role was to foster fire. As scientists came to refine their understanding of combustion, theories based on phlogiston were replaced by theories based on oxygen. In the past, in medicine, health was dictated by balances of the four humours. That got replaced by more sophisticated ideas that better described what was happening in the body. As we think more deeply about testing, maybe it’s time to displace the notion of test cases for something that better fits our evolving ways of thinking about testing.
Is it reasonable to think that the process of asking “Strategic Questions” and “What Ifs” should be linked to defining test cases, instead of saying that there is no value in test cases?
If we want to refer to strategic questions, why not refer to strategic questions?
Isn’t it useful to educate testers to focus on a questioning style of test cases, instead of simple assertion-based tests which rely on “Verify that…” stuff?
Yes; of course it is.
I have been through those old test case theories, and as you said, they may become a crutch some day. But what if we educate peers like this: “These ‘test cases’ were questions that we asked with a certain intent; let’s talk about them to give you a big picture of the system and the testing context. You may choose to take these test cases as a baseline, or as a kind of test idea, or you may even discard them and write your own questions.”
As many people (including me) have pointed out, test cases often present a detailed procedure without declaring the motivation for following it. I would offer that in many cases, providing the motivation (which can typically be expressed much more concisely) is enough for a skilled tester to propose many procedures. A counter-argument to that would be “many testers are unskilled”. My counter-argument to that would be that providing the procedure is a great solution to the wrong problem. If the tester cannot develop a procedure on her own (and account for it), then she should be supervised directly and trained to do it.
I am not from agile projects (2–4 month life spans). I worked on large projects (at least 2 years each) where the emphasis was on test cases. But I didn’t go with traditional approaches; I educated the team on understanding context, testing the real stuff, and lean, open-ended documentation (maybe in Excel, a text pad, or Word) such that other testers on the team can get insight.
Please advise: what do you think about my thought process on this?
What you describe in your last paragraph sounds good to me.
Hello MB,
Thanks for yet another wonderful post on testing. I have a genuine question on the fundamentals of “test cases”. I am a Manual tester, now following the market / industry trend and moving towards automation.
Michael replies: You are not a Manual tester, unless you are testing manuals. You use tools, right? You may not be programming automated checks, but that’s only one kind of tool use.
An automation test is a set of scripts, integrated or independently set up in a framework.
We think of that as automated checking.
I see and interact with a lot of automation tool users and experts; they emphasize the importance of having a good, robust framework. And when I start enquiring about the scripts, they have very little to share or describe.
I have similar experiences. Too often, it seems to me, there is a fascination with getting the machine to press its own buttons. Too infrequently, there is discussion of cost, value, and risk.
A test script is supposed to test a feature or function of a given application under test, of course automatically?
I would say No. A test script is a procedure that enables people to check output from that feature. Checking is embedded inside testing. You could think of this like spell-checking (a process that can be done by humans and that is accelerated to some degree by algorithms), embedded inside of editing.
I am not sure if calling a ‘Test Script’ a ‘Test Case’ would be right, but please allow me to do so for the sake of this discussion.
If we are using a high-end tool available in the market, with the best possible and most robust framework, and have very little input or detail on the ‘Test Case’ being tested, is it testing?
A test script is a procedure for turning a test case into activity. Tool use is part of testing. “Best possible” is context-dependent, depends on your purposes, and is subject to change overnight. You can defend yourself against arguments about “best possible” by saying “useful” instead. Checking, and using a tool to do it, is part of testing. It is not testing in the same way that compiling is not programming, even though compiling is part of programming. Checking your spelling, and using a tool to do it, is part of editing. It is not editing, even though checking your spelling is part of editing.
If no, why do we say that the importance of test cases is diminishing? Of course, as you mention, we need to find the best possible ways to record test cases.
I’ve defined test cases above. Test cases are not the same thing as test ideas, or test procedures. Nor are any of these the same thing as testing. “The best possible way to do something” is neither necessary, nor can it be determined conclusively. Do things that are highly useful for your project, your product, and your team, by all means, but don’t worry about “best possible”.
Michael,
Thank you for another great, thought-provoking post. The thoughts about ‘Definition of Done’ being an ‘…unreliable heuristic’ don’t jibe well with me from an agile team member’s perspective, where stories are our metric for the amount of viable functionality that can be introduced to our customers.
The effort of checking and testing, and the cyclic manner in which information from that is passed along to developers and product owners, cannot be defined with a definition of done — I agree with that notion.
But from a story perspective, each one is defined with an objective, some business value, and a list of acceptance criteria the product owner expects. If all of these are met from both the product owner’s and the tester’s perspectives, and they agree on their understanding of the model — should we not be allowed to define a ‘definition of done’, where we should be able to virtually check off a story as completed?
Michael replies: You’re allowed to do anything you like; you don’t need my permission. But I’d recommend you pay attention to the fact that you seem compelled to have put the word “virtually” in there. I proposed “definition of not done yet” specifically to deal with that issue. We’re almost certainly NOT done unless some explicit objectives and criteria are met. But just because those criteria are met doesn’t mean certainly that we are done. That determination requires reflection and critical thinking to make sure that we’re not fooling ourselves.
Thanks Michael! Great analogy to crutches.
I fully agree that the test case (defined as environment, steps, expected results) does not help much. It puts you near, if not at, the “scripted” end of the exploratory–scripted continuum.
Michael replies: Yes. We now call that the formality continuum, by the way… and we’ve made it more clear that processes typically go from informal to formal, not the other way around.
I actually used to write them that way (knowing virtually nothing about software testing) and quickly noticed that it wouldn’t work. So I switched to a different structure; I like Michael’s term “test idea” for these objects. They now describe WHAT is to be tested, instead of HOW it shall be tested.
“How it should be tested” isn’t a terrible thing on its own, although I agree “what is to be tested” is more important. Trouble lurks when “how it should be tested” turns into a specific, explicit, over-focused procedure, and becomes a demonstration rather than an experiment.
I noticed that these ideas, when written down, actually bring some value: writing them down as soon as the product definition is delivered allows me to find blank spots in it. Obviously, some will only be found when actually using the product, but still – I am one step further.
It sounds to me like you’re talking about coverage of risks and corresponding test ideas; and that you’re keeping the door open to discovery and learning. That’s the ticket.
That said, I still believe that test cases will survive; I can think of heavily regulated environments, where exact steps must be followed to declare or assess compliance with something.
Is that really testing, though? Or is that demonstration?