Breaking the Test Case Addiction (Part 1)

Recently, during a coaching session, a tester was wrestling with a mystery. She asked:

Why do some tech leaders (for example, CTOs, development managers, test managers, and test leads) jump straight to test cases when they want to provide traceability, share testing efforts with stakeholders, and share feature knowledge with testers?

I’m not sure. I fear that most of the time, fixation on test cases is simply due to ignorance. Many people literally don’t know any other way to think about testing, and have never bothered to try. Alarmingly, that seems to apply not only to leaders, but to testers, too. Much of the business of testing seems to limp along on mythology, folklore, and inertia.

Testing, as we’ve pointed out (many times), is not test cases; testing is a performance. Testing, as we’ve pointed out, is the process of learning about a product through exploration and experimentation, which includes, to some degree, questioning, study, modeling, observation, inference, and so forth. You don’t need test cases for that.

The obsession with procedurally scripted test cases is painful to see, because a mandate to follow a script removes agency, turning the tester into a robot instead of an investigator. Overly formalized procedures run a serious risk of over-focusing testing and testers alike. As James Bach has said, “testing shouldn’t be too focused… unless you want to miss lots of bugs.”

There may be specific conditions, elements of the product, notions of quality, interactions with other products, that we’d like to examine during a test, or that might change the outcome of a test. Keeping track of these could be very important. Is a procedurally scripted test case the only way to keep track? The only way to guide the testing? The best way? A good way, even?

Let’s look at alternatives for addressing the leaders’ desires (traceability, shared knowledge of testing effort, shared feature knowledge).

Traceability. It seems to me that the usual goal of traceability is to be able to narrate and justify your testing by connecting test cases to requirements. Viewed positively, making those connections helps to ensure that the tester isn’t wasting time on unimportant stuff.

On the other hand, testing isn’t only about confirming that the product is consistent with the requirements documents. Testing is about finding problems that matter to people. Among other things, that requires us to learn about things that the requirements documents get wrong or don’t discuss at all. If the requirements documents are incorrect or silent on a given point, “traceable” test cases won’t reveal problems reliably.

For that reason, we’ve proposed a more powerful alternative to traceability: test framing, which is the process of establishing and describing the logical connections between the outcome of the test at the bottom and the overarching mission of testing at the top.

Requirements documents and test cases may or may not appear in the chain of connections. That’s okay, as long as the tester is able to link the test with the testing mission explicitly. In a reasonable working environment, much of the time, the framing will be tacit. If you don’t believe that, pause for a moment and note how often test cases provide a set of instructions for the tester to follow, but don’t describe the motivation for the test, or the risk that informs it.

Some testers may not have sufficient skill to describe their test framing. If that’s so, giving test cases to those testers papers over the problem in an unhelpful and unsustainable way. A much better way to address it would, I believe, be to train and supervise the testers to be powerful, independent, reliable agents, with freedom to design their work and responsibility to negotiate it and account for it.

Sharing efforts with stakeholders. One key responsibility for a tester is to describe the testing work. Again, using procedurally scripted test cases seems to be a peculiar and limited means for describing what a tester does. The most important things that testers do happen inside their heads: modeling the product, studying it, observing it, making conjectures about it, analyzing risk, designing experiments… A collection of test cases, and an assertion that someone has completed them, don’t represent the thinking part of testing very well.

A test case doesn’t tell people much about your modeling and evaluation of risk. A suite of test cases doesn’t either, and typical test cases certainly don’t do so efficiently. A conversation, a list, an outline, a mind map, or a report would tend to be more fitting ways of talking about your risk models, or the processes by which you developed them.

Perhaps the worst aspect of using test cases to describe effort is that tests—performances of testing activity—become reified, turned into things, widgets, testburgers. Effort becomes recast in terms of counting test cases, which leads to no end of mischief.

If you want people to know what you’ve done, record and report on what you’ve done. Tell the testing story, which is not only about the status of the product, but also about how you performed the work, and what made it more and less valuable; harder or easier; slower or faster.

Sharing feature knowledge with testers. There are lots of ways for testers to learn about the product, and almost all of them would foster learning better than procedurally scripted test cases. Giving a tester a script tends to focus the tester on following the script, rather than learning about the product, how people might value it, and how value might be threatened.

If you want a tester to learn about a product (or feature) quickly, provide the tester with something to examine or interact with, and give the tester a mission. Try putting the tester in front of

  • the product to be tested (if that’s available)
  • an old version of the product (while you’re waiting for a newer one)
  • a prototype of the product (if there is one)
  • a comparable or competitive product or feature (if there is one)
  • a specification to be analyzed (or compared with the product, if it’s available)
  • a requirements document to be studied
  • a standard to review
  • a user story to be expanded upon
  • a tutorial to walk through
  • a user manual to digest
  • a diagram to be interpreted
  • a product manager to be interviewed
  • another tester to pair with
  • a domain expert to outline a business process

Give the tester the mission to learn something based on one or more of these things. Require the tester to take notes, and then to provide some additional evidence of what he or she learned.

(What if none of the listed items is available? If none of that is available, is any development work going on at all? If so, what is guiding the developers? Hint: it won’t be development cases!)

Perhaps some people are concerned not that there’s too little information, but that there’s too much. A corresponding worry might be that the available information is inconsistent. When important information about the product is missing, unclear, or inconsistent, that is itself a test result carrying important information about the project. Bugs breed in those omissions and inconsistencies.

What could be used as evidence that the tester learned something? In addition to providing notes, the tester could

  • have a conversation with a test lead or test manager
  • provide a report on the activities the tester performed, and what the tester learned (that is, a test report)
  • produce a description of the product or feature, bugs and all (see The Honest Manual Writer Heuristic)
  • offer proposed revisions, expansions, or refinements of any of the artifacts listed above
  • identify a list of problems about the product that the tester encountered
  • develop a list of ways in which testers might identify inconsistencies between the product and something desirable (that is, a list of useful oracles)
  • report on a list of problems that the tester had in fulfilling the information mission
  • in a mind map, outline a set of ideas about how the tester might learn more about the product (that is, a test strategy)
  • list out a set of ideas about potential problems in the product (that is, a risk list)
  • develop a set of ideas about where to look for problems in the product (that is, a product coverage outline)

Then review the tester’s work. Provide feedback, coaching and mentoring. Offer praise where the tester has learned something well; course correction where the tester hasn’t. Testers will get a lot more from this interactive process than from following step-by-step instructions in a test case.

My coaching client had some more questions about test cases. We’ll get to those next time.

11 replies to “Breaking the Test Case Addiction (Part 1)”

  1. There is one case where I find writing a *Sample* of Test Cases useful, and that is as a method of more detailed review of feature requirement design: when we think in detail about what we will test, we may find issues before the feature is even implemented, and thus save project time.

    Michael replies: that sounds like a good idea to me in principle, but do you need test cases for that? Would risk lists, examples of particular inputs, coverage outlines, descriptions of test conditions, etc., enable that just as easily or more efficiently?

    We often gain additional information as to required tools and setup to handle the test – allowing us to prepare these in advance.

    Again, that’s great. It seems to me that there are plenty more ways of describing those things other than test cases.

    So I often see the need for a detailed sample test case per group of test cases, or per general area of testing, designed in advance. That does not mean the test case will be followed as written when the feature eventually becomes available, nor that those will be the only tests performed.

    OK; is there a medium—a form for representing your ideas—that’s less formal, more flexible, and less expensive?

    @halperinko – Kobi Halperin

    Thanks for writing, Kobi.

  2. And finally someone said it :).

    In my last project, when we were building a test automation framework for our system, one of the things we did was to analyze which parts of the test tooling were useful and which were not.

    Apart from building our automation framework in SoapUI (with Groovy), we also created a traceability tool in Excel for test cases (with VBA).

    I remember starting with four sheets for traceability (Requirements – Test Cases – Execution Summary – Report tabs).

    • At the end of the first sprint, we realized there was no point in mapping the report results from our scripts into this Excel file (no added value), so that sheet and its associated macros were thrown away.
    • In the second sprint, I asked whether it made sense to map our automation runs (hundreds each day) in Excel. Also no. The next sprint, that sheet and its associated macros were thrown away too (leaving us with two sheets).
    • Then I realized that the test cases were directly related to the requirements, so there was no point in keeping two sheets. I created a macro to parse the Word documents into Excel, added some custom columns relevant to test cases, and used this sheet to decide which tests to do and which to skip. So now I was parsing requirements and writing test cases for them in one sheet (two sheets merged into one).

    I was really feeling good about it at that point. Reading the parsed documents in Excel gave me a clear view of what was important for testing and what was not, so this clearly added value over reading those documents in Word.

    I thought we had reached the optimum state. Then, the next sprint, I realized that the test cases I was writing in Excel for these requirements didn’t really add much value. In the end I was creating SOAP requests for those requirements and test cases, and they lived in a stubbing tool, so duplicating them in the sheet meant maintenance.
    Did I want to do that? I wasn’t sure, so I decided to wait and see how we worked over a period of time, and whether the sheet added any value. Over time, I realized it didn’t.

    So away went that beautiful sheet and all its associated macros.

    In the end, I used the tool only to parse the Word documents and to see which areas were covered and which were not (a one-time use). That is what it was.

    Creating test cases meant maintaining test cases. In a dynamic world, we wanted to focus on coverage and execution, not on maintaining words in Excel.
    All our “real” test cases were the requests used against our APIs for testing. We didn’t need yet another elaborate tool to write all that down.

    In the end, good testing is not just about doing more; it is also about finding the things that add little value to testing and eliminating them.

    Michael replies: Indeed. There’s more than enough important work to do. Let’s eliminate stuff that isn’t important to anyone, and produce the least documentation that still completely satisfies the mission.

    Thanks for the story.

  3. This came up on one of the Slack channels and it was suggested [by me —MB] I share my comment here as well.

    I share a lot of good articles like these across the company; with this one, for some reason, I felt the need to add a little context for the developers, as follows. I’m sharing it here in case it’s of value to anyone else balancing the slightly different angles of developer and tester.

    “Developers and testers often quite rightly look at testing from very different perspectives. The different views on the need for test cases is an example of this.

    “Developer tests in my own personal experience often have a bias towards known risks that you can generally ‘assert’ and are really suited to be automated in a continuous integration approach. Here test cases make a lot of sense and add a lot of value.

    “Our testers, though, tend to have more of a bias towards unknowns, and the discovery and investigation of those sorts of risks. Here the approach tends to cover the ‘assertion’ elements as a natural byproduct of the investigation rather than as its focus. Test cases here really do not serve a lot of value in my view, and in fact can be damaging and wasteful to the testers’ mission.

    “When it comes to automation, though, the testers switch hats to be developers, and the thinking switches to known risks; here, as mentioned, test cases do make sense.”

    Michael replies: Thanks for the comment, Andy. I think it’s quite helpful to point out the difference between the developer/builder’s context and the tester’s context.

    (A test condition is something that can be examined in a test, or that could change the outcome of a test.) When there are specific conditions to be checked, and those conditions can be set up, observed, and evaluated mechanistically and algorithmically, then encoding the check might make a good deal of sense. Checking is especially feasible and practical at lower levels, closer to the code, where conditions tend to be simpler and more controllable. An unreliable component below can greatly reduce the reliability of the system above.
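    For instance, when a condition can be evaluated deterministically, encoding the check is cheap. Here is a minimal sketch in Python; the rounding rule and the `round_to_cents` function are hypothetical, invented purely to illustrate specific conditions being checked algorithmically:

```python
from decimal import Decimal, ROUND_HALF_UP

def round_to_cents(amount: str) -> Decimal:
    """Round a currency amount to two decimal places, ties away from zero."""
    return Decimal(amount).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Each pair names a specific test condition: (input, expected output).
conditions = [
    ("2.345", "2.35"),    # a tie rounds away from zero
    ("2.344", "2.34"),    # below the midpoint rounds down
    ("-2.345", "-2.35"),  # negative amounts mirror positive ones
    ("0", "0.00"),        # boundary: zero
]

for value, expected in conditions:
    actual = round_to_cents(value)
    assert actual == Decimal(expected), f"{value}: expected {expected}, got {actual}"
```

    (Using Decimal rather than float here avoids binary floating-point surprises at the tie values; that choice is itself the kind of low-level detail such checks can pin down.)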

    Interestingly, though, the converse doesn’t hold: even when highly reliable components are integrated into a complex system (especially one that includes direct human control), the system can fail in ways that surprise its builders. Finding those surprises requires exploration, experimentation, and experience with the system.

    Your comment reminded me to talk about test conditions; I do that a little in Part 5 of this series. So thanks again.

