To Go Deep, Start Shallow

Here are two questions that testers ask me pretty frequently:

How can I show management the value of testing?

How can I get more time to test?

Let’s start with the second question first. Do you feel overwhelmed by the product space you’ve been assigned to cover relative to the time you’ve been given? Are you concerned that you won’t have enough time to find problems that matter?

As testers, it’s our job to help to shine light on business risk. Some business risk is driven by problems that we’ve discovered in the product—problems that could lead to disappointed users, bad reviews, support costs… More business risk comes from deeper problems that we haven’t discovered yet, because our testing hasn’t covered the product sufficiently to reveal those problems.

All too often, managers allocate time and resources for testing based on limited, vague, and overly optimistic ideas about risk. So here’s one way to bring those risk ideas to light, and to make them more vivid.

  • Start by surveying the product and creating a product coverage outline that identifies what is there to be tested, where you’ve looked for problems so far, and where you could look more deeply for them. If you’ve already started testing, that’s okay; you can start your product coverage outline now.
  • As you go, develop a risk list based on bugs (known problems that threaten the value of the product), product risks (potential deeper, unknown problems in the product in areas that have not yet been covered by testing), and issues (problems that threaten the value of the testing work). Connect these to potential consequences for the business. Again, if you’re not already maintaining a risk list, you can start now.
  • And as you go, try performing some quick testing to find shallow bugs.
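A risk list needn't be elaborate. One hypothetical shape for its entries, sketched in Python purely for illustration (the field names and examples are mine, not a prescription), covers the three categories above:

```python
# Hypothetical risk-list entries; field names and examples are
# illustrative only. Each entry ties a problem to a business consequence.
risk_list = [
    {
        "kind": "bug",  # known problem that threatens the value of the product
        "summary": "Order total wrong when coupon applied twice",
        "consequence": "Refund costs; support calls; bad reviews",
    },
    {
        "kind": "product risk",  # potential unknown problem in an uncovered area
        "summary": "Payment retry logic untested under network loss",
        "consequence": "Possible double charges at scale",
    },
    {
        "kind": "issue",  # problem that threatens the value of the testing work
        "summary": "No test data for enterprise-tier accounts",
        "consequence": "Coverage gap on highest-revenue customers",
    },
]

# A quick sanity check that each category is represented.
kinds = {entry["kind"] for entry in risk_list}
```

Whatever form you choose, the essential move is the last column: connecting each item to a consequence the business cares about.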

By “quick testing”, I mean performing fast, inexpensive tests that take little time to prepare and little effort to perform. As such, small bursts of quick testing can be done spontaneously, even when you’re in the middle of a more deliberative testing process. Fast, inexpensive testing of this nature often reveals shallow, easy-to-find bugs.

In general, in a quick test, we rapidly encounter some aspect of the product, and then apply fast and easy oracles. Here are just a few examples of quick testing heuristics. I’ve given some of them deliberately goofy and informal names. Feel free to rename them, and to create your own list.

Blink. Load the same page in two browsers and switch quickly between them. Notice any significant differences?
Instant Stress. Overload a field with an overwhelming amount of data (consider PerlClip, BugMagnet or similar lightweight tools; or just use a text editor to create a huge string by copying and pasting); then try to save or complete the transaction. What happens?
Pathological Data. Provide data to a field that should trigger input filtering (reserved HTML characters, emojis…). Is the input handled appropriately?
Click Frenzy. Click in the same (or different) places rapidly and relentlessly. Any strange behaviours? Processing problems (especially at the back end)?
Screen Survey. Pause whatever you’re doing for a moment and look over the screen; see anything obviously inconsistent?
Flood the Field. Try filling each field to its limits. Is all the data visible? What were the actual limits? Is the team okay with them—or surprised to hear about them? What happens when you save the file or commit the transaction?
Empty Input. Leave “mandatory” fields empty. Is an error message triggered? Is the error message reasonable?
Ooops. Make a deliberate mistake, move on a couple of steps, and then try to correct it. Does the system allow you to correct your “mistake” appropriately, or does the mistake get baked in?
Pull Out the Rug. Start a process, and interrupt or undermine it somehow. Close the laptop lid; close the browser session; turn off wi-fi. If the process doesn’t complete, does the system recover gracefully?
Tug-of-War. Try grabbing two resources at the same time when one should be locked. Does a change in one instance affect the other?
Documentation Dip. Quickly open the spec or user manual or API documentation. Are there inconsistencies between the artifact and the product?
One Shot Stop. Try an idempotent action—doing something twice that should effect a change the first time, but not subsequent times, like upgrading an account status to the top tier and then trying to upgrade it again. Did a change happen the second time?
Zoom-Zoom. Grow or shrink the browser window (remembering that some people don’t see too well, and others want to see more). Does anything disappear?
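Several of these quick tests (Instant Stress, Pathological Data, Flood the Field) need only a scrap of throwaway code to generate their input. A minimal sketch in Python; the lengths and character choices are arbitrary examples to adapt, not recommendations:

```python
# Throwaway generators for quick-test input data. Lengths and
# character choices are arbitrary; tune them to the field under test.

def instant_stress(length=1_000_000):
    """A huge string for overloading a field (Instant Stress)."""
    return "A" * length

def pathological_data():
    """Strings that should trigger input filtering (Pathological Data)."""
    return [
        "<script>alert('x')</script>",    # reserved HTML characters
        "& < > \" '",                     # characters needing escaping
        "😀🎉🚀",                          # emojis
        "Robert'); DROP TABLE users;--",  # classic injection-style input
    ]

def flood_the_field(limit=255):
    """Strings exactly at, and one past, a declared limit (Flood the Field)."""
    return "x" * limit, "x" * (limit + 1)
```

Paste the output into the field, save or commit, and watch what happens; the point is speed and cheapness, not a durable test suite.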

It might be tempting for some people to dismiss shallow bugs. “That’s an edge case.” “No user will do that.” “That’s not the right way to use the product.” “The users should read the manual.” Sometimes those things might even be true. Dismissing shallow bugs too casually, without investigation, could be a mistake, though.

An image of a person panning for gold

Quick, shallow testing is like panning for gold: you probably won’t make much from the flakes and tiny nuggets on their own, but if you do some searching farther upstream, you might hit the mother lode. That is: shallow bugs should prompt at least some suspicion about the risk of deeper, more systemic problems and failure patterns in the product. In the coverage outline and risk list you’re developing, highlight areas where you’ve encountered those shallow bugs. Make these part of your ongoing testing story.

Now: you might think you don’t have time for quick testing, or to investigate those little problems that lead you to big problems. “Management wants me to finish running through all these test cases!” “Management wants me to turn these test cases into automated checks!” “Management needs me to fix all these automated checks that got out of sync with the product when it changed!”

If those are your assignments from management, you may feel like your testing work is being micromanaged, but is it? Consider this: if managers were really scrutinizing your work carefully, there’s a good chance that they would be horrified at the time you’re spending on paperwork, or on fighting with your test tools, or on trying to teach a machine to recognise buttons on a screen, only to push them repeatedly to demonstrate that something can work. And they’d probably be alarmed at how easily problems can get past these approaches, and they’d be surprised at the volume of bugs you’re finding without them—especially if you’re not reporting how you’re really finding the bugs.

Because managers are probably not observing you every minute of every day, you may have more opportunity for quick tests than you think, thanks to disposable time.

Disposable time, in the Rapid Software Testing namespace, is our term for time that you can afford to waste without getting into trouble; time when management isn’t actually watching what you’re doing; moments of activity that can be invested to return big rewards. Here’s a blog post on disposable time.

You almost certainly have some disposable time available to you, yet you might be leery about using it.

For instance, maybe you’re worried about getting into trouble for “not finishing the test cases”. It’s a good idea to cover the product with testing, of course, but structuring testing around “test cases” might be an unhelpful way to frame testing work, and “finishing the test cases” might be a kind of goal displacement, when the goal is finding bugs that matter.

Maybe your management is insisting that you create automated GUI checks, a policy arguably made worse by intractable “codeless” GUI automation tools that are riddled with limitations and bugs. This is not to say that automated checking is a bad thing. On the contrary; it’s a pretty reasonable idea for developers to automate low-level output checks that give them fast feedback about undesired changes. It might also be a really good idea for testers to exercise the product using APIs or scriptable interfaces for testing. But why should testers be recapitulating developers’ lower-level checks while pointing machinery at the machine-unfriendly GUI? As my colleague James Bach says, “When it comes to technical debt, GUI automation is a vicious loan shark.”

If you feel compelled to focus on those assignments, consider taking a moment or two, every now and again, to perform a quick test like the ones above. Even if your testing is less constrained and you’re doing deliberative testing that you find valuable, it’s worthwhile to defocus on occasion and try a quick test. If you don’t find a bug, oh well. There’s still a good chance that you’ll have learned a little something about the product.

If you do find a bug and you only have a couple of free moments, at least note it quickly. If you have a little more time, try investigating it, or looking for a similar bug nearby. If you have larger chunks of disposable time, consider creating little tools that help you to probe the product; writing a quick script to generate interesting data; popping open a log file and scanning it briefly. All focus and no defocus makes Jack—or Jill—a dull tester.
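The log-scanning idea above can be as small as a dozen lines. A disposable-time sketch in Python; the file name and keywords here are hypothetical, so substitute whatever your product actually logs:

```python
# A disposable-time log scan: count suspicious lines by keyword.
# The keywords and file name are hypothetical examples; use your own.
from collections import Counter

KEYWORDS = ("ERROR", "WARN", "Traceback", "timeout")

def scan_log(path):
    """Return a count of lines containing each suspicious keyword."""
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            for kw in KEYWORDS:
                if kw in line:
                    counts[kw] += 1
    return counts

# Usage sketch: print(scan_log("app.log").most_common())
```

A spike in one of those counters after a quick test is exactly the kind of clue that justifies deeper investigation.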

Remember: almost always, the overarching goal of testing is to evaluate the product by learning about it, with a special focus on finding problems that matter to developers, managers, and customers. How do we get time to do that in the most efficient way we can? Quick, shallow tests can provide us with some hints on where to suspect risk. Once found, those problems themselves can help to provide evidence that more time for deep testing might be warranted.

Several years ago, I was listening while James Bach was teaching a testing workshop. “If I find enough important problems quickly enough,” he said, “the managers and developers will be all tied up in arguing about how to fix them before the ship date. They’ll be too busy to micromanage me; they’ll leave me alone.”

You can achieve substantial freedom to captain your own ship of testing work when you consistently bring home the gold to developers and managers. The gold, for testers, is awareness and evidence of problems that make managers say “Ugh… but thank heavens that the tester found that problem before we shipped.”

If you’re using a small fraction of your time to find problems and explore more valuable approaches to finding them, no one will notice on those rare occasions when you’re not successful. But if you are successful, by definition you’ll be accomplishing something valuable or impressive. Discovering shallow bugs, treating them as clues that point us towards deeper problems, finding those, and then reporting responsibly can show how productive spontaneous bursts of experimentation can be. The risks you expose can earn you more time and freedom to do deeper, more valuable testing.

Which brings us back to the first question, way above: “How can I show management the value of testing?”

Even a highly disciplined and well-coordinated development effort will result in some bugs. If you’re finding bugs that matter—hidden, rare, elusive, emergent, surprising, important, bone-chilling problems that have got past the disciplined review and testing that you, the designers and the developers have done already—then you won’t need to do much convincing. Your bug reports and risk lists will do the convincing for you. Rapid study of the product space builds your mental models and points to areas for deeper examination. Quick, cheap little experiments help you to learn the product, and to find problems that point to deeper problems. Finding those subtle, startling, deep problems starts with shallow testing that gets deeper over time.

2 replies to “To Go Deep, Start Shallow”

  1. 17 years ago it occurred to a wise tester that when test plans, test scripts, and testers look for particular problems with excessive focus, they do so at the expense of peripheral vision.

    And with this post he has refined that idea once more:

    I guess the lesson here is that it is always worth it to revisit, question and reify promising ideas even from decades ago.

  2. I wholeheartedly support the idea of finding problems quickly to give management something to worry about. For a long time, this has been my answer to the problem of not having enough time to test: find a serious bug, make the manager renegotiate the shipping date, and suddenly – whoosh! – you have plenty of time to find even more.
