
Four (and More) Questions for Testers to Ask

Testers investigate problems and risk. Other people manage the project, design the product, and write the code. As testers, we participate in that process, but in a special way and from a special perspective: it’s our primary job to anticipate, seek, and discover problems in products.

It’s probably a good idea to clear up some possible ambiguity here. When I’m talking about a product, I’m talking about anything that some person or group has produced. That can be a running, full-blown product or service that we make available to customers; let’s call that the Product, with a big P. Developing a Product requires us to develop other little-p products along the way: components, features, stories, designs, requirement documents, prototypes, flowcharts, artwork, specifications, wireframes, files of CSS, plans, ideas… you name it. People are producing these things; these things are products.

We testers don’t have the capacity to prevent problems in Products or products, except in our own products. We don’t design, build, fix, or manage the products that other people are developing. We’re evaluating things that other people are bringing to us. We can’t prevent problems in those things.

(“But Michael! Can’t testers fix code?! Can’t testers make suggestions about the design?!” Yes, yes, yes. Some testers, sometimes, may dive in and fix a line or two of code. Some testers provide a design suggestion that gets taken up by the designer. Some testers spend half of their time working on the build process, rather than obtaining experience with the product. When testers do anything that isn’t testing work, they have temporarily abandoned the testing role, and have entered the developer or designer role. They have moved, for the moment, from the critic role to the maker role. More on roles here.

There’s nothing intrinsically wrong with a tester doing any other kind of work. There’s also no rule against a goalie running downfield and attempting to score a goal. It’s important for the team to notice that, in that moment, the goalie has abandoned the goal; that anyone attempting to defend the goal can’t use hands on the ball; that the goalie probably doesn’t have the goal-scoring skills of the forwards; and, just as probably, no one else on the field has the level of goaltending skill that the goalie has. Things might be okay… or they might go badly wrong.)

Whatever product we’re presented with probably comes with problems. As testers, it’s our special stance to have faith that there are problems to be found. We haven’t prevented those problems. They exist already. The trick is to help the people building the product to recognize them.

That is: there is one problem that we can definitely help to prevent, and that’s ongoing oblivion to problems. We may help to prevent existing problems from going any further, by discovering bugs, misunderstandings, issues, and risks, and bringing them to light. With our help, the people who build and manage the project can address the problems we have revealed, and prevent worse problems down the line. We don’t “put the quality in”. Let’s give due credit to the people who do.


Over the last while, I’ve been working with clients that are “shifting left”, “going Agile”, “doing DevOps”, or “getting testers involved early”. Typically this takes the form of having a tester present for design discussions, planning meetings, grooming sessions, and the like.

This is usually a pretty good idea; if there is no one in the testing role, people tend not to think very deeply about testing—or about problems or risk. That’s why, even if you don’t have someone called “tester” on the team, it’s an awfully good idea to have someone in the testing role and the testing mindset. Here, I’ll call that person “tester”.

Alas, I’ve sometimes observed that, once invited to the meetings, testers are uncertain about what they’re doing there.

A while back, I proposed at least four things for testers to do in planning meetings: learning; advocating for testability; challenging what we’re hearing; and establishing our roles as testers. These activities help to enable sensemaking and critical thinking about the product and the project. How can testers do these things successfully? Here’s a set of targeted questions.

What are we building? Pretty much everyone around the table is considering that. Part of the tester role is to help our clients come to a clearer understanding of the product that we’re being asked to test. Remember, I could be referring here to any kind of product in the list way above. We could be talking about the Product itself or another product that is a component of it, or that represents or describes it. We could be looking at a diagram of it; reviewing a document or description; evaluating a workflow; playing with a prototype; discussing an idea.

When we’re reviewing a Product, asking for any of the products I’ve just mentioned can help us, as testers, if we don’t have them already. A beneficial side effect is helping to refine everyone’s understanding of the Product—and how we’d achieve successful completion of the project or task.

So we might also ask: What will be there when we’ve built it? What are the bits and pieces? (Can we see a diagram?) What are the functions that the product offers; what should the product do? What gets input, processed, and output? (Do we have a data dictionary?) What does the product depend upon? What depends on the product? (Has someone prepared a list of dependencies? A list of what’s supported and what isn’t?)

Who are we building it for? Whatever kind of product we’re building, we’re ultimately building it for people to use. Sometimes we make the mistake of over-focusing on a particular kind of user: the person who is immediately encountering the product, with eyes on screen and fingers on keyboard, mouse, or glass. Often, however, that person is an agent for someone else—for a bank teller’s application, think of the bank teller, but also think of the customer on the other side of the counter; the bank’s foreign exchange traders; the bank teller’s manager. Beyond using the product, there are other stakeholders: those who support it, connect to its APIs, test it, document it, profit from it, or defend it in court.

So we might also ask: Who else is affected by this product? Who do they work for, or with? What matters to them? (These questions are targeted towards operations and value-related testability.) Who will support the product? Maintain it? Test it? Document it? We could also ask: who are we not building it for? Who are we forgetting? Are we forgetting novice users, or experts, or people with disabilities, or people from other countries? And who are our disfavoured users, like hackers, or data thieves, or ransomware artists? Who do we want to avoid building it for?

What do they want from it? Pretty much everyone around the table has a general sense of this too. The key is to get deeper. It’s easy to focus on capability, or functionality, and forget the other things that people care about: reliability, usability, charisma, security, scalability, compatibility, performance, installability. When we remember the stakeholders at the table around us or in our (sometimes virtual) office building, we might also consider quality criteria that matter to the business and the development group: supportability, testability, maintainability, portability, and localizability.

So we might also ask: What else do people want from this product? And what do they not want from it?

What could go wrong? The most important questions for testers to raise are questions about problems and risks. Developers, designers, business people, or others might discuss features or functions, but people who are focused on building a product are not always focused on how things could go badly. Switching from a builder’s mindset to a tester’s mindset is difficult for builders. For testers, it’s our job.

So we might also ask: What Bad Things could happen? What Good Things could fail to happen? Under what conditions might they happen or not happen? What might be missing? What might be there when it shouldn’t be there? What quality criteria might be threatened?

When something goes wrong, how would we know? Once again, this is a question about testability, and also a question about oracles. As James Bach has said, “software testing is the infinite art of comparing the invisible to the ambiguous to prevent the unthinkable from happening to the anonymous”. For any non-trivial program, there’s a huge test space to cover, and bugs and failures don’t always announce themselves. Part of our job is to think of the unthinkable and to help those invisible things to become visible so that we can find problems—ideally in the lab before we ship. Some problems might escape the lab (or our continuous deployment checks, if we’re doing that).

So we might also ask: How might we miss something going wrong? What do we need for intrinsic testability? At the very least: log files, scriptable interfaces, and code that has been reviewed, tested, and fixed as it’s being built. And what about subjective testability? Do we have the domain knowledge to recognize problems? What help might we need to obtain that? Do we have the specialist skills—in (for example) security, performance, or tooling—on the team? Do we need help there? If we’re working in a DevOps context, doing live site testing or testing in production, how would we detect problems rapidly?
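To make the intrinsic-testability point concrete, here’s a minimal sketch (in Python, with entirely hypothetical names—nothing here comes from any particular product): a function that is callable from a script, and that emits structured log entries, is far easier to probe and to monitor than the same behaviour buried behind a GUI.

```python
import json
import logging

# Hypothetical example for illustration only: a currency-conversion
# function exposed as a plain, scriptable entry point, with structured
# logging so that a tester (or a production monitor) can observe what
# actually happened and when.
logger = logging.getLogger("fx")

def convert(amount, rate):
    """Convert an amount of money using the given exchange rate."""
    if amount < 0 or rate <= 0:
        # Log the rejected input before failing, so the problem
        # leaves a visible trace rather than vanishing silently.
        logger.warning(json.dumps(
            {"event": "rejected", "amount": amount, "rate": rate}))
        raise ValueError("amount must be >= 0 and rate must be > 0")
    result = round(amount * rate, 2)
    logger.info(json.dumps(
        {"event": "converted", "amount": amount,
         "rate": rate, "result": result}))
    return result
```

Because `convert` can be driven directly from a script or a test harness, checks don’t have to fight through the user interface; and because it logs structured events, a problem that escapes the lab still leaves evidence behind in production.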

In sprint planning meetings, or design discussions, or feature grooming sessions, questions like these are important. Questions focused on problems don’t come naturally to many people, but asking such questions should be routine for testers. While everyone else is envisioning success, it’s our job to make sure that we’re anticipating failure. When everyone else is focused on how to build the product, it’s important for us to keep an eye on how the entire team can study and test it. When everyone else is creatively optimistic, it’s important for us to be pragmatically pessimistic.

None of the activities in planning and review replace testing of the product that is being built. But when we participate in raising problems and risks early on, we can help the team to prevent those problems—including problems that make testing harder or slower, allowing more bugs to survive undetected. Critical thinking now helps to enable faster and easier testing and development later.

Now a word from our sponsor: I help testers, developers, managers, and teams through consulting and training in Rapid Software Testing (RST). RST is a skill set and a mindset of testing focused on sharpening critical thinking, eliminating waste, and identifying problems that threaten the value of the product or the project, and the principles can be adapted to any development approach. If you need help with testing, please feel free to get in touch.

This post was touched up to cover some nicks in the paint on January 31 and February 1, 2023; then again on July 16, 2024.
