Testers investigate problems and risk. Other people manage the project, design the product, and write the code. As testers, we participate in that process, but in a special way and from a special perspective: it’s our primary job to anticipate, seek, and discover problems.
We testers don’t prevent problems in the product; we don’t design, build, fix, or manage the product. We’re evaluating things that other people are bringing to us. We can’t prevent problems in those things.
(“But Michael! Can’t testers fix code?! Can’t testers make suggestions about the design?!” Yes, yes, yes. Some testers, sometimes, dive in and fix a line or two of code. Some testers provide a design suggestion that gets taken up by the designer. Some testers spend half of their time working on the build process, rather than obtaining experience with the product. When testers do that, they have temporarily abandoned the testing role, and have entered the developer or designer role. They have moved, for the moment, from the critic role to the maker role. More on roles here.
There’s no rule against a goalie running downfield and attempting to score a goal. It’s important for the team to notice that, in that moment, the goalie has abandoned the goal; that anyone attempting to defend the goal can’t use hands on the ball; that the goalie probably doesn’t have the goal-scoring skills of the forwards; and, just as probably, no one else on the field has the level of goaltending skill that the goalie has. Things might be okay… or they might go badly wrong.)
Whatever we’re presented with (code, prototypes, designs, requirement documents, specifications, ideas, concepts…) probably comes with problems, and it’s our special stance to have faith that there are problems to be found. We haven’t prevented those problems. They exist already. The trick is to help the people building the product to recognize them.
That is: there is one problem that we can definitely help to prevent, and that’s ongoing oblivion to problems. We may help to prevent existing problems from going any farther, by discovering bugs, misunderstandings, issues, and risks, and bringing them to light. With our help, the people who build and manage the project can address the problems we have revealed, and prevent worse problems down the line. We don’t “put the quality in”. Let’s give due credit to the people who do.
Over the last while, I’ve been working with clients that are “shifting left”, “going Agile”, “doing DevOps”, or “getting testers involved early”. Typically this takes the form of having a tester present for design discussions, planning meetings, grooming sessions, and the like.
This is usually a pretty good idea; if there is no one in the testing role, people tend not to think very deeply about testing—or about problems or risk. That’s why, even if you don’t have someone called “tester” on the team, it’s an awfully good idea to have someone in the testing role and the testing mindset. Here, I’ll call that person “tester”.
Alas, I’ve observed that, once invited to the meetings, testers are sometimes uncertain about what they’re doing there.
A while back, I proposed at least four things for testers to do in planning meetings: learning; advocating for testability; challenging what we’re hearing; and establishing our roles as testers. These activities help to enable sensemaking and critical thinking about the product and the project. How can testers do these things successfully? Here’s a set of targeted questions.
What are we building? Part of our role as testers is to come to a clear understanding of the system, product, feature, function, component, or service that we’re being asked to test. (I’ll say “product” from here on, but remember I could be referring to anything in the list.) We could be talking about the product itself or a representation of it. We could be looking at a diagram of it; reviewing a document or description of it; evaluating a workflow; playing with a prototype. Asking for any of these can help if we don’t have them already. A beneficial side effect is helping to refine everyone’s understanding of the product—and how we’d achieve successful completion of the project or task.
So we might also ask: What will be there when we’ve built it? What are the bits and pieces? (Can we see a diagram?) What are the functions that the product offers; what should the product do? What gets input, processed, and output? (Do we have a data dictionary?) What does the product depend upon? What depends on the product? (Has someone prepared a list of dependencies? A list of what’s supported and what isn’t?)
For whom are we building it? If we’re building a product, we’re ultimately building it for people to use. Sometimes we make the mistake of over-focusing on a particular kind of user: the person who is immediately encountering the product, with eyes on screen and fingers on keyboard, mouse, or glass. Often, however, that person is an agent for someone else—for a bank teller’s application, think of the bank teller, but also think of the customer on the other side of the counter; the bank’s foreign exchange traders; the bank teller’s manager. Beyond using the product, there are other stakeholders: those who support it, connect to its APIs, test it, document it, profit from it, or defend it in court.
So we might also ask: Who else is affected by this product? Who do they work for, or with? What matters to them? (These questions are targeted towards operations and value-related testability.) Who will support the product? Maintain it? Test it? Document it?
What could go wrong? The most important questions for testers to raise are questions about problems and risks. Developers, designers, business people, or others might discuss features or functions, but people who are focused on building a product are not always focused on how things could go badly. Switching from a builder’s mindset to a tester’s mindset is difficult for builders. For testers, it’s our job.
So we might also ask: What Bad Things could happen? What Good Things could fail to happen? Under what conditions might they happen or not happen? What might be missing? What might be there when it shouldn’t be there? And for whom are we not building this product—like hackers or thieves?
When something goes wrong, how would we know? Once again, this is a question about testability, and also a question about oracles. As James Bach has said, “software testing is the infinite art of comparing the invisible to the ambiguous to prevent the unthinkable from happening to the anonymous”. For any non-trivial program, there’s a huge test space to cover, and bugs and failures don’t always announce themselves. Part of our job is to think of the unthinkable and to help those invisible things to become visible so that we can find problems—ideally in the lab before we ship. Some problems might escape the lab (or our continuous deployment checks, if we’re doing that).
So we might also ask: How might we miss something going wrong? What do we need for intrinsic testability? At the very least: log files, scriptable interfaces, and code that has been reviewed, tested, and fixed as it’s being built. And what about subjective testability? Do we have the domain knowledge to recognize problems? What help might we need to obtain that? Do we have the specialist skills—in (for example) security, performance, or tooling—on the team? Do we need help there? If we’re working in a DevOps context, doing live site testing or testing in production, how would we detect problems rapidly?
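As a concrete (and entirely hypothetical) illustration of what “log files and scriptable interfaces” can mean in practice, here is a minimal Python sketch. The function name, the currency scenario, and the log messages are my inventions for illustration, not anything from a real product; the point is only that a function which logs its decisions and is callable outside the GUI is far easier for a tester to probe.

```python
# Hypothetical sketch of intrinsic testability. The names here
# (convert_currency, the "teller" logger) are illustrative only.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("teller")

def convert_currency(amount_cents: int, rate: float) -> int:
    """Convert an amount of money using an exchange rate.

    Inputs, outputs, and rejections are logged, so that testers
    (and production monitoring) can see what actually happened
    rather than guessing from the screen.
    """
    if amount_cents < 0:
        log.error("rejected negative amount: %d", amount_cents)
        raise ValueError("amount must be non-negative")
    result = round(amount_cents * rate)
    log.info("converted %d cents at rate %.4f -> %d cents",
             amount_cents, rate, result)
    return result

# A scriptable interface: the same function can be driven from a
# test harness, a REPL, or a command line -- not only from a GUI.
if __name__ == "__main__":
    print(convert_currency(10_000, 1.3567))
```

Because the function is importable and logs what it does, a tester can exercise boundary cases (zero, negative amounts, extreme rates) in seconds, and the log file provides an account of behaviour when something goes wrong in the field.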
In sprint planning meetings, or design discussions, or feature grooming sessions, questions like these are important. Questions focused on problems don’t come naturally to many people, but asking such questions should be routine for testers. While everyone else is envisioning success, it’s our job to make sure that we’re anticipating failure. When everyone else is focused on how to build the product, it’s important for us to keep an eye on how the entire team can study and test it. When everyone else is creatively optimistic, it’s important for us to be pragmatically pessimistic.
None of the activities in planning and review replace testing of the product that is being built. But when we participate in raising problems and risks early on, we can help the team to prevent those problems—including problems that make testing harder or slower, allowing more bugs to survive undetected. Critical thinking now helps to enable faster and easier testing and development later.
Now a word from our sponsor: I help testers, developers, managers, and teams through consulting and training in Rapid Software Testing (RST). RST is a skill set and a mindset of testing focused on sharpening critical thinking, eliminating waste, and identifying problems that threaten the value of the product or the project, and the principles can be adapted to any development approach. If you need help with testing, please feel free to get in touch.
This post was touched up to cover some nicks in the paint on January 31 and February 1, 2023.