A little while ago, I took a look at what happens when a check runs red. Since then, comments and conversations with colleagues emphasized this point from the post: it’s overwhelmingly common first to doubt the red result, and then to doubt the check. A red check almost provokes a kind of panic for some testers, because it takes away a green check’s comforting—even narcotic—confirmation that Everything Is Going Just Fine.
Skepticism about any kind of test result is reasonable, of course. Before delivering painful news, it’s natural and responsible for a tester to examine the evidence for it carefully. All software projects—and all decisions about quality—are to some degree loaded with politics and emotions. This is normal.
When a tester’s technical and social skills are strong, and self-esteem is high, those political and emotional considerations are manageable. When we encounter a red check—a suggestion that there might be a problem in the product—we must be prepared for powerful feelings, potential controversy, and cognitive dissonance all around. When people feel politically or emotionally vulnerable, the cognitive dissonance can start to overwhelm the desire to investigate the problem.
Several colleagues have recalled circumstances in which intermittent red checks were considered sufficiently pesky by someone on the project team—even by testers themselves, on occasion—that the checks were ignored or disabled, as one might do with a cooking detector.
So what happens when checks consistently return “green” results?
As my colleague James Bach puts it, checks are like motion detectors around the boundaries of our attention. When the check runs green, it’s easy to remain relaxed. The alarm doesn’t sound; the emergency lighting doesn’t come on; the dog doesn’t bark. If we’re insufficiently attentive and skeptical, every green check helps to confirm that everything is okay.
Kirk and Miller identified a big problem with confirmation:
Most of the technology of “confirmatory” non-qualitative research in both the social and natural sciences is aimed at preventing discovery. When confirmatory research goes smoothly, everything comes out precisely as expected. Received theory is supported by one more example of its usefulness, and requires no change. As in everyday social life, confirmation is exactly the absence of insight. In science, as in life, dramatic new discoveries must almost by definition be accidental (“serendipitous”). Indeed, they occur only in consequence of some mistake.
Kirk, Jerome, and Marc L. Miller. Reliability and Validity in Qualitative Research (Qualitative Research Methods). Thousand Oaks, CA: Sage Publications, 1986.
It’s the relationship between the checks and our models of them that matters here. When we have unjustified trust in our checks, we have the opposite of the problem we have with the cooking detector: we’re unlikely to notice that the alarm doesn’t go off when it should. That is, we don’t pay attention.
The good news is that being inattentive is optional. We can choose to hold on to the possibility that something might be wrong with our checks, and to treat the absence of red checks as meta-information: a suspicious silence, instead of a comforting one. The responsible homeowner checks the batteries in the smoke alarm, and the savvy explorer knows when to say “The forest is quiet tonight… maybe too quiet.”
By putting variation into our testing, we rescue ourselves from the possibility that our checks are too narrow, too specific, or cover too few kinds of risk. If you’re aware of the possibility that your alarm clock might fail to wake you, you’re more likely to take alternative measures to avoid sleeping too long.
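To make the idea concrete, here’s a minimal sketch in Python. The function, its bug, and the check names are all hypothetical, invented for illustration; the point is only that a check fixed on one specific input can run green forever while a bug sits just outside its boundary, and that varying the inputs widens the area the check patrols.

```python
def add(a, b):
    # Hypothetical function under test, with a deliberate
    # (and invisible-to-a-narrow-check) bug for negative a.
    return a + b if a >= 0 else a - b


def narrow_check():
    # One fixed, specific input: this check will run green
    # forever, and the comforting silence tells us nothing.
    return add(2, 3) == 5


def varied_check():
    # The same expectation, swept across varied inputs.
    # The variation is what gives the silence meaning.
    for a in range(-10, 11):
        for b in range(-10, 11):
            if add(a, b) != a + b:
                return False  # red: there's a problem to investigate
    return True


print(narrow_check())  # green
print(varied_check())  # red
```

Property-based testing tools take this further by generating the varied inputs for you, but even a hand-rolled sweep like this one changes the meaning of a green result.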
Valuable conversations with James Bach and Chris Tranter contributed to this post.