Very Short Blog Posts (1): “We Don’t Have Time for Testing”

When someone says “We don’t have time for testing”, try translating that into “We don’t have time to think critically about the product, to experiment with it, and to learn about ways in which it might fail.” Then ask if people feel okay about that.

7 replies to “Very Short Blog Posts (1): “We Don’t Have Time for Testing””

  1. When someone tells me they don’t have time for testing, I just look at them and say, “Then how much money are you willing to lose?”

    Jim Hazen

  2. Reminds me of the old Zen adage: “You should sit in meditation for twenty minutes every day – unless you’re too busy; then you should sit for an hour.”
    It makes you wonder what other shortcuts they may have taken, and the frustrations that this has caused them or will cause in the future.

  3. I just say, “Great, let’s ship it!” That’s usually enough for them to say, “Well… let’s not make any rash decisions.” Funny how that works out.

  4. With short-sightedness, combined with multiple layers of management, carefully blended with a dash of fear of telling your superior “no” (or, more accurately, “yes, but here’s when I can, and what I need to do that”), in practice they will often go ahead anyway without the requisite testing. Endemic in this industry!

    It’s usually in version 2 that you get the opportunity to genuinely discuss sensible testing scope, after fingers have been burnt. That is the time when the “Total Cost of Ownership” and “Technical Debt” conversations come to the fore.

    Michael replies: This is why it’s so important for testers to learn how to tell the three-part testing story, and to contextualize that with respect to risk for the product, the project, and the business. A nice quote from Carl Richards: “Risk is what’s left over after we think we’ve thought of everything.”

    That said, those Test Managers who promote 100% exhaustive coverage can be their own worst enemy too, and discredit the craft.

    By promoting it, or by expressing a belief that it’s possible?

  5. I think that this is potentially a very powerful technique, in that it uncovers the unstated assumptions that we need to accept in order to go along with a decision to prioritise an earlier release date over achieving good enough test coverage.

    Michael replies: Yes—yet another job of a good tester is to dig up buried assumptions and bring them to light. Be careful, though: since we’re in service to the product owners, “good enough” is their decision, not ours. We can certainly inform their notions of good enough, though.

    I can see some good potential for useful conversations as a result.

    In the past I have used the “Well we are going to test it anyway” approach, and that has worked OK. But the context has always been that we are dealing with people who understand and respect the test team, and the way we work.

    One would have to be careful there too. If you find an important bug in the area where you’re looking, you’ll be a hero—but if you miss one because you were looking in the wrong place against your client’s will, you’ll be a goat. So I agree that your most important observation here is the potential for useful conversations.

    Thanks for the comment.

    • Hi Michael,

      The context that I work in is one of ongoing product development, rather than bespoke software development. In general we have a real track record with the products being tested – and people recognise that and know that when we say “Well we are going to test it anyway”, that is not really us going off the reservation and tossing our toys away, but instead a strong signal that we need to talk more.

      And yes – completely accept that ultimately good enough is the decision of the product owner. But I always expect testers to have an opinion here, and to voice it when they need to.



  6. If you use the Agile system there is no choice but to test. You can’t deliver even the start of a UI until it’s tested. If you follow Agile directly then NOTHING gets to the field without testing. There should be no option.

    Michael replies: That’s the theory, if you choose to express it simply enough. However, I’ve noticed a few things. First, you can declare whatever you like as being Agile or not, but other people will do what they will. (As an excellent example, I seem to remember something important in XP about Sustainable Pace—which means that I should never hear from testers who are feeling overwhelmed because the programmers are outpacing them. But I do.) Second, some things always get to the field without testing. You cannot test every possible valid value. The valid values are in an intractably large set, except for the most trivial programs—and the set of invalid values is infinite. You can’t test every combination of values, timing, and sequence. You usually can’t test on every conceivable platform that your software will run on. You usually can’t model every user, nor every condition those users will find themselves in. And while you might be very assiduous about testing your own code, do you test the code of the third-party libraries upon which it depends?

    But that’s okay—or at least it’s inevitable. We cannot test everything, since no one knows how to do it, no one has the infinite time it would take, and for sure no one wants to fund it. The challenge, and the interesting part, is to figure out what’s so important that it must be tested, what’s so trivial that it’s not worth bothering about, and that super-tricky bit in between, which is where testing and management and developer skill and risk all intersect.
