More of What Testers Find

Damn that James Bach, for publishing his ideas before I had a chance to publish his ideas! Now I’ll have to do even more work!

A couple of weeks back, James introduced a few ideas to me about things that testers find in addition to bugs.  He enumerated issues, artifacts, and curios.  The other day I was delighted to find an elaboration of these ideas (to which he added risks and testability issues) in his blog post called What Testers Find.  Delighted, because it notes so many important things that testers learn and report beyond bugs.  Delighted, because it gives me an opportunity and an incentive to dive into James’ ideas more deeply. Delighted, because it gives us all a chance to explore and identify a much richer view of testing than the simplistic notion that “testers find bugs”.

Despite the fact that testers find much more than bugs, let’s start with bugs.  James begins his list of what testers find by saying

Testers find bugs. In other words, we look for anything that threatens the value of the product.

How do we know that something threatens the value of the product?  The fact is, we don’t know for sure.  Quality is value to some person, and different people will have different perceptions of value.  Since we don’t own the product, the project, or the business, we can’t make absolute declarations of whether something is a bug or whether it’s worth fixing.  The programmers, the managers, and the project owner will make those determinations, and often they’re running in different directions.  Some will see a problem as a bug; some won’t.  Some won’t even see a problem. It seems like the only certain thing here is uncertainty.  So what can we testers do?

We find problems that might threaten the value of the product to some person who matters. How do we do that? We identify quality criteria–aspects of the product that provide some kind of value to customers or users that we like, or that help to defend the product from users that we don’t like, such as unethical hackers or fraudsters or thieves.  If we’re doing a great job, we also try to account for the fact that users we do like will make mistakes from time to time.  So defending value also means making the product robust to human ineptitude and imperfection.  In the Heuristic Test Strategy Model (which we teach as part of the Rapid Software Testing course), we identify these quality criteria:

  • Capability (or functionality)
  • Reliability
  • Usability
  • Security
  • Scalability
  • Performance
  • Installability
  • Compatibility
  • Supportability
  • Testability
  • Maintainability
  • Portability
  • Localizability

In order to identify threats to the quality of the product, we use oracles.  Oracles are heuristic (useful, fast, inexpensive, and fallible) principles or mechanisms by which we recognize problems.  Most oracles are based on the notion of consistency.  We expect a product to be consistent with

  • History (the product’s own history, prior results from earlier test runs, our experience with the product or other products like it…)
  • Image (a reputation our development organization wants to project, our brand identity,…)
  • Comparable products (products like this one that we develop, competitors’ products, test programs or algorithms,…)
  • Claims (things that important people say about the product, requirements, specifications, user documentation, marketing material,…)
  • User expectations (what reasonable people might anticipate the product could or should do, new features, fixed bugs,…)
  • Product (behaviour of the interface and UI elements, values that should be the same in different views,…)
  • Purpose (explicitly stated uses of the product, uses that might be implicit or inferred from the product’s design, no excessive bells and whistles,…)
  • Standards (relevant published guidelines, conventions for use or appearance for products of this class or in this domain, behaviour appropriate to the local market,…)
  • Statutes (relevant laws, relevant regulations,…)

In addition to these consistency heuristics, there’s an inconsistency heuristic too:  we’d like the product to be inconsistent with patterns of problems that we’ve seen before.  Typically those problems are founded in one of the consistency heuristics listed above. Yet it’s perfectly reasonable to observe a problem and recognize it first by its familiarity. We’ve seen lots of testers do that over the years.
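To make the consistency idea concrete, here is a minimal sketch (in Python, with hypothetical names and data) of a “consistency with history” oracle: it compares this run’s observations against results saved from an earlier run and flags inconsistencies.  Like any oracle, it is heuristic–the flagged items are prompts for a tester to investigate, not verdicts that a bug exists.

```python
# Sketch of a "consistency with history" oracle (hypothetical names).
# An oracle is heuristic: useful, fast, inexpensive, and fallible.
# A flagged inconsistency is a prompt to investigate, not proof of a bug.

def history_oracle(current, previous):
    """Compare this run's observations with a prior run's.

    Both arguments are dicts mapping an observation name to a value.
    Returns a list of inconsistencies worth investigating.
    """
    concerns = []
    for name, prior_value in previous.items():
        if name not in current:
            concerns.append(f"{name}: observed before, missing now")
        elif current[name] != prior_value:
            concerns.append(
                f"{name}: was {prior_value!r}, now {current[name]!r}")
    return concerns

# Hypothetical observations from two test runs.
previous_run = {"login_time_ms": 120, "default_locale": "en-CA"}
current_run = {"login_time_ms": 4800}

for concern in history_oracle(current_run, previous_run):
    print(concern)
```

The same skeleton works for the other consistency heuristics: swap the “previous run” for a comparable product’s output, a claim from the specification, or a relevant standard, and the comparison becomes a comparable-products, claims, or standards oracle instead.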

We encourage people to come up with their own lists, or modifications to ours. You don’t have to use the Heuristic Test Strategy Model if it doesn’t work for you.  You can create your own models for testing, and we actively encourage people who want to become great testers to do that.  Testers find models, ways of looking at the product, the project, and testing itself, in the effort to wrestle down the complexity of the systems we’re testing and the approaches that we need to test them.

In your context, do you see a useful distinction between compatibility (playing nice with other programs that happen to co-exist on the system) and interoperability (working well with programs with which your application specifically interacts)?  Put interoperability on your quality criteria list.  Is accessibility for disabled users so important for your product that you want to highlight it in a separate quality criterion?  Put it on your list. Recently, James noticed that explicability is a consistency heuristic that can act as an oracle too:  when we see behaviour we can’t explain or make sense of, we have reason to suspect that there might be a problem.  Testers find factors, relevant and material aspects of our models, products, projects, businesses, and test strategies.

When testers see some inconsistency in the product that threatens one or more of the quality criteria, we report.  For the report to be relevant and meaningful, it must link quality criteria, oracles, and risk in ways that are clear, meaningful, and important to our clients. Rather than simply noticing an inconsistency, we must show why the inconsistency threatens some quality criterion for some person who matters.  Establishing and describing those links in a chain of logic from the test mission to the test result is an activity that James and I call test framing.  So:  Testers find frames, the logical relationships between the test mission, our observations of the product, potential problems, and why we think they might be problems. James gave an example of a bug (“a list of countries in a form is missing ‘France’”). That might mean a minor usability problem based on one quality criterion, with a simple workaround (the customer trying to choose a time zone from a list of countries presented as examples; so pick Spain, which is in the same time zone). Based on another criterion like localizability, we’d perceive a more devastating problem (the customer is trying to choose a language, so despite the fact that the Web site has been translated, it won’t be presented in French, cutting our service off from a nation of 65 million people).
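The “France” example can be sketched as a consistency-with-claims check, using entirely hypothetical data: the same missing entry shows up under two different framings, and which gap matters more depends on the quality criterion at risk.

```python
# Hypothetical sketch: the same observation (an entry missing from a
# list) is framed differently depending on the quality criterion at risk.

claimed_countries = {"Spain", "Germany", "Italy", "France"}  # from the spec
claimed_locales = {"es", "de", "it", "fr"}                   # from marketing claims

observed_countries = {"Spain", "Germany", "Italy"}           # what the UI shows
observed_locales = {"es", "de", "it"}                        # what the UI offers

# Usability framing: a time-zone picker missing France has a workaround
# (Spain shares the time zone), so the threat to value is modest.
usability_gap = claimed_countries - observed_countries

# Localizability framing: a language picker missing French cuts the
# service off from a market, so the same kind of inconsistency
# threatens far more value.
localizability_gap = claimed_locales - observed_locales

print("usability gap:", usability_gap)
print("localizability gap:", localizability_gap)
```

The set differences are the easy part; the framing–deciding which gap threatens which criterion for which person who matters–is the tester’s actual work.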

In finding bugs, testers find many other things too.  Excellent testing depends on our being able to identify and articulate what we find, how we find it, and how we contextualize it. That’s an ongoing process.  Testers find testing itself.

And there’s more, if you follow the link.

8 replies to “More of What Testers Find”

  1. I’m new to the quality assurance and testing roles, but I’ve been a “customer” of it in the past (both as a programmer and as a project manager).

    I really like the concepts that you talk about on this blog, especially in this particular post about relating the bugs/errors/whatevers back to *quality criteria*.

    Fantastic stuff. Looking forward to more.

  2. I’ve been a test analyst for over 18 years and I never get tired of testing, because it is one of the most important phases of the SDLC: it is the last phase, where we have to ensure that the newly built or newly enhanced system is mature enough to implement in production, e.g. is it fast enough to satisfy our customers’ measurement of “fast”, is it bug-free, is it user-friendly. The things to test are endless, really, to ensure the system is mature. So yes, I feel I really make a significant contribution to the quality of the system. And yes, sometimes I feel some companies treat testers like we don’t matter.

    Michael replies: I’d want to be careful about some of the things you’ve said here.

    If you or your managers think of testing as the last phase of the SDLC, then you’ll lose opportunities to discover information earlier. That information might be valuable.

    Second, I don’t really care whether the system is “mature” or not. Mature is too loaded a word, and too vague. What I care about is whether there are problems in the product that threaten its value to my client and to my client’s customers. That might include bugs or performance problems, but it also includes questions about what the client and the customer value. A product could perform perfectly quickly with no bugs, but it would also have to provide some kind of service that the customer wants.

    As testers, we don’t matter in the sense that it’s not our concept of value or “bug” that’s important, but the client’s.

    Furthermore, for me testers are like the CIA or FBI, because we are constantly chasing or looking for criminals, which in the testing world means the dreaded BUGS. Funny? or Yay? You tell me!

    I’m nervous about the police metaphor. We’re not here to enforce the law or to be the judge or jury. So I would prefer to think of ourselves as investigators, rather than cops.

