
FEW HICCUPPS

Several years ago, I wrote an article for Better Software Magazine called Testing Without a Map. The article was about identifying and applying oracles, and it listed several dimensions of consistency by which we might find or describe problems in the product. The original list came from James Bach.

Testers often say that they recognize a problem when the product doesn’t “meet expectations”. But that seems empty to me; a tautology. Testers can be a lot more credible when they can describe where their expectations come from. Perhaps surprisingly, many testers struggle with this, so let’s work through it.

Expectations about a product revolve around desirable consistencies between related things.

  • History. We expect the present version of the system to be consistent with past versions of it.
  • Image. We expect the system to be consistent with an image that the organization wants to project, with its brand, or with its reputation.
  • Comparable Products. We expect the system to be consistent with systems that are in some way comparable. This includes other products in the same product line; competitive products, services, or systems; products that are not in the same category but which process the same data; or alternative processes or algorithms.
  • Claims. We expect the system to be consistent with things important people say about it, whether in writing (references, specifications, design documents, manuals, whiteboard sketches…) or in conversation (meetings, public announcements, lunchroom conversations…).
  • Users’ Desires. We believe that the system should be consistent with ideas about what reasonable users might want. (Update, 2014-12-05: We used to call this “user expectations”, but those expectations are typically based on the other oracles listed here, or on quality criteria that are rooted in desires; so, “user desires” it is. More on that here.)
  • Product. We expect each element of the system (or product) to be consistent with comparable elements in the same system.
  • Purpose. We expect the system to be consistent with the explicit and implicit uses to which people might put it.
  • Statutes. We expect a system to be consistent with laws or regulations that are relevant to the product or its use.

I noted that, in general, we recognize a problem when we observe that the product or system is inconsistent with one or more of these principles; we expect this from the product, and when we get that instead, we have reason to suspect a problem.

(If I were writing that article today, I would change expect to desire, for reasons outlined here.)

“In general” is important. Each of these principles is heuristic. Oracle principles are, like all heuristics, fallible and context-dependent; to be applied, not followed. An inconsistency with one of the principles above doesn’t guarantee that there’s a problem; people make the determination of “problem” or “no problem” by applying a variety of oracle principles and notions of value. Our oracles can also mislead us, causing us to see a problem that isn’t there, or to miss a problem that is there.
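
To make one of these principles concrete: the History principle is the idea behind “golden master” (or approval) testing, in which we compare the current version’s output with output recorded from a past version. Here is a minimal sketch in Python; the file layout and function name are invented for illustration, and a mismatch is treated as a trigger for investigation rather than as proof of a bug.

```python
from pathlib import Path

GOLDEN_DIR = Path("golden")  # hypothetical directory of outputs saved from past versions

def check_history_consistency(name: str, current_output: str) -> bool:
    """Return True if the current output matches output saved from a past version."""
    golden_file = GOLDEN_DIR / f"{name}.txt"
    if not golden_file.exists():
        # No history yet: record the current behaviour as the baseline for future runs.
        GOLDEN_DIR.mkdir(parents=True, exist_ok=True)
        golden_file.write_text(current_output)
        return True
    # A mismatch is a reason to look closer, not proof of a bug;
    # the change might be an intended improvement.
    return golden_file.read_text() == current_output
```

The human judgment stays in the loop: if the difference turns out to be an intended change, the remedy is to update the saved output, not to file a bug.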

Since an oracle is a way of recognizing a problem, it’s a wonderful thing to be able to keep a list like this in your head, so that you’re primed to recognize problems. Part of the reason that people have found the article helpful, perhaps, is that the list is memorable: the initial letters of the principles form the word HICCUPPS: History, Image, Claims, Comparable products, User expectations (since then, changed to “user desires”), Product, Purpose, and Statutes.

With a little bit of memorization, practice, and repetition, you can rattle off the list, keep it in your head, and consult it at a moment’s notice. You can use the list to anticipate problems or to frame problems that you perceive.

Another reason to internalize the list is to be able to move quickly from a feeling of a problem to an explicit recognition and description of a problem. You can improve a vague problem report by referring to a specific oracle principle. A tester’s report is more credible when decision-makers (program managers, programmers) can understand clearly why the tester believes an observation points to a problem.
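
As a sketch of what naming the oracle in a report might look like, here is a small Python example. Nothing in it comes from the original article; the class, the field names, and the sample report are all invented for illustration.

```python
from dataclasses import dataclass

# The consistency principles listed above, used as a controlled vocabulary.
ORACLES = {
    "History", "Image", "Claims", "Comparable Products", "Users' Desires",
    "Product", "Purpose", "Statutes",
}

@dataclass
class ProblemReport:
    summary: str
    observation: str
    oracle: str              # which consistency principle seems to be violated
    inconsistent_with: str   # the specific thing the product disagrees with

    def __post_init__(self):
        if self.oracle not in ORACLES:
            raise ValueError(f"unknown oracle principle: {self.oracle!r}")

# A hypothetical report: naming the oracle forces the reporter to say
# *why* the observation matters, not merely that something looked odd.
report = ProblemReport(
    summary="Export dialog spells 'colour' while the rest of the UI uses 'color'",
    observation="File > Export shows a 'Colour depth' label",
    oracle="Product",
    inconsistent_with="Every other dialog in the same build uses US spelling",
)
```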

I’ve been delighted with the degree to which the article has been cited, and even happier when people tell me that it’s helped them. However, it’s been a long time since the article was published, and since then, James Bach and I have observed testers using other oracle principles, both to anticipate problems and to describe the problems they’ve found. To my knowledge, this is the first time since 2005 that either one of us has published a consolidated list of our oracle principles outside of our classes, conference presentations, or informal conversations. Our catalog of oracle principles now includes:

  • Statutes and Standards. We expect a system to be consistent with relevant statutes, acts, laws, regulations, or standards. Statutes, laws and regulations are mandated mostly by outside authority (though there is a meaning of “statute” that refers to acts of corporations or their founders). Standards might be mandated or voluntary, explicit or implicit, external to the development group or internal to it.

    What’s the difference between Statutes and Standards on the one hand, and Claims on the other? Claims come from inside the project; for Statutes and Standards, the mandate comes from outside the project. When a development group consciously chooses to adhere to a given standard, or when a law or regulation is cited in a requirements document, there’s a claim that would allow us to recognize a problem. We added Standards when we realized that sometimes a tester recognizes a potential problem for which no explicit claim has yet been made.

    While testing, a tester familiar with a relevant standard may notice that the product doesn’t conform to published UI conventions, to a particular RFC, or to an informal, internal coding standard that is not controlled by the project itself.

    Would any of these things constitute a problem? Each would, at the least, be an issue until those responsible for the product declare whether to follow the standard, to violate some points in it, or to reject it entirely.

    A tester familiar with the protocols of an FDA audit might recognize gaps in the evidence that the auditor desires. Similarly, a tester familiar with requirements in the Americans with Disabilities Act might recognize accessibility problems that other testers might miss. Moreover, an expert tester might use her knowledge of the standard to identify extra cost associated with misunderstanding of the standard, excessive documentation, or unnecessary conformance. (A sketch of one such check, in code, appears after this list.)

  • Explainability. We expect a system to be understandable to the degree that we can articulately explain its behaviour to ourselves and others. If, as testers, we don’t understand a system well enough to describe it, or if it exhibits behaviour that we can’t explain, then we have reason to suspect that there might be a problem of one kind or another. On the one hand, there might be a problem in the product that threatens its value. On the other hand, we might not know the product well enough to test it capably; this is, arguably, a bigger problem than the first. Our misunderstanding might waste time by prompting us to report non-problems. Worse, our misunderstandings might prevent us from recognizing a genuine problem when it’s in front of us.

    Aleksander Simic, in a private message, suggests that the explainability heuristic extends to more members of the team than testers. If a programmer can’t explain code that she must maintain (or worse, has written), or if a development team has started with something ill-defined and confusion is moving slowly through the product, then we have reason to suspect, investigate, or report a problem. I agree with Aleksander. Any kind of confusion in the product is an issue, and issues are petri dishes for bugs.

  • World. We expect the product to be consistent with things that we know about or can observe in the world. Often this kind of inconsistency leads us to recognize that the product is inconsistent with its purpose or with an expectation that we might have had, based on our models and schemas. When we’re testing, we’re not able to realize and articulate all of our expectations in advance of an observation. Sometimes we notice an inconsistency with our knowledge of the world before we apply some other principle. This heuristic can fail when our knowledge of the world is wrong; when we’re misinformed or mis-remembering. It can also fail when the product reveals something that we hadn’t previously known about the world.
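
To make the Standards idea above concrete, here is a minimal sketch, not from the article, of how one small convention (images should carry alt text, as accessibility standards such as WCAG prescribe and as ADA-related audits often reference) might be encoded as a check. Only the Python standard library is used, and the sample page content is invented.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collect <img> tags that lack a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attributes = dict(attrs)
            if not attributes.get("alt"):
                self.problems.append(attributes.get("src", "<unknown src>"))

checker = MissingAltChecker()
checker.feed('<img src="logo.png" alt="Company logo"><img src="chart.png">')
print(checker.problems)  # ['chart.png'] -- a potential issue, not proof of a bug
```

As with any oracle, the check flags candidates for human judgment; whether a missing alt attribute is a problem depends on the product’s obligations and context.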

There is one more heuristic that testers commonly apply as they’re seeking problems, especially in an unfamiliar product. Unlike the preceding ones, this one is an inconsistency heuristic:

  • Familiarity. We expect the system to be inconsistent with patterns of familiar problems. When we watch testers, we notice that they often start testing a product by seeking problems that they’ve seen before. This gives them some immediate traction; as they start to look for familiar kinds of bugs, they explore and interact with the product, and in doing so, they learn about it. Starting to test by focusing on familiar problems is quick and powerful, but it can mislead us. Problems that are significant in one product (for example, polish in the look of the user interface in a commercial product) may be less significant in another context (say, an application developed for a company’s internal users). A product developed in one context (for example, one in which programmers perform lots of unit testing) might have avoided problems familiar to us from other contexts (for example, one in which programmers are less diligent).

    Focusing on familiar problems might divert our attention away from other consistency principles that are more relevant to the task at hand. Perhaps most importantly, a premature search for bugs might distract us from a crucial task in the early stages of testing: a search for benefits and features that will help us to develop better ideas about value, risk, and coverage, and will inform deeper and more thoughtful testing. Note that any pattern of familiar problems must eventually reduce to one of the consistency heuristics; if it was a problem before, it was because the system was inconsistent with some oracle principle.
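
As an illustration of the traction that familiar problems can provide, here is a minimal sketch, in Python with invented names, of a tiny catalogue of familiar trouble-makers used to probe a function on a first pass.

```python
# A small catalogue of inputs that have exposed problems in many products before.
FAMILIAR_INPUTS = [
    "",                      # empty input
    " " * 1000,              # very long, whitespace-only input
    "0", "-1",               # boundary-ish numerics as text
    "Ω≈ç√∫",                 # non-ASCII text
    "'; DROP TABLE users;",  # injection-shaped text
]

def first_pass(function_under_test):
    """Run the function over familiar trouble-makers; report what blows up."""
    for value in FAMILIAR_INPUTS:
        try:
            function_under_test(value)
        except Exception as exc:  # deliberately broad: we want to see everything
            print(f"{value!r} -> {type(exc).__name__}: {exc}")

first_pass(int)  # e.g., probing a hypothetical parser with familiar inputs
```

The point isn’t the specific list; it’s that familiar patterns give you something to do immediately while you learn the product, as long as the catalogue doesn’t crowd out the other oracles above.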

Standards was the first of the new heuristics that we noticed; then Familiar problems. The latter threatened our mnemonic! For a while, I folded Standards in with Statutes, suggesting that people memorize HICCUPPS(F), with that inconsistent F coming at the end. But since we’ve added Explainability and World, we can now put F at the beginning, emphasizing the reality that testers often start looking for problems by looking for familiar problems. So, the new mnemonic: (F)EW HICCUPPS. When we’re testing, actively seeking problems in a product, it’s because we desire… FEW HICCUPPS.

This isn’t an exhaustive list. Even if we were silly enough to think that we had an exhaustive list of consistency principles, we wouldn’t be able to prove it exhaustive. For that reason, we encourage testers to develop their own models of testing, including the models of consistency that inform our oracles.

This article was first published 2012-07-23. I made a few minor edits on 2016-12-18, and a few more on 2017-01-26.

60 replies to “FEW HICCUPPS”

  1. The solution to the “familiarity” issue is to hire “fresh” testers. Testers that haven’t tested a similar system before.

    Michael replies: You seem to be suggesting that seeking patterns of familiar problems is a Bad Thing. That isn’t necessarily so; in fact, the “familiar problems” heuristic can be very powerful. The Bad Thing comes when the familiar problems heuristic turns from a method into a bias that overwhelms other approaches.

    I remember that for a certain computer game (can’t remember the name), a team of testers consisted of office managers who only played solitaire. (They also addressed the usability issue.)

    And how did that work out?

    It’s good to have some people test with limited preconceptions (consider the “Fresh Eyes Find Failure” lesson from Lessons Learned in Software Testing). It’s also pragmatic to spend some of your testing time seeking problems you’ve seen before.

  2. Thanks MB.

    Wondering, do you have a real world example of the use of the World Heuristic? I’ve been struggling with this one since JB mentioned it to us in our RST class. When thinking about it I always come back to Comparable Products first.

    I recall the very brief example being a door. If it doesn’t open we could recognise a potential problem due to our ‘world’ knowledge of doors. But once again I find myself at Comparable Products first.

    Cheers,
    DG

  3. Thanks for an updated list, I used HICCUPPS last weekend at the RTI. Will this updated list make it into the latest RST slides?

    (BTW your link to your article Testing Without a Map is broken.)

  4. […] HTSM: Heuristic Test Strategy Model (James Bach/Michael Bolton) and other heuristics like risk and consistency to inform your […]

