Why Would a User Do THAT?

If you’ve been in testing for long enough, you’ll eventually report or demonstrate a problem, and you’ll hear this:

“No user would ever do that.”

Translated into English, that means “No user that I’ve thought of, and that I like, would do that on purpose, or in a way that I’ve imagined.” So here are a few ideas that might help to spur imagination.

  • The user made a simple mistake, based on his erroneous understanding of how the program was supposed to work.
  • The user had a simple slip of the fingers or the mind—inadvertently pasting a letter from his mother into the “Withdrawal Amount” field.
  • The user was distracted by something, and happened to omit an important step from a normal process.
  • The user was curious, and was trying to learn about the system.
  • The user was a hacker, and wanted to find specific vulnerabilities in the system.
  • The user was confused by the poor affordances in the product, and at that point was willing to try anything to get his task accomplished.
  • The user was poorly trained in how to use the product.
  • The user didn’t do that. The product did that, such that the user appeared to do that.
  • Users actually do that all the time, but the designer didn’t realize it, so the product’s design is inconsistent with the way users actually work.
  • The product used to do it that way, but to the user’s surprise now does it this way.
  • The user was looking specifically for vulnerabilities in the product as a part of an evaluation of competing products.
  • The product did something that the user perceived as unusual, and the user is now exploring to get to the bottom of it.
  • The user did that because some other vulnerability—say, a botched installation of the product—led him there.
  • The user was in another country, where they use commas instead of periods, dashes instead of slashes, kilometres instead of miles… Or where dates aren’t rendered the way we render them here.
  • The user was testing the product.
  • The user didn’t realize this product doesn’t work the way that product does, even though the products have important and relevant similarities.
  • The user did that, prompted by an error in the documentation (which in turn was prompted by an error in a designer’s description of her intentions).
  • To the designer’s surprise, the user didn’t enter the data via the keyboard, but used the clipboard or a programming interface to enter a ton of data all at once.
  • The user was working for another company, and was trying to find problems in an active attempt to embarrass the programmer.
  • The user observed that this sequence of actions works in some other part of the product, and figured that the same sequence of actions would be appropriate here too.
  • The product took a long time to respond, the user got impatient, and started doing other stuff before the product responded to his earlier request.

And I’m not even really getting started. I’m sure you can supply lots more examples.

Do you see? The space of things that people can do intentionally or unintentionally, innocently or malevolently, capably or erroneously, is huge. This is why it’s important to test products not only for repeatability (which, for computer software, is relatively easy to demonstrate) but also for adaptability. In order to do this, we must do much more than show that a program can produce an expected, predicted result. We must also expose the product to reasonably foreseeable misuse, to stress, to the unexpected, and to the unpredicted.
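One concrete way to expose a product to this kind of foreseeable misuse is to feed it the inputs “no user would ever” enter: the locale-formatted number from the list above, the letter pasted into the “Withdrawal Amount” field, a negative value. The sketch below is mine, not from the post; `parse_amount` and its separator heuristic are hypothetical illustrations of defensive parsing, not a recommended production implementation.

```python
from decimal import Decimal, InvalidOperation

def parse_amount(raw: str) -> Decimal:
    """Parse a withdrawal amount defensively.

    Hypothetical helper for illustration: it normalises the two common
    separator conventions explicitly, and rejects anything else rather
    than guessing.
    """
    text = raw.strip()
    # "1.234,56" (German-style) uses '.' for grouping and ',' for the
    # decimal mark; "1,234.56" (US-style) is the reverse. Treat the
    # rightmost separator as the decimal mark.
    if "," in text and "." in text:
        if text.rfind(",") > text.rfind("."):
            text = text.replace(".", "").replace(",", ".")
        else:
            text = text.replace(",", "")
    elif "," in text:
        text = text.replace(",", ".")
    try:
        value = Decimal(text)
    except InvalidOperation:
        raise ValueError(f"not a recognisable amount: {raw!r}")
    if value <= 0:
        raise ValueError("withdrawal amount must be positive")
    return value

# A tiny table of "no user would ever do that" inputs:
for raw in ["100", "1.234,56", "1,234.56", "-50",
            "Dear Mom, thanks for the cheque..."]:
    try:
        print(raw, "->", parse_amount(raw))
    except ValueError as err:
        print(raw, "-> rejected:", err)
```

Even a small table like this covers several items on the list at once: the user in another country, the user who pasted the wrong clipboard contents, and the user who made a simple slip.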

15 replies to “Why Would a User Do THAT?”

  1. More importantly, we need to show to companies *why* they need to care about these different users.

    Michael replies: Yes, I agree—although I’d try to be precise and say why they might choose to care about these different users. It’s always the prerogative of an organization or a product owner to make choices about whom they intend to satisfy, and whom they don’t.

  2. Precisely. Too often, these kinds of issues aren’t chosen for fixing, and design suggestions aren’t considered. And most often those choices are based on limited information about the product and its users.

  3. Lol. Yeah, the knee-jerk reaction of devs

    Michael replies: Well… it might be worth thinking empathetically or compassionately. As James Bach suggests, another translation of “no user would ever do that” might be “I’d rather you not call this a bug until we’ve had a chance to discuss it”, or “Please give me a moment to get over my surprise and disappointment that we’ve found a problem.”

    I usually just slightly cock my head to the right, raise one (Spockish) eyebrow, and let the question hang in the room for about 20 seconds. The dev then bows his head, turns around, and fixes the issue, mumbling something like “*grumble*…of course the stupid user will certainly do… *grumble*”.

    Ah. Consistent with my second suggestion just above.

    I think the answer is usually self-explanatory, but I like your list. It’s something that you can use as a reminder-checklist when you get to “emulating a user”.

    Yes; that was a big motivation for the post, arguably the most important: thinking about how something might be interpreted as a problem can lead to ideas about how we might anticipate that problem, and test for it.

    Thanks for the comment, Oliver.

  4. Especially when testing new products or features. There’s nothing worse than assuming a user won’t use something a certain way, when nobody has even used it before. Too often, design is driven by the designer’s own personal “golden path”.

    I worked for a startup that created a program to measure structures from aerial images, something that had never been done at the time. The designers spent years creating it; then, as soon as it went on the operation floor, it started doing really weird things. When we mentioned them to the developers, their response was “well, stop doing that; we didn’t intend for you to do that.”

  5. Tom, I refer to that as the Henny Youngman Bug Resolution (in reference to the old Henny Youngman joke: “Doctor, it hurts when I do this.” “Well, then don’t do that!”)

  6. This one is GREAT! Reminds me once again to validate and validate again what users actually do. Not what we THINK they will do. Not what the designer INTENDED them to do… but what do the users ACTUALLY do? Oh and by the way, if you have more than one user, you probably want to observe more than one user!

  7. Thanks, now I finally know how to respond effectively when I hear someone say ‘no user would ever do that’ or ‘that won’t happen’ or ‘that would be really stupid’ or any other variant on the theme…

    Next (related) question: how to react when I hear ‘yeah, but it’s only a few people of the business that use this interface, so we can just tell them that they cannot do it like that; it needs to be part of their procedure’?

    Michael replies: How to react when you hear a statement? Your first reaction will probably be spontaneous and emotional. If that’s the case, you don’t need me or anyone else to tell you how to react; you’ll react.

    After you’ve reacted (or not), it’s important to ask what your feelings (or your lack of them) might be pointing to. For example: if you have a strong reaction of impatience and frustration with the people you’re speaking to, is that because more than just a few people will suffer loss or harm or annoyance? Is it because only a few people will be harmed, but the harm for those people will be severe? Is your reaction confusion, such that you can’t understand why the managers or programmers would think what they’re thinking? If so, have you articulated your perception of the risks clearly and thoroughly? Or might it be the case that for an internally facing program (as Joel Spolsky points out) there’s no percentage in gold-plating the user interface, or in fixing a problem that looks bad but isn’t? Is your client’s reaction reasonable?

    Or is your reaction disgust, because you have a perfection rule that says that every problem, no matter how trivial, must be fixed? Are your feelings thereby misleading you?

    Or is your reaction fear, since you can see plausible, serious risks that could be manifested even when a few thoughtful, careful, and well-trained people might forget about or err in their procedures?

    The key is to tell a plausible, credible story about risk to people, starting with yourself. If that risk remains serious in your mind, sharpen that story and tell it to your clients. But also listen, since they may quite reasonably decline to fix a problem; or they might give you hints that they’re not aware of the risks that you have in mind.

  8. […] Link to quote here: […]

