
You’ve Got Issues

What’s our job as testers? Reporting bugs, right?

When I first started reading about Session-Based Test Management, I was intrigued by the session sheet. At the top, there’s a bunch of metadata about the session—the charter, the coverage areas, who did the testing, when they started, how long it took, and how much time was spent on testing versus interruptions to testing. Then there’s the meat of the session sheet, a more-or-less free-form section of test notes, which include activities, observations, questions, musings, ideas for new coverage, newly recognized risks, and so forth. Following that, there’s a list of bugs. The very last section, at the bottom of the sheet, sets out issues.
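
To make that structure concrete, here is a minimal sketch of what a session sheet might look like, built from the sections described above. The labels and entries are illustrative only, not the canonical SBTM template; real sheets vary from team to team.

    CHARTER
      Explore the report-export feature for data-integrity problems.
    AREAS
      Reporting, Export
    TESTER
      (tester's name)
    START
      (date and time)
    DURATION
      90 minutes; roughly 70% on testing, 30% on interruptions
    TEST NOTES
      Activities, observations, questions, musings, ideas for new
      coverage, newly recognized risks...
    BUGS
      Export crashes when the report contains no records.
    ISSUES
      The test environment has no load balancer; production does.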

“What’s an issue?” I asked James. “That’s all the stuff that’s worth reporting that isn’t a bug,” he replied. Hmmm. “For example,” he went on, “if you’re not sure that something is a bug, and you don’t want to commit it to the bug-tracking system as a bug, you can report it as an issue. Say you need more information to understand something better; that’s an issue. Or you realize that while you’ve been testing on one operating system, there might be other supported operating systems that you should be testing on. That’s an issue, too.”

That was good enough as far as it went, but I still didn’t quite get the idea in a comprehensive way. The information in the “test notes” section of the session sheet is worth reporting too. What distinguishes issues from all that other stuff, such that “Issues” has its own section, in parallel with “Bugs”?

Parallelism saves the day. In the Rapid Software Testing class, we teach that a bug is anything that threatens the value of the product. (Less formally, we also say that a bug is something that bugs somebody… who matters.) At one point, a definition came to me: if a bug is anything that threatens the value of the product, an issue is anything that threatens the value of our testing. In our usual way, I transpected on this with James, and we now say that an issue is anything that threatens the value of the project, and in particular the test effort. Less formally and more focused on testing, an issue is anything that slows testing down or makes it harder. If testing is about making invisible problems visible, then an issue is anything that gives problems more time or more opportunities to hide.

When we believe that we see a bug, it’s because there’s an oracle at work. Oracles—those principles or mechanisms by which we recognize a problem—are heuristic. A heuristic helps us to solve a problem or make a decision quickly and inexpensively, but heuristics aren’t guaranteed to work. As such, oracles are fallible. Sometimes it’s pretty clear to us that we’re seeing a bug in a product: the program seems to crash in the middle of doing something. Yet even that could be wrong; maybe something else running on the same machine crashed, and took our program down with it. A little investigation shows that the product crashes in the same place twice more. At that point, we should have no compunction about reporting what we’ve seen as a bug in the product.

An issue may be clear, or it may be something more general and less specific. A few examples of issues:

  • As you’re testing, you see behaviour in the new version of the product that’s inconsistent with the old version. The Consistency with History oracle tells you that you might be seeing a problem here, yet one could make the case that either behaviour is reasonable. The specification that you’re working from is silent or ambiguous on the subject. So maybe you’ve got a bug, but for sure you have an issue.
  • While reviewing the architecture for the system, you realize that there’s a load balancer in the production environment, but not in the test environment. You’ve never heard anyone talk about that, and you’re not aware of any plans to set one up. Maybe it’s time to identify that as an issue.
  • You sit with a programmer for a few minutes while she sketches out the structure of a particular module to help identify test ideas. You copy the diagram, and take notes. At the end of the meeting, you ask her to look the diagram over, and she agrees that that’s exactly what she meant. You reflect on it for a while, and add some more test ideas. You take the diagram to another programmer, one who works for her, and he points at part of the diagram and says, “Wait a second—that’s not a persistent link; that’s stateless.” You’ve found disagreement between two people making a claim. Since the code hasn’t been built for that feature yet, you can’t log it as a bug in the product, but you can identify it as an issue.
  • As you’re testing the application, a message dialog appears. There’s no text in the dialog; just a red X. You dismiss the dialog, and everything seems fine. It seems not to happen again that day. The next day, it happens once more, in a different place. Try as you might, you can’t replicate it. Maybe you can report it as an intermittent bug, but you can definitely record it as an issue.
  • A steady pattern of broken builds means that you wait from 10:00am until the problem is fixed—typically at least an hour, and often three or four hours. Before you’re asked, “Why is testing taking so long?” or “Why didn’t you find that bug?” report an issue.
  • You’ve been testing a new feature, and there are lots of bugs. 80% of your session time is being spent on investigating the bugs and logging them. This has a big impact on your test coverage; you only got through a small subset of the test ideas that were suggested by the session’s charter. The bugs that you’ve logged are important and you can expect to be thanked for that, but you’re concerned that managers might not recognize the impact they’ve had on test coverage. Raise an issue.
  • You’re a tester in an outsourced test lab in India. Your manager, under a good deal of pressure himself, instructs you to run through the list of 200 test cases that has been provided to him by the clueless North American telecom company, and to get everything done within three days. With practically every test you perform, you see risk. All the tests pass, if you follow them to the letter, but the least little experimentation shows that the application shows frightening instability if you deviate from the test steps. Still your boss insists that your mission is to finish the test cases. He’s made it clear that, for the next three days, he doesn’t want to hear anything from you except the number of tests that you’ve run per day. Do your best to finish them on schedule, but sneak a moment here and there to identify risks (consider a Moleskine notebook or an ASCII text file). When you’re done, hand him your list of bugs—and in email, send him your list of issues.
  • You’re a tester in a small development shop that provides customizable software for big banks. You have concerns about security, and you quickly read up on the subject. What you read is enough to convince you that you’re not going to get up to speed soon enough to test effectively for security problems. That’s an issue.
  • As a new tester in a company, you’ve noticed that the team is organized such that small groups of people tend to work in their own little silos. You can point to a list of a dozen high-severity bugs that appear to have been the result of this lack of communication. You can see the cost of these twelve bugs and the risk that there are more lurking. You recognize that you’re not responsible for managing the project, yet it might be a good idea to raise the issue to those who do.

Those are just a few examples. I’m sure you can come up with many, many more without breaking a sweat.

Teams might handle issues in different ways. You might like to collect an issues list, and put it on a Big Visible Chart somewhere. Someone might become responsible for collecting and managing the issues submitted on index cards. Some issues might end up as a separate category in the bug tracking system (but watch out for that; out of sight, out of mind). Still others might get put onto the project’s risk list.

Some issues might get handled by management action. Some issues might get addressed by a straightforward conversation just after tomorrow morning’s daily standup. Someone might take personal responsibility for sorting out the issue; other issues might require input and effort from several people. And, alas, some issues might linger and fester.

When issues linger, it’s important not to let them linger without them being noticed. After all, an issue may have a terrible bug hiding behind it, or it may slow you down just enough to prevent you from finding a problem as soon as you can. Issues don’t merely present risk; they have a nasty habit of amplifying risks that are already there.

So, as testers, it’s our responsibility to report bugs. Even more importantly, it’s our responsibility to raise awareness of risk, by reporting those things that delay or interfere with our capacity to find bugs as quickly as possible: issues.

17 replies to “You’ve Got Issues”

  1. […] This post was mentioned on Twitter by Michael Bolton and testingfeeds, Albert Gareev. Albert Gareev said: RT @michaelbolton: Published: You've Got Issues http://bit.ly/gF2rKo What if a bug isn't quite a bug? #testing #softwaretesting #qa #agile […]

  2. Michael,

    Great blog and I like this distinction:
    A bug is anything that threatens the value of the Product and an issue is anything that threatens the value of the project.

    Although you may argue that most things (anything?) that threaten the project also threaten the value of the product. Issues may lead to decisions, which mean a deviation from the original plan (which in itself may lead to new issues), which may mean a feature gets sacrificed or given less attention than planned.

    But I want to go the other way:
    Often bugs also threaten the project. For example when it is blocking your test progress. But also since the quality of the product is one of the outcomes of the project. Anything that threatens the value of the product also threatens the value (or success) of the project and vice versa.

    Confusing? Perhaps, but I still like the distinction between bugs and issues. (I think) I understand what you mean although I cannot describe it as well as you. I guess it’s about the direct or most impacted threat.

    /Arjan

    Michael replies: Thank you for the comments, Arjan.

    My main point in writing the post is not to provide some new way of dividing problems into two groups. Instead, my most important point is to expand our notions of what kinds of problems a tester might notice, and how the tester might report them. Making distinctions between bugs and issues is secondary; my primary goal here is to get ourselves focused on spotting issues as well as bugs.

    I don’t think it’s actually that confusing, because unlike computers, people don’t have to think in binary terms. Something can be both a bug and an issue at the same time. Something can threaten the value of the product and the value of the project. If we have trouble deciding whether some problem is a bug or an issue, it’s probably both. Don’t waste time; note its buggy nature in whatever form of problem-tracking system you have (mental list, paper list, stack of index cards, ASCII text file, morning meeting, in-house database, enormous commercial bug tracking tool…), and note the problem’s issue-ish nature either there or in whatever other medium you have for dealing with issues.

    But either way, the point is not the classification or the tracking. The point is in raising awareness of the threats to value. Thus, another heuristic: don’t let either this classification or the problem-tracking system(s) bury the problem or lower awareness of it. Our over-arching mission is to provide information and raise awareness.

  3. I’ve tried to get my teams to record “events” in the past (usually those one time only bugs that drive you nuts) and the result is that few get recorded

    Michael replies: Issue.

    because we didn’t have a good communication method with development.

    Issue.

    The activity was largely viewed as Non Value Add

    Issue.

    because our company’s QA Dev communication was less than ideal

    Issue.

    and most people were up to their necks in the standard bug & progress tracking

    Issue.

    that there was precious little time for more items to track.

    Issue.

    Issues tracking is great if you can get it just under bug reporting in your organizational priorities.

    Issue tracking (as I said in my reply to Arjan) isn’t really the issue (sorry). Issue awareness is.

    Above I’ve identified a bunch of things which (it seems to me) threaten the value of testing work. To what degree was your project team aware of them? To what degree was management aware of them? To what degree had these problems become part of The Way We Do Things Around Here, “normalization of deviance”?

    And to what degree were these problems allowing bugs to hide? Many issues are like blocking bugs; they’re problems—often seemingly mundane problems—that slow us down or outright prevent us from finding bugs that might be much more devastating. Thus the goal of identifying issues is not to bury them in a tracking system, but to put the floodlights on them so that they can be seen for what they are: threats to the effectiveness of the test organization, and therefore threats to the value of the product.

    Now, if management is fully aware of the issues and chooses not to do anything about them, that’s management’s prerogative. After that, even a jotted-down list of issues previously raised provides a ready answer to the question, “Why didn’t you find that bug?”

  4. Michael,

    Nice clarification. “An issue is anything that threatens the value of the project.”

    Some little points.

    It reminds me of the feedback model of the SW development system from Jerry Weinberg’s Quality Software Management series. The Steering management pattern requires “issues” feedback in order to observe, compare, modify the controller behavior.

    For me, it is a HUGE insight into the organization’s cultural pattern to observe how “issues” are treated. When that behavior pattern changes, it is important to determine the why and significance of the change.

    Michael replies: I disagree about one thing you said, Griffin. Those aren’t little points; I’d argue that they’re enormously important. I agree, and thank you for raising them.

    A bug report reveals information. A reported issue reveals information too; often an issue provides information about how certain bugs got there in the first place. Other issues may go another level higher, revealing information about how issues came to be. The Five Whys exercise is a form of that kind of inquiry. In addition to the books you mentioned, Jerry’s Perfect Software and Other Illusions About Testing contains lots of instances of this kind of meta-observation and meta-information.

    Bug Advocacy is the skill of reporting and contextualizing a bug such that the client can most clearly and easily see the bug’s significance. The goal is to provide the client with the very best technical information we can, so that they’re able to make the best-informed business decisions possible. The reporting of bugs requires not only technical know-how, but also skill in risk assessment, rhetoric, politics, social dynamics, and so forth. Issues are often even more fraught, more political, more emotional—and they’re often not technical matters at all. Thus I would argue that it behooves us testers to learn Issue Advocacy as well.

  5. Hi Michael,

    This article helps to understand the significance of creating issue awareness. Thanks for this useful post.

    Issues that are not identified with a bug/ticket number are usually NOT taken seriously.

    Michael replies: That’s an issue right there (see Griffin’s comment above). I mean, pause for a moment and think: issues that threaten testing’s ability to deliver valuable and timely information are not being addressed unless they’re expressed in terms of bug reports. Or to put it another way, we’re blinding the lookouts. Isn’t it a potentially devastating risk to the project?

    Unless the project shows signs of going down, people don’t give a hard look into these.

    Issue.

    If I send an email regarding these issues, it simply sits in the inbox of the person concerned and no action is taken.

    Issue.

    This is probably because people who resolve issues that are not bugs do not get any credit.

    Issue.

    Creating awareness about the significance of these issues backed with data and powerful stats can certainly help a lot! How we present these issues is really important.

    I agree. That’s why one of the primary testing artifacts for Rapid Testers is an up-to-the-minute risk list.

    Regards,
    Aruna
    http://www.technologyandleadership.com
    “The intersection of Technology and Leadership”

    Thanks for writing, Aruna.

  6. Michael,
    Thank you for the great explanation of the differences, and for pointing out that testers really need to keep their eyes on the issues as well. I have actually tried to explain it to other people before, but you actually nailed it. And I totally agree that issues are even more important for testers to catch.

    Michael replies: Thanks, Sigge.

    Last week I actually found a bug that really revealed a big issue. The simplest of configuration bugs revealed an internal communications issue on the third-party vendor’s side. The big challenge I find in that and similar situations, though, is actually making the case that this really is a potential issue and not just a minor configuration bug. In that sense, bugs and issues that relate to the same thing are in some ways both harder and easier to highlight.

    Yes; issues do present interesting dilemmas and dynamics. I hope to have more to say about this in the days ahead.

  7. Bringing attention to possible issues with issue reporting. A few examples below.

    Co-workers with Chicken Little syndrome. (Though often they just need some time and support to grow more confident.)

    Michael replies: Right, that’s an issue. And like all other issues, it represents a threat to the value of testing, and the timely, successful completion of the project. I like your proposed way of addressing it, although I’d add that coaching on how to report issues effectively and dispassionately might be another approach.

    Process Quality Auditors. “Step left, step right – Big Issue! Process is everything! Your job is to follow policies!”
    They can be hard to deal with. Sometimes I had to do double work: one piece to solve the actual problem, a second piece to comply with the procedure.

    Yes, another issue. A potential problem with process quality auditors is goal displacement. It’s manifestly not the tester’s job to follow policies; it’s the tester’s job—everyone’s job—to do valuable work for the organization. Policies should help with that, not interfere with it. Policy devoid of motivation and effectiveness is mere bureaucracy, a form of waste. That kind of policy is an issue. Dealing with people who promote that kind of policy is an issue. Both of those issues represent a threat to the timely, successful completion of the project.

    Self-appointed critics. “Sketching on the piece of paper is unprofessional”, “they should”, “you should’ve”..

    Ditto.

    I think, treating issues as “artefacts” or “deliverables” encourages behavior as in the examples described above.

    Issue.

    What is a deliverable? It seems to me that a deliverable is something that we deliver. James Bach, Jon Bach, and I have been working on a list of exploratory testing dynamics. The first section is called “Work Products”, and one could arguably call them “deliverables”. Yet notice that we’ve listed them as content, idea-stuff, rather than containers. When you mistake the container for the content, all kinds of bad things can start happening in a hurry. So one antidote to the issue you raise here is to keep everyone aware that the physical (printed sheets of paper) or near-physical (reports on a screen) artefacts are not significant in themselves. It’s the ideas behind them, the content, that’s important.

    Thanks,
    Albert

    Thank you.

  8. Great explanation.

    Michael replies: Thank you.

    I think even programmers should read this article, as it’s equally important for them to understand the tester’s concern whenever he calls out something that can threaten the project or product without actually recording it in any bug management system.

    I’d be happy if programmers read this, yes. I’d be much happier if managers did, since it’s typically managers who are in the best position to provide the authority and the resources to resolve issues. Ideally, a sufficiently empowered team would be able to handle many of their own issues internally, but some issues (issues that require spending money, removing process obstacles, addressing customer contract and obligations, dealing with schedule threats) might need management-level clout to overcome.

    As you have already said, and as I have also experienced, people pay attention only to those bugs which are logged in the tool (HP QA, in our case), since higher management and anybody from the client side can get to read them, and it becomes mandatory to answer for them. But at the same time, if testers raise their concerns just over email or in verbal communication, those concerns are never given equal importance.

    Some issues simply take one’s breath away, like these: if it’s not in the bug tracking system, it’s not given the status of a problem worth solving; or if clients or upper management aren’t aware of it, it’s not given the status of a problem worth solving.

    For those who wish to see a whole list of revealing issues (pitch-perfect examples), have a look at Jerry Weinberg’s Testing Without Testing, an excerpt from his remarkable book Perfect Software and Other Illusions About Testing. Note also Jerry’s key points: that for every issue, there is a number greater than one representing potential explanations for the issue; and that “people will rationalize away the evidence that’s so apparent to you”. To which I would add that, by the time you make your observation, they’ll have had some practice at the rationalizations. “So,” as Jerry says, “we need to know how people immunize themselves against information, and what we can do about it.”

    Thanks for the nice post.

    You’re welcome.

    Regards,
    Lalitkumar Bhamare

    By the way, Lalit has produced a magazine called “Tea-Time For Testers”. You can find the first issue here.

  9. […] This post was mentioned on Twitter by Michael Bolton, ttimewidtesters. ttimewidtesters said: RT @michaelbolton: Want examples of issues as in http://bit.ly/hVrr5v? @jerryweinberg provides a cornucopia in "#Testing Without Testing … […]

  10. Thanks for the blog post.

    Michael replies: Thanks for saying thanks. And you’re welcome.

    I thought the examples were great. I have recently implemented some limited session based testing in my testing group and one question I was immediately asked by one tester was what the difference between Bugs and Issues was. My answer was something along the lines of “they’re problems, but not bugs”. Your answer is much more comprehensive, and one that I will forward on to my team members.

    Initially when I reviewed session based testing as described on http://www.satisfice.com/sbtm/ I was skeptical about the need for the issues section. However, after my limited experience with session based testing and having read this blog post, I now think that the Issues section is actually the most important area in the session report. This is because any action that must be taken as a result of a test session will be found in the Issues section. Anything in the bugs section should already be logged and the test notes are just there to refer back to as necessary. Whereas issues require some action by somebody, whether it be further investigation, discussion with another team member or simply a good long think.

    I will certainly be pushing my team to put a greater emphasis on reporting issues in future session testing.

    I’m glad that I seem to have been helpful here. Thanks again, and let us know how it works out.

  11. I’m enjoying this conversation on “issues”.

    James Reason’s book “Managing the Risks of Organizational Accidents” is full of examples where ISSUES contributed to catastrophic failures.

    Healthy organizations will robustly, thoroughly, and self-critically investigate the root causes of near misses, incidents, accidents, and catastrophic failures. Unsafe acts, local workplace factors, and organizational factors are often the key sources of these root causes (a.k.a. ISSUES). Appropriately reporting issues is an important part of a safety system.

    These organizations:

    Therac-25 http://courses.cs.vt.edu/cs3604/lib/Therac_25/Therac_1.html
    Chernobyl http://en.wikipedia.org/wiki/Chernobyl_disaster#Experiment_and_explosion
    Barings Bank Collapse http://en.wikipedia.org/wiki/Barings_Bank
    1994 Fairchild Air Force Base B-52 crash http://en.wikipedia.org/wiki/1994_Fairchild_Air_Force_Base_B-52_crash
    Air France Flight 296 crash http://en.wikipedia.org/wiki/Air_France_Flight_296

    … had ISSUES prior to their public failures. A reluctance to report and investigate ISSUES is a black mark against management.

    For yourself, speculate:
    a. Given your organization’s issues, why did your company catastrophically FAIL a year from now?
    b. What did you hear/see/smell right now that would have warned you of that future failure?
    c. How do you feel about that information in (b)?
    d. Is there anything about the present situation (b) and (c) that anyone would like to change?
    e. Start invoking organizational root cause analysis, and organizational change processes for (d).

    Michael replies: Splendid amplification, Griffin.

    Your capitalization of ISSUES made me realize that it’s an acronym: “I See Something U’ll Eventually See.” 🙂

  12. Great post, Michael. Long time lurker, first time commenter.

    Michael replies: Welcome!

    I realize that the point of this discussion is not to focus on how to track issues, but I think that’s the next logical step, and one that will have a different outcome for each organization. For a small shop like mine, I’m thinking of sticking a whiteboard on the *outside* of my cubicle wall to track the issues that come up here. Foot traffic may be just enough exposure for most issues in my case.

    I like the low-tech approaches, too. Conversation, post-it notes, 5×7 index cards, whiteboards… In my home office, I have a flipchart pad hanging off the back of my door. When I have a lot of stuff to deal with, writing a list on the flipchart in Great Big Letters with a Big Stinky Marker emphasizes things much more firmly; a list on the desk can get covered up. Plus, striking out a completed task is very satisfying. I agree that one key to dealing with issues is to make issues visible.

    Be careful, though. Somewhere out there is a manager, freshly-minted MBA in hand, developing a new formalized process for tracking issues. I can see him now, “We should implement a database to keep track of these things!” Not to say that such a system couldn’t be appropriate in certain situations, but it seems like a perfect way to ignore problems: “How come this wasn’t charted in my issue tracking system?!”

    Yes. Tracking systems of any kind are media. As Marshall McLuhan pointed out, media extend, enhance, accelerate, intensify, and enable human capabilities. Yet media are agnostic as to what they extend. In addition to extending our ability to track issues, media can simultaneously extend our ability to ignore issues, or reverse into the opposite of tracking actual issues (for a breathtaking example, see item two in Jerry Weinberg’s lovely blog post Testing Without Testing).

  13. This is some excellent food for thought. I’m trying to come up with a way to handle issues, as well as a way to handle bugs that my team and the programmers agree are out-of-scope for a particular release. Haven’t come up with a solid one yet, but this has given me some rough ideas.

  14. I must say you have an excellent blog. Although I think I handle issues as easily as defects and treat them accordingly, it is great to see how it transforms the way people think about the feedback they get.

