
More Responses to the Agile Testing Questions

A couple of months ago, a correspondent on the Agile Testing list asked a bunch of questions, some of which I answered in an earlier blog post. Here are the answers to the other questions.

4) How do we estimate Test Efforts for agile Testing? Can we use normal estimation models?

I’m going to differentiate here between agile testing and Rapid Testing–the kind of testing defined by James Bach, the kind of testing that I practice and teach. I’d contend that Rapid Testing is agile–in the dictionary sense, and in the sense of the principles declared by the Agile Manifesto–but James is reluctant to slap a capital-A Agile label on it. Rightly so, I think. I’m not sure if there’s broad consensus on what Agile Testing is, and if there is, I’m not going to claim to be the right person to speak about it.

In Rapid Testing, we use Session-Based Test Management (SBTM) both to account for our time and as a basis for estimating how we’re going to allocate the time we’ve got. You can read about SBTM at James Bach’s Web site, http://www.satisfice.com/sbtm. I’ve worked with it, and I like it; and there are increasing numbers of people who report good experiences–Bliss’ presentation at STAR East 2006 being a recent highlight for me.
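
As a rough illustration (this is just a sketch, not a formal part of SBTM, and every number in it is invented), session-based accounting lends itself to back-of-the-envelope arithmetic: count the charters you think each area needs, figure out how many sessions a tester can genuinely complete in a day, and divide.

    # A hypothetical back-of-the-envelope sketch; the figures are invented.
    def estimate_days(charters_per_area, sessions_per_tester_per_day, testers):
        """Calendar days for one pass through the chartered areas."""
        total_sessions = sum(charters_per_area.values())
        sessions_per_day = sessions_per_tester_per_day * testers
        return total_sessions / sessions_per_day

    # Invented figures, purely for illustration.
    charters = {"install": 4, "reporting": 10, "security": 6}
    print(estimate_days(charters, sessions_per_tester_per_day=3, testers=2))
    # roughly 3.3 days for one pass, before interruptions, bug investigation, and retesting

The point isn't the arithmetic; it's that sessions give you a unit you can count, re-count, and renegotiate as the project changes.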

But with respect to estimation, let’s look at what happens in many organizations.

Management has an idea for a product it wants to ship. The managers might solicit estimates from the marketing people, the developers, and the testers; there might be some haggling about the scope of the project; some resource allocation might be planned, or performed in response to more haggling.

What’s the result of the estimation effort? The result is that management will stipulate that something will ship on such-and-such a date–which, by no small coincidence, is a release date that management had in mind from the start. Then development takes more time than the developers expected, and the testers are asked to complete testing such that the product ships on the date management originally had in mind, or a little later, often with reduced scope from the original charter. Many of us have been there, right?

So Rapid Testers view things a little differently. We know that we don’t set the schedule; we don’t drive the bus, as Cem Kaner says. Our job is to provide information to management, such that management can make informed decisions about the state of the product and the project. Our commitment is to do our best to provide the project community with the fastest, highest-quality information that we can at any time. Are there particular tests that important people want us to perform? We’ll get them done as quickly as we can. Do they want to see how we’ve covered the product? SBTM is set up to provide exactly that information. Do the managers want to feel as though there are no more serious bugs in the product? We’ll focus on finding the serious bugs fast–and remind management that we can promise only a good-faith effort, not perfection.

Until there’s something to test, we’ll research the business system within which the product is expected to work; we’ll model the product, set up test environments, or work with customers and developers on framing requirements, if those things are part of our assigned mission. To the greatest extent possible, we’ll anticipate and identify risk. When we’re performing advance tasks, we’ll focus on things that are unlikely to be undermined by changes to the product as it’s being developed.

As soon as there’s anything to test, we’ll test it by operating it, observing it, and evaluating it. We’ll be prepared to report at any time. When management decides that it has enough information to ship, the product ships–and then we typically stop testing. So our estimate is that we’ll provide as much information as we can within the time that management allows for development. This has the benefit of being 100% achievable.

If a “normal” estimation model is to try to figure out, in advance, all the test cases that we’re going to produce, how much effort it will take for someone to write them down, and how much effort it will take someone (else) to run them, how many bugs we expect to find, how long it will take the developers to fix some proportion of those bugs, and how long it will take us to run the resulting regression cycles… no, we tend not to do that. We contend that any attempt to predict all that stuff will be risky, and often inaccurate–except to the extent that the organization, when confronted with some unwelcome truths, will decide to drop the predictions and do whatever testing it can in the time available. Which ends up looking a lot like our model, but with all the extra overhead associated with trying to make an ultimately unsuccessful prediction.
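
To make the objection concrete, here is a hypothetical back-of-the-envelope calculation (every figure is invented): when each quantity in the chain can plausibly be off by 30 per cent, the plausible range around the plan is already huge, and that's before anyone questions the bug-count and fix-time guesses.

    # Hypothetical sketch of how chained estimates compound their uncertainty.
    # Every figure below is invented for illustration.
    test_cases = 200           # planned test cases
    hours_to_write = 0.5       # hours to document each test case
    hours_to_run = 0.25        # hours to execute each test case, per cycle
    regression_cycles = 3

    def total_hours(scale):
        """Total effort if each estimated quantity is off by the same factor."""
        return (test_cases * scale) * (hours_to_write * scale
                                       + hours_to_run * scale * regression_cycles)

    print(total_hours(1.0))    # the "plan": 250 hours
    print(total_hours(0.7))    # every figure turns out 30% lower: about 122 hours
    print(total_hours(1.3))    # every figure turns out 30% higher: about 422 hours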

5) What is the typical % of testing effort in Software development Life cycle by using agile methodology?

I don’t know of a way to separate testing and development in a measurable and meaningful way, and even if I could, the measurements wouldn’t apply in the same ways to all organizations.

Part of the problem is associated with the question of measuring development or testing effort as a scalar. Do you measure effort as the number of weeks applied to the project? As the number of person-hours? As lines of code or test scripts or other documentation? If a developer writes a unit test, do we put that on the testing or development side of the ledger? If a tester writes an automated script to exercise the product in some way, she’s writing software; is that testing or development? When you measure test effort in different organizations, do you account for differences in skill or experience or value, or do you simply count the number of keystrokes?

Agile processes tend to advocate test-driven development (TDD). TDD tends to increase the speed at which developers get feedback about bugs, and that tends to reduce lots of coding errors. Agile processes advocate lots of participation from testers and customers (or customer representatives) throughout the project; sensible Agilistas propose that we automate tests that lend themselves well to automation. To the extent that these practices are followed skillfully, they would seem to have a high probability of being helpful. Yet there’s nothing to guarantee that the practices are being followed, or that they’re being followed with skill. Moreover, skillful people, working together, have been producing valuable software forever using a huge variety of process models. Lots of successful teams wouldn’t be able to name their model, but release worthwhile software anyway.
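
For readers who haven't seen it, here is the general shape of a TDD-style check in a minimal and entirely hypothetical example (the function and its behaviour are invented): the checks are written alongside or before the code, so a mistake announces itself within seconds of being made, rather than weeks later.

    # A minimal, hypothetical example of the kind of check TDD produces.
    # The function and test names are invented for illustration.
    import unittest

    def parse_price(text):
        """Turn a string like '$1,299.00' into a float."""
        return float(text.replace("$", "").replace(",", ""))

    class ParsePriceTest(unittest.TestCase):
        def test_strips_currency_symbol_and_commas(self):
            self.assertEqual(parse_price("$1,299.00"), 1299.00)

        def test_plain_number_passes_through(self):
            self.assertEqual(parse_price("42"), 42.0)

    if __name__ == "__main__":
        unittest.main()

Whether checks like these are written at all, written skillfully, and run often is, of course, exactly the part that no process label can guarantee.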

I like agile principles. They’re intended to be humanist and pragmatic. That’s generally a very good thing. However, no principles or processes can survive a context in which they won’t work. I offer no guarantees that agile approaches will solve your process problems, and I don’t think anyone else should either.

6) Is there any difference in testing effort for Normal testing process and Agile testing process?

Again, I don’t know how to answer the question in a way that’s helpful. There are too many dimensions that you might care about.

Rapid Testing is designed to be the fastest, most effective testing that fulfills the mission. Whether that might translate to a reduction of effort to provide the same amount of information; or to exactly the same amount of effort but higher quality of information; or getting better coverage using the same resources–I couldn’t tell you without knowing about your context.

7) If the documentation is less in Agile testing, will it not impact the quality? I hope this will overcome by having daily meetings to update the status/issues.

The authors of the Agile Manifesto value working software over comprehensive documentation, and make it explicit that they believe that, while there is value in the latter, there is more value in the former.

Jerry Weinberg defines quality as “value to some person”. There is nothing inherently valuable in documentation if the person in question is only interested in working software. Correspondingly, documentation is not intrinsically without value, as long as there’s someone who values it, and as long as that person matters. People err when they view quality as an attribute of a product, rather than a relationship between that product and some person. The question is: is the documentation a product or a tool that serves the development of some other product? If it’s a tool, might other tools fulfill the same function?

Again, speaking for Rapid Testing: we’ve observed lots of documentation that’s “write-only”–it gets written, stored, and never looked at again. This takes time, and that’s time that might be more valuably spent on test execution. So our goal is to produce all the documentation that’s required to fulfill the mission of the product, and no more. We also allocate time to any given piece of documentation in accordance with the audience and purpose that the document is intended to serve. If the documentation is a product, something that we must present as a project deliverable, then we’re more likely to spend time on its presentation. If it’s a tool, something that we use only for ourselves or inside the test team, we’re less likely to spend a lot of time on making it look good. If a sketch serves our purposes, a full-blown oil painting is probably a waste of time and effort.

Daily standups can be great if they’re focused and productive, but we’ve seen lots of daily meetings in which people go through the motions instead of exchanging useful information. Agile advocates prescribe principles and approaches for making meetings valuable, but can’t guarantee that those principles and approaches are followed. An organization that has the people and the talent to use those principles appropriately will probably do well. Every project has its own context and culture for exchanging information, including (but not limited to) documents, meetings, phone calls, hallway conversations, and email. Pragmatic members of the team will use whatever means of communication fit within their skills and their temperaments and the local culture.

I hope these answers help you out.

—Michael B.

3 replies to “More Responses to the Agile Testing Questions”

  1. The customers are going to ask for the test effort. In our organization the development team and the test teams give separate estimates. So what do we do here?

    Michael replies: I would consider this (probably more a question for those who set policy than for you, alas): when you are asked how long it’s going to take you to drive somewhere, do you give separate estimates for steering and looking where you’re going? When you call a taxi company as a customer, do you ask for a breakdown of steering time and looking-where-you’re-going time, or do you see them as activities that should be continuous, in parallel, from start to destination?

    As a tester, I’m going to start testing as soon as someone gives me something to test, and I’m going to stop testing when someone asks me to, or when the product ships. You will perhaps find some helpful suggestions by following this link.

  2. Michael, I have a different view from what you have stated above.

    Michael replies: That sort of thing has been known to happen. 🙂

    When I ask you how long you would take to drive from home to office, you may say about an hour. You would not say “I do not know,” though it may be possible that there are days when there are traffic jams, days when there is no traffic at all, days when there is a breakdown in the car, etc.
    It is possible that you do not reach your home exactly in an hour every time; it may be 1.1 hrs, 1.2 hrs, or 0.8 hrs. But we approximate it to one hour.

    So far, so good.

    This is what the customer is looking for: is your testing going to take 2 weeks or 2 months (not considering the development effort), assuming that there would be two rounds of testing? This is the assumption that we provide.

    And I am going to suggest that is not your assumption to provide.

    There are services that have a set completion point. Were you the customer and I a taxi driver, it would be perfectly legitimate for you to ask me, “How long will it take for me to get home?” and it would be perfectly legitimate for me to respond “About an hour.” If you asked for something more specific, it would be legitimate for me to answer that my answer was pretty reliably accurate to plus-or-minus fifteen minutes, based on the traffic, the time of day, weather, road conditions, and so forth. No problem there. There’s a starting point, and a finite destination. One can describe from the outset most of what it means to deliver the completed service to the customer in this extremely specific and binary way: either you’re at home, or you’re not. But testing is not that kind of service.

    Testing, real testing, is an open-ended task. It’s not an activity with a specific, predictable destination. It’s a process of exploration, discovery, investigation, and learning. This doesn’t have a set completion point, and it’s not subject to piecework. We can’t say that when we’ve completed these N test cases, we’ll have tested the product (unless we’ve decided that we want to express testing solely in terms of checking). We cannot schedule the discovery of important problems; we can’t decide in advance how much time we’re going to spend investigating them; and we cannot predict when we’ll be satisfied with our investigation. Since the programmers haven’t provided us with a list of bugs, we can’t schedule when we’ll find them, the programmers can’t schedule when they’ll fix them, and we can’t schedule when we’ll re-test them. Nor can we decide in advance how many of the problems won’t be fixed and will need additional work from the programmers.

    What we can do, of course, is provide this promise: “We will continue to investigate the product, Dear Client, until you ask us to stop. And at every step along the way, we will report what we’ve discovered so far, what we think you’d like us to look into that we have not yet looked into, and risks that we can anticipate. And at every step of the way, we’re willing—nay, obliged—to make sure that you’re satisfied with what we’ve looked into and what we plan to look into, and if there’s any problem with that, then we’ll work with you to make sure that we’re providing you with the kind of information you seek.”

    When we provide an estimate, the customer gets an idea as to how long it would take.

    How long it would take to do what, specifically?

    If the test team does not provide the testing estimate and this responsibility is given to the development team, the decision is taken away from us, and I would not like that as I would be forced to test in the timelines that the development team provides.

    The decision as to how long you have to test is never yours in the first place, so it can’t be taken away from you. I’d recommend that you read this post.

    Thanks
    Sam

    Thank you for writing.

  3. You are suggesting it may not make sense for testers to give time-based estimates to their teams, but what about relative estimates? Let’s say a Rapid Software Tester is asked to participate in Planning Poker (relative-based story estimation) on an Agile Scrum team. I’ve always considered this a golden opportunity. Are you suggesting said tester may want to refuse to participate in the Planning Poker?

    Michael replies: This question was interesting enough to me that I did a blog post on it. Thanks!

