Test Project Estimation, The Rapid Way

Erik Petersen (with whom I’ve shared one of the more memorable meals in my life) says, in the Software Testing Yahoo! group,

I know when I train testers, nearly all of them complain about not enough time to test, or things being hard to test. The lack of time is typically being forced into a completely unrealistic time frame to test against.

I used to have that problem. I don’t have that problem any more, because I’ve reframed it (thanks to Cem Kaner, Jerry Weinberg, and particularly James Bach for helping me to get this). It’s not my lack of time, because the time I’ve got is a given. Here’s a little sketch for you.

I’m sitting in my office. Someone, a Pointy-haired Boss (Ph.B.), barges in and says…

Ph.B.: “We’re releasing on March 15th. How long do you need to test this product?”

Me: (pause) Um… Let’s see. June 22.

Ph.B.: WHAT?! That can’t be!

Me: You had some other date in mind?

Ph.B.: Well, something a little earlier than that.

Me: Okay… How about February 19?

Ph.B.: WHAT!?! We want to release it on March 15th! Are you just going to sit on your hands for four weeks?

Me: Oh. So… how about I test until about, say, March 14.

Ph.B.: Well that’s… better…

Me: (pause) …but I won’t tell you that it’s ready to ship.

Ph.B.: How do you know already that it won’t be ready to ship?

Me: I don’t know that. That’s not what I mean; I’m sorry, I didn’t make myself clear. I mean that I won’t tell you whether it’s ready to ship.

Ph.B.: What? You won’t? Why not?!

Me: It’s not my decision to ship or not to ship. The product has to be good enough for you, not for me. I don’t have the business knowledge you have. I don’t know if the stock price depends on quarterly results, and I definitely don’t know if there are bonuses tied to this release. There are bunches of factors that determine the business decision. I can’t tell you about most of those. But I can tell you things that I think are important about the product. In particular, I can tell you about important problems.

Ph.B.: But when will you know when I can ship?

Me: Only you can know that. I can’t make your decision, but I can give you information that helps you to make it. Every day, I’ll learn more and more about the product and our understanding of it, and I’ll pass that on to you. I’ll focus on finding important problems quickly. If you want to know something specific about the product, I’ll run tests to find it out, and I’ll tell you about what I find. Any time you want to ask me to report my status, I’ll do that. If at any time you decide to change the ship date, I’ll abide by that; you can release before or after or on the 15th—whenever you decide that you don’t have any more important questions about the product, and that you’re happy with the answers you’ve got.

Ph.B.: So when will you have run all the tests?

Me: All the tests that I can think of? I can always think of more questions that I could ask and answer about the product—and I’ll let you know what those are. At some point, you’ll decide that you don’t need those questions answered—the questions or answers aren’t interesting enough to prevent you from releasing the product. So I’ll keep testing until I’m done.

Ph.B.: When will you be done?

Me: You’re my client; I’ll test as long as you want me to. I’ll be done when you ask me to stop testing—or when you ship.

Rapid testers are a service to the project, not an obstacle. We keep providing service until the client is satisfied. That means, for me, that there’s never “not enough time to test”; any amount of time is enough for me. The question isn’t whether the tester has enough time; the question is whether the client has enough information—and the client gets to decide that.

9 replies to “Test Project Estimation, The Rapid Way”

  1. Bingo! Thanks for the great example discussion! That’s exactly what I believe us testers should be doing: providing information to those that make the decisions.

    The trick is to get that through to those buried in a process that makes “QA” (really testers) the gatekeeper, the bottleneck, and the scapegoat.

  2. Thanks for presenting a novel approach to handling open-ended questions like “When will testing be completed?” In my opinion, the challenge in driving this approach and emphasizing it to management lies in effectively articulating thoughts like “I will test as long as you wish” and “I will not make the mistake of deciding when to ship.”

    With due respect to all *managers* out there: most of the time, managers want a number—one that suits them (as you have articulated very well)—and expect the tester to give that *number*. A few managers conveniently shift the responsibility of making decisions about the ship date to the testers.

    There are two extreme variations of this “shifting” behavior.

    In one case: no release until QA (a.k.a. testers, in some communities) says it is ready.

    In the other: even when the tester says there are issues that *matter*, they ship the product anyway (while blaming testing for delaying the ship date in some way).

    Our industry really needs a paradigm shift: pushing testing as a service (an open-ended quest for problems) while staying away from assuming the role of the project manager.

  3. You did a good job representing the pointy haired manager.

    I worked on a high-quality team that valued testing. Testing had a few metrics to report. The first was how much of the written and unwritten requirements had specific test cases. The hard goal was 100%. Testers developed test cases while requirements were being written. The next metric was the percentage of test cases that were automated. The goal was 100%. Unfortunately, this was not a hard goal. When these goals were met, we could answer the question.

    Testing will be done when all test cases are written, automated (as much as possible), and successfully executed. Testing was not done to fulfill a schedule; it was done to verify all the written and unwritten requirements.

    Re-releases were simpler: if testing had found the defect, only re-execution was necessary.

    The quality of development determines how many releases are necessary to successfully execute all test cases.

    But pointy-haired managers can ignore the recommendations of high-quality teams and ship without regard to developers and testers.

  4. I have used the magic number approach referenced above by Shrini Kulkarni, because I am still somewhat naive in the art of test management, and because my project managers seem to have a strong need for this kind of tracking mechanism. I did, however, reserve the right to change the magic number as the test team learned about the system.

    Our magic number was simply a count of the test cases that had been agreed upon by the testers and other project stakeholders. I had toyed with the idea of using a more abstract number, “testing points”, to accommodate some of our testing that was not easily represented as scripts, but found the stakeholders could not get their heads around the concept.

    In such cases, I’d suggest something less abstract that is still not a number. Consider session-based test management. Divide the testing effort (which is, in most organizations, typically set as a period of time to begin with) into a number of sessions, guided by charters of one to three sentences. Each charter expresses a mission for a time-boxed period of testing. The focus of this kind of testing is on time allocated for discovery, rather than on some number of tests. That’s important, because you can’t schedule discovery, but you can allocate time for it. (Note that investigating and reporting bugs will impact your ability to obtain the desired coverage, so SBTM contains protocols for adjusting; see the link above.)
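To make the session bookkeeping concrete, here is a minimal sketch in Python. The class, the example charters, and the time budget are my own illustrations, not part of SBTM’s defined format:

```python
from dataclasses import dataclass

@dataclass
class Session:
    """One time-boxed period of testing, guided by a short charter."""
    charter: str   # a one- to three-sentence mission
    minutes: int   # the time box, typically 60-120 minutes

# Divide a fixed testing budget into sessions rather than into test counts.
budget_minutes = 5 * 8 * 60  # e.g. one tester-week (hypothetical figure)
sessions = [
    Session("Explore the checkout flow, looking for pricing errors.", 90),
    Session("Probe the file importer with malformed CSV inputs.", 90),
]
remaining_minutes = budget_minutes - sum(s.minutes for s in sessions)
```

Tracking remaining session time, rather than remaining test cases, keeps the emphasis where the paragraph above puts it: on time allocated for discovery.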

    I found that the cumulative number of tests passing over time was a fairly linear function, and that extrapolating that line to its intersection with the magic number was a better predictor of ship date than the project schedule. The project managers could have shipped earlier if they wanted, but they had bought into the idea that these tests must be passing before they deployed the system.
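The extrapolation described above amounts to fitting a straight line to the cumulative pass counts and finding where it crosses the magic number. A sketch with invented numbers (the daily counts and the magic number are illustrative, not from the project described):

```python
# Cumulative passing-test counts observed at the end of each day (invented).
days = [1, 2, 3, 4, 5]
passing = [40, 82, 118, 161, 199]
magic_number = 400  # the agreed-upon total count of test cases

# Ordinary least-squares fit: passing ~ slope * day + intercept.
n = len(days)
mean_d = sum(days) / n
mean_p = sum(passing) / n
slope = (sum((d - mean_d) * (p - mean_p) for d, p in zip(days, passing))
         / sum((d - mean_d) ** 2 for d in days))
intercept = mean_p - slope * mean_d

# The day on which the trend line reaches the magic number.
predicted_day = (magic_number - intercept) / slope
```

With these figures the line rises about 40 tests per day, so the trend crosses 400 passing tests a little after day 10, regardless of what the project schedule says.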

    I wish that more people would buy into this concept: passing tests, and in particular the number of passing tests, are not particularly interesting, especially not from a testing standpoint. When a test passes, it means that the program is capable of answering a specific question that we ask it in a particular set of circumstances. That’s like training a student to pass a multiple choice test for which you already know the questions and answers. Yes: it is likely very important that the tests pass before deployment. Yet when those tests pass, whether the program is ready for deployment is an open question. Consider that those tests could pass, and the program could still have terrible problems with it. Failing tests of this kind tell you that you can’t ship. Passing tests tell you that you could ship, but it’s wise to do that if and only if the sole criterion for shipping is that those specific tests pass.

    As Jerry Weinberg pointed out (in 1961!), the sheer number of tests passing is of little interest in itself. We get fooled, he says, by confusing repeatability with reliability, because of our experience with people. When they can give the same answer to the same question over and over again, we presume that people are reliable, mostly because we’re used to seeing them as variable. But we’re dealing with computers and software, for whom repeatability is easy. Repeated results demonstrate the computer’s prowess at doing the same thing at different times, and when we think we’re varying the tests, often we demonstrate the computer’s prowess at doing the same thing with different numbers. But our purpose, Jerry goes on to say, is to prove not repeatability but adaptability.

    The magic number has allowed me to respond to the question “When will testing be done?” with data, and at least the appearance of some intelligence. Now I am wondering if I can de-emphasize the magic number without losing the benefits I have gained from it. Or is this benefit merely fool’s gold?

    I think that’s a really important question, and I’d consider it very carefully. If you’ve obtained managers’ consent to do some actual testing, you have achieved some benefit, to be sure, and it would be bad to lose that. But I’d agree that you need to emphasize something different. Here’s my suggestion: it’s not your call, because it’s a business decision, not a technical one. The interesting issue for managers is not when testing is done, but when development is done. Development is done when managers and programmers believe that there are no more problems in the product that are for the business’ purpose important enough to fix.

    As a tester, I keep a list of three things: 1) Knowns—important problems in the system that management might choose to have fixed; 2) Known unknowns—areas of the system that we know something about, but that we could know more about; and 3) Unknown unknowns—areas of the system where our knowledge is so limited that we’re not even sure of what there is to know about it. As a tester, my goal is to keep management apprised of what’s in each state, and to move bits of knowledge up the list by at least one level, from unknown unknowns toward knowns. As noted in the blog post above, management always has an idea of when they’d like to ship. If I keep management apprised of where we’re at, the knowledge and direction about whether we’re done and how we’re going to get there becomes collaborative, and the decisions on what to do (properly) live with management.
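The three lists can be sketched as a simple structure. The example items and the `promote` helper below are hypothetical, my own illustration of the bookkeeping rather than anything prescribed in the post:

```python
# Three levels of knowledge about the product (example items are invented).
knowledge = {
    "knowns": ["Crash when saving over a read-only file"],
    "known_unknowns": ["Performance under heavy concurrent load"],
    "unknown_unknowns": ["Behaviour of the third-party payment module"],
}

def promote(item: str) -> None:
    """Move an item up one level as testing teaches us more about it."""
    levels = ["unknown_unknowns", "known_unknowns", "knowns"]
    for lower, higher in zip(levels, levels[1:]):
        if item in knowledge[lower]:
            knowledge[lower].remove(item)
            knowledge[higher].append(item)
            return

# A session of load testing turns a known unknown into a known problem.
promote("Performance under heavy concurrent load")
```

Reporting the contents of each list, rather than a pass count, is one way to keep management apprised while leaving the shipping decision with them.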

