I have this conversation rather often. A test manager asks, “We’ve got a development project coming up that is expected to take six months. How do I provide an estimate for how long it will take to test it?” My answer would be “Six months.”
Testing begins as soon as someone has an idea for a new product, service, or feature, and testing ends, for the most part, when someone decides to release or deploy it. Testing happens at the same time as development does. Testing starts when development starts, and when development is done, testing ends.
In reply to that, the test manager sometimes says, “That wouldn’t be practical.”
That answer used to confuse me—it seems pretty impractical to develop something without exploring it, experimenting with it, and checking it pretty much continuously. But I now believe that “that” refers not to testing, but to the test manager’s fear of having a conversation with a development manager—one with a factory-oriented model of software development and testing—who is asking for the estimate.
So, some time ago, I wrote this post, to help people to work through the problem and to offer some solutions for it. The core message is that thinking of a project in terms of “a coding phase and then a testing phase” is like thinking of programming in terms of a “typing phase and then a thinking phase”. If reframing a misbegotten model of development is impractical, to me it seems vastly more impractical to live with the consequences of that model.
I think there’s also an economic factor here. Development is seen as the important activity and testing as the required phase before releasing. Stating that testing should last as long as development would seem to require twice the budget for organizations that have to log hours against a project, or at least a larger budget than desired.
Michael replies: This is, to me, a very odd way to think. I don’t know of any sensible person who would suggest that tasting, looking, smelling, listening, touching, observing, and evaluating would double the time or effort required for cooking a meal. That’s because paying attention is part of cooking; paying attention starts when cooking starts, and finishes when cooking ends. Testing is the paying-attention part of development. Indeed, a failure to pay attention seems to me like a great way to double (triple, quadruple) the time it takes to get a meal on the table. (Or maybe the unrecoverable disaster due to inattention will trigger Plan B, whereupon everyone will wait for the pizza to arrive.)
We have to fight a lot of misconceptions about our craft while performing at top level. At the end of the day we just want to make the end product better. I don’t think we’re asking for much. Just a fair chance to do what we do best.
To me, fairness doesn’t have much to do with it. This is business; economics, in the sense of the relationship between cost, value, and risk. I’m providing a service to my client, but I’m not a slave. If my client doesn’t get value from the services I offer, he has the right to end our relationship; if I can’t deal with the constraints that the client is asking me to work under, I have the right to end the relationship. Long before it comes to that, though, we both have ongoing opportunities to discuss and negotiate the terms and conditions under which we agree to work together; the constraints to be managed; the resources that we might apply; and the costs, value, and risks associated with what we’re doing. Agreement is crucial; if we’re not agreeing, I’m not sure I’d appeal to fairness. I’d work on the basis of identifying value (or cost reduction) from what the client is prepared to do or pay to have his requirements met.
Michael –
The test manager’s helplessness to say “six months” and have his way with the stakeholder or project manager is due to the practice of treating “testing” as a cost that is supposed to be 1/3 of the total, or some number like that.
Michael replies: Don’t debase the word “practice”. A failure to think and to reason is not a practice.
This so-called “best practice” requires testers to estimate IN ADVANCE and tell the PM that it will cost so much (and hence take so much time).
When the test manager says six months, he or she will be ridiculed, and the PM will counter with the 1/3 cost metric.
This is reality.
This is no more reality than programming a machine to take a number as input and divide it by two, presenting the output as your estimate, and claiming that you’re doing responsible work.
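Michael’s jab about mechanical estimation can be made literal. Here’s a caricature in code—entirely hypothetical, of course; nobody publishes their “1/3 rule” as a function, but this is what it amounts to:

```python
def irresponsible_estimate(development_months: float) -> float:
    """A caricature of ratio-based estimation: take a number as input,
    divide it, and present the output as an estimate. No model of the
    product, the team, the risks, or the mission is consulted."""
    return development_months / 2

# The machine dutifully "estimates" without thinking:
print(irresponsible_estimate(6))  # 3.0
```

The point, as the post argues, is that no amount of arithmetic rescues an estimate that isn’t grounded in learning about the actual work.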
I despair at people claiming that this is “the real world” when they’re talking about the fantasy world—the one where things like engineering and thinking and responsibility don’t exist. I’ve written about that, too. I know you well enough that I believe you can figure out how to talk to these people—or to walk away and offer your services to people who would value them.
Exact thoughts captured. A very important conversation being offered as a solution. If you don’t type and think at different times while programming, why would you want to keep coding and testing that way?
[…] Blog: Very Short Blog Posts (22): “That wouldn’t be practical” – Michael Bolton – http://www.developsense.com/blog/2015/01/very-short-blog-posts-22-that-wouldnt-be-practical/ […]
Michael,
I’m well aware of such conversations. As I haven’t mastered the way of taking them from practical impracticality to healthy rationality, I’ve got a couple of tricks that have generally worked well. I’m offering these heuristics to your readers, and for your judgment.
1. Appeal on their channel. If they talk numbers, talk numbers. If they perceive through emotions, use emotions.
– If they talk about the cost of testing involved earlier, talk of the cost of defects fixed later vs. fixed earlier.
– If they cite “best practice” suggestions, point to the disclaimers on that “best practice” and to how they apply in your particular context. Or point to another “best practice” that contradicts the first one.
– If they refer to “success stories” (especially agile success bedtime stories), point to the “success conditions” suggested by the same authors of those stories, and ask whether those can be met.
2. Use backward walkthrough.
What is the release date? How long is the deployment phase? Do you expect all important problems to be fixed prior to the start of deployment? – How much time do you reserve for that? Do you expect all bug fixes to be retested? – Let’s throw in a reasonable time box here. Do you expect the product to get a round of testing after all the bug fixes and retesting? – Let’s reserve some time. And let’s mark these dates and timeframes on the calendar.
Now, how do we get to know about those bugs in the first place? Let’s put in a timeframe for testing and a time box for bug investigation and logging, based on the guesstimated number of bugs. And I’ll tell you what can reasonably be included in that testing timeframe.
So, is this the earliest point at which you want to know about the bugs and their impact on the schedule? If not, let’s keep moving backwards, adding testing-and-fixing cycles.
“Now, dear Project Manager, we can convert those guesstimates into costs, and into prevented expenses and addressed risks, and if the costs seem too high for you, let me know when and what testing you’re willing to give up based on your comfort level with the risks. And I’m here to help you to have as many rounds of partitioning and risk assessment as we need to work out a good-enough estimate and plan.”
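Albert’s backward walkthrough lends itself to a simple sketch. This is an illustrative calculation only; the phase names, durations, and release date below are made-up guesstimates, not anything Albert specified:

```python
from datetime import date, timedelta

# Hypothetical phases, walked backwards from the release date.
# All durations (in days) are placeholder guesstimates for illustration.
release_date = date(2015, 6, 1)
phases_backwards = [
    ("deployment", 5),
    ("final round of testing after fixes and retesting", 5),
    ("retesting of bug fixes", 5),
    ("fixing the important problems", 10),
    ("testing, bug investigation, and logging", 15),
]

# Walk backwards, marking each phase's start and end on the calendar.
cursor = release_date
for name, days in phases_backwards:
    start = cursor - timedelta(days=days)
    print(f"{start} -> {cursor}: {name}")
    cursor = start

print(f"Testing must start no later than {cursor}")
```

The output makes the conversation concrete: each timeframe the project manager refuses to shrink pushes the start of testing earlier, and each one they do shrink becomes an explicit, negotiated trade of risk against cost.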
Thanks,
Albert
This is both true as a first approximation and also simplistic. Yes, QA should be involved from the front to the back of a project, but often that question really means “How many people, at what points, need to be allocated to test this software to release standard?” This problem is actually magnified if you have testers on the project front to back.
It might mean that. But there is no such thing as “testing software to a release standard”. You’re thinking, perhaps, of developing software to a release standard (whatever that might be in a particular context).
I think the question is more like “How many people (how much time) do we need to feel good about the amount of testing that we’re doing and the testing coverage we’re obtaining?” “How many people (and how much time) do we need to close the risk gap—the distance between what we know already and what we need to know about the product?” And I don’t see how that problem is “magnified if you have testers on the project front to back”. It seems to me that figuring that out can start at any time—probably the earlier the better—and it’s subject to change at any time. “How many testers do we need?” is an empty question without asking “to do what?”—and what needs to get done is a matter of learning, not just calculation.
That is a knottier problem. Sometimes developers can spend a very long time producing something that has a fairly concrete outcome and takes a short time to test, but more likely they’ve added another configuration option that ripples through half the system and leaves difficult choices about coverage and risk. Development and testing can happen in parallel in a macro sense, but not necessarily in a micro one – developers can’t fix unfound bugs.
All the more reason to suggest that it’s unwise not to be thinking about testing, and performing it, from the outset. Note, by the way, that this doesn’t mean that testing has to be done by someone called a “tester”, even though I think that would be a pretty good idea.
Project management almost never wants to pay the people or time cost to do it right, which is where the difficulty comes in, and your test manager has probably been abused enough times to know which fights are winnable and worth having. In general, this stuff is hard, and that’s why you’ll get negative responses about realism from those at the coalface.
I’m intrigued by what you mean by “doing it right”. “Right” is a pretty subjective notion, don’t you think? Are you considering “right” in terms of value, or in terms of cost, or in terms of risk? Are you sure you’re evaluating those things on what the business could reasonably perceive? (In other words, who is to say that your version of “right”—or mine—ought to prevail?)
I suspect a good part of the problem is that management is frequently and justifiably unsure about the value that testing can provide because there’s a lot of really bad testing work being done out there. In other words, this isn’t simply a matter of misguided managers; there’s a systemic problem.
Albert’s approach is the only sane one.
Only, eh? You seem awfully sure about that. In any case, we’ll be doing more on estimation soon. Stay tuned.