A discussion started recently in comp.software.testing about industry best practice:
When creating a complicated web based application from scratch, how many testers per developer would be considered a best practice? I have heard 1.5 testers for every developer. What are your thoughts on this?
My (lightly-edited for this blog) response was…
- If you want to find all the bugs, 100 testers per programmer would be much better than 1.5 testers per programmer. Your customers might consider this impressive (if they were willing to pay), but your CFO would freak out. And there’s still no guarantee that you’ll find all the bugs.
- If you want to keep costs low, 0 testers would be much better than 1.5 testers per programmer. It might even work for you, but do you have sufficient confidence that important questions about your product have been asked and answered?
- If you want to keep costs really low, 0 testers per 0 programmers would be better yet.
We haven’t yet talked about the skills of the testers or the programmers involved. We haven’t talked about the business domain and the attendant levels of risk. We haven’t talked about whether you account for test managers and admins as testers, nor whether your programmers should be counted as testers while they’re testing. (We also haven’t talked about the ugliness that would ensue if you took this 1.5 testers per programmer “best practice” literally for a team of three programmers—which half of the fifth tester would you want to keep? Hint: pick the end with the head.)
One of my points is that this is an unanswerable question without more context information—experience with the company and the development team in the business and technical domains? budget? schedule? co-location of programmers and testers? the mission of testing?

Another point is that, irrespective of the answers to the questions above, “best practice” is a meaningless marketing term, except for the meaning “something that our very large and expensive consulting company is promoting because it worked for us (or we heard it worked for someone) zero or more times”. “Industry best practices” is even worse. What industry? If you’re developing Web-based billing systems for a medical imaging company, are you in the “Web” industry, the “software” industry, the “medical services” industry, or the “finance” industry? I wrote about “best practices” for Better Software Magazine; you can find the article archived here (at http://www.developsense.com/articles/2004-09-ComparativelySpeaking.pdf).
Skilled testers help to mitigate the risk of not knowing something about the system that we would prefer to know. So: instead of looking at the number of testers as a function of the number of programmers, try asking “What do we want to know that we might not find out otherwise? What tasks might be involved in finding that stuff out? Who would we like to assign to those tasks?” And if you’re still stuck, get a tester (just one skilled tester) to help you to ask and answer those questions.
One reply on the thread went like this:
…in the end what counts is the defect rate. If the shop is using testing to improve reliability and the released defect rate is unacceptable, you need more testers. If the shop is using testing to monitor the development process and the testing defects found in your fault categories are too small to be statistically significant, then you need more testers.
That might be reasonable so far as it goes. However, here’s something that the context-driven crowd does to sharpen our mental chops. We take statements like these and apply to them the Rule of (at Least) Three (which comes from Jerry Weinberg’s writing and consulting): “If you can’t think of at least three alternatives, you probably haven’t thought about it enough.” When we want a moderate workout, we raise (at Least) Three to (at Least) Ten.
So above we see one possible approach to reducing the release defect rate. Can we think of at least nine others?
- You don’t need more testers; you need better-skilled testers (fewer, even) who are capable of achieving broader coverage and identifying more efficient oracles.
- You don’t need more testers; you need product managers who aren’t so quick to downplay the significance of bugs and defer them.
- You don’t need more testers; you need better programmers.
- You don’t need more testers, and you don’t need better programmers; you need to use a test-driven development strategy.
- You don’t need more testers; you need to stop wasting time on writing scripts, express risks and test ideas more concisely, and trust your testers to perform their own tests that address the risks (and document them on the fly).
- You don’t need more testers; you need your testers to spend less time in meetings.
- You don’t need more testers; you need closer relationships between tester, programmer, customer, and project management.
- You don’t need more testers; you need your program to be more testable, with scriptable interfaces, logging, and on-the-fly configurability (see the sketch after this list).
- You don’t need more testers; you need more frequent development and test cycles to reduce the lag time between coding, finding, and fixing bugs.
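To make the testability idea a little more concrete, here’s a minimal sketch in Python. Everything in it (the BillingApp class, its methods, the tax-rate example) is invented for illustration, since the original question names no product or API; the point is only that a product that can be driven below the GUI, with logging and injectable configuration, lets a tester get more information per hour.

```python
import logging

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s",
                    level=logging.INFO)
log = logging.getLogger("billing")

class BillingApp:
    """Hypothetical application core, exposed directly to test scripts
    rather than only through a GUI."""

    def __init__(self, tax_rate=0.13):
        # On-the-fly configurability: a risky variable is injectable,
        # so a tester can vary it without editing files or redeploying.
        self.tax_rate = tax_rate
        self.invoices = {}

    def add_invoice(self, invoice_id, amount):
        total = round(amount * (1 + self.tax_rate), 2)
        self.invoices[invoice_id] = total
        # Logging acts as an oracle: it records what the product decided
        # and which inputs drove the decision.
        log.info("invoice %s: amount=%.2f tax_rate=%.2f total=%.2f",
                 invoice_id, amount, self.tax_rate, total)
        return total

# A tester's script can now drive the product below the GUI:
app = BillingApp(tax_rate=0.05)
assert app.add_invoice("A-1", 100.00) == 105.00
```

The arithmetic is trivial on purpose; the interface is the point. When hooks like these exist, a test idea costs minutes to try instead of hours.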
And here are a few more for free. Some of these might be good ideas, and some might be pathological, but they’re approaches to fixing “defect escape” that I’ve seen before.
- You don’t need more testers; you need to make it more difficult for your customers to report defects, so as to avoid embarrassment to those responsible.
- You don’t need more testers; you need to get out of the software development business altogether.
- You don’t need more testers; you need to remove system features that were rushed into development.
- You don’t need more testers; you need to run on fewer platforms.
- You don’t need more testers; you need to start testing at a lower level of the product.
- You don’t need more testers; you need to reduce your emphasis on developing automated activities that don’t really test the product.
- You don’t need more testers; you need to increase your emphasis on developing automation that can perform high-volume, randomized, long-sequence performance and stress tests (see the sketch just after this list).
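And here’s what that last idea might look like, reduced to a minimal sketch in Python. The Cart class and its invariant are invented stand-ins (the discussion specifies no product); the technique is the point: a long randomized sequence of operations, a cheap oracle checked at every step, and a recorded seed so that any failure can be replayed exactly.

```python
import random

class Cart:
    """Toy stand-in for the product under test (invented for this sketch)."""
    def __init__(self):
        self.items = set()
        self.count = 0

    def add(self, x):
        if x not in self.items:
            self.items.add(x)
            self.count += 1

    def remove(self, x):
        if x in self.items:
            self.items.remove(x)
            self.count -= 1

    def clear(self):
        self.items.clear()
        self.count = 0

def random_walk(steps=10_000, seed=0):
    """Drive the product through a long random sequence of operations,
    checking a cheap invariant after every step."""
    rng = random.Random(seed)
    cart = Cart()
    for step in range(steps):
        op = rng.choice(["add", "remove", "clear"])
        if op == "add":
            cart.add(rng.randint(-1000, 1000))
        elif op == "remove" and cart.items:
            cart.remove(rng.choice(sorted(cart.items)))
        else:
            cart.clear()
        # Invariant oracle: the running count must always match the contents.
        assert cart.count == len(cart.items), (
            f"step {step}: count={cart.count} != {len(cart.items)} (seed={seed})")

if __name__ == "__main__":
    random_walk()
    print("10,000 random steps; invariant held")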
The defect escape ratio is just a number. No number tells you what you need. Numbers might provoke questions, but the world is a complicated place. If you haven’t thought of at least three (or ten, or seventeen) possibilities, there might be some important possibilities that you’ve missed.
When creating a complicated web based application from scratch, how many testers per developer would be considered a best practice? I have heard 1.5 testers for every developer. What are your thoughts on this?
It depends.
It depends on how many bugs the developers put in the software. 🙂
It depends on the skill of the testers.
It depends on the criticality and risks of failure.
It depends on the budget.
It depends on the schedule.
It depends on …
I could go on and on. The bottom line is that it depends on the context. These are the kinds of judgement calls that we testers and managers have to make all the time. There is no single formula that we can apply to estimate required resources or time.
One of my points is that this is an unanswerable question without more context information
I am amazed at how these questions abound. Blogs and discussion forums seem to be full of questions without context for which there is no single answer. I’d like to have an encyclopedia filled with formulas I could apply to answer such questions. It just doesn’t work that way.
Even more astonishing than the questions is that these questions get answers. I often wonder if the current state of tester certification props up the notion that there are single right answers to some of these questions.
We can’t answer qualitative questions with numbers alone. Those of us with “Quality Assurance” in our job description should know that. (Although I’d argue that no testing can assure quality.) Numbers in context can be applied as useful heuristics, provided that we consider other possible solutions to the problem.
Michael,
What would you say to comments like “we would like to keep our QA costs around 30% of overall costs”
“Testing is expensive and we are constantly becoming innovative and searching for ways to reduce the testing cost”
“Testing should not cost more than 40% of development cost”
All of these are variations on the dev-test ratio. The fundamental theme here seems to be “testing is overhead, so it should cost less”.
Another viewpoint here is that testing follows development, so it occurs when developers are done. So if development takes about 70% of the overall project, the remainder is for the test ….
Shrini
Michael,
Here is a translation of this article into Russian: http://goblingame.blogspot.com/2011/09/blog-post.html
Correction to “I wrote about “best practices” for Better Software Magazine; you can find the article archived here”: the link should be http://www.developsense.com/articles/2004-09-ComparativelySpeaking.pdf
Thank you for all the material!
Michael replies: Thanks for the bug report. Fixed.