“Quality is value to some person.” —Jerry Weinberg
In the agile-testing mailing list, Steven Gordon says “The reality is that meeting the actual needs of the market beats quality (which is why we observe so many low quality systems surviving in the wild). Get over it. Just focus on how to attain the most quality we can while still delivering fast enough to discover, evolve and implement the right requirements faster than our competitors.” Ron Jeffries disagrees strongly, and responds in this blog post.
I think Steven is incorrect, because meeting the needs of the market doesn’t “beat quality”. A product that doesn’t meet the needs of the market (or at least of its own customers) is by definition not a quality product. Steven errs, in my view, by suggesting that “low quality systems” survive. Systems survive when, as buggy as they might be, they supply some value to someone. Otherwise, those systems would die. What Steven means, I think, is that these products fail in some dimensions of quality, for some people, while supplying at least some value for some other people.
Yet I think Ron is incorrect too when he claims that there is no trade-off between speed and quality, and for the same reason: speed is also a dimension of quality, value to some person. So if I’ve offended either Steven or Ron, I hasten to point out that I’m probably offending the other equally.
I think it’s a mistake to suggest that quality is merely the absence of bugs, as both appear to suggest. Here’s why:
- “Some person” is a variable; there are many “some persons” in every project community. The client and the end user are examples of “some person”, but so are the programmers, the managers, the testers, the documenters, the support people, etc., etc.
- Value to a given person is multivariate (that is, for each person there is a collection of variables, several things that that person might value in varying degrees).
- Capability and functionality are important dimensions of value to some person(s).
- Rapid iteration and time to delivery are dimensions of value to some person(s).
- Security, reliability, usability, scalability, performance, compatibility, maintainability and many other -ilities are also dimensions of quality, some of which may be of paramount importance and some of which are of lower importance to some person(s).
- The absence of bugs is one (and only one) dimension of value to some person(s), if it’s even that. It’s more accurate, I contend, to think of bugs as things that threaten or limit value. For this reason…
- We get severely mixed up when we describe “quality” solely in terms of the absence of bugs.
- The absence of bugs might matter less, much less, than the presence of other things that are valuable. Despite the protestations of some “quality assurance” people who are neither managers nor business people, it might not be insane to value features over fixes. Questionable, I would argue, but among other things, wouldn’t the judgment depend on the severity of the problem and the risk of the fix?
- The absence of bugs is completely irrelevant if the software doesn’t provide value to some person(s). A bug-free program that nobody cares about is a lousy program.
These differing views of value mean that there will be differing views on the notion of working software (also known as valuable software). How do we handle these different views when they compete? By responding to change through customer collaboration, which happens through interactions between individuals. That’s what “agile” is supposed to mean. It’s not just about shorter morning meetings with no chairs; it’s about human approaches to solving human problems all day long. Well… it used to be, maybe, for a while. Maybe not any more.
So I disagree with Ron when he suggests that there isn’t a trade-off between time to delivery and quality. That’s because time to delivery isn’t distinct from quality either; it’s another dimension of quality. And there’s always a trade-off between all of the dimensions of quality, depending on what people value, who and what informs the ultimate decisions, and who has the power to make those decisions.
I do agree strongly with Ron, though, when he suggests that the presence of problems in the code is a serious threat to many of the other dimensions of quality, and that reducing those problems as early as possible tends to be a good investment of time. In his blog post, he has articulated numerous ways in which those problems threaten the ability of the programmers to do valuable work. Test-driven development and unit tests can be powerful ways of avoiding these problems. Collaboration and technical review—pair programming, walkthroughs, inspections, knowledge crunching sessions (as Eric Evans calls them in Domain-Driven Design)—not only help to prevent problems, but also afford opportunities for people to learn and exchange knowledge about the product.
I’m not in the business of telling programmers how to do their work, but as a tester I can say that problems in the code threaten the quality of our work too. Specifically, they constrain the testers’ ability to provide value in the form of knowledge about the product. Testers are often asked, “Why didn’t you find that bug?” One plausible answer is “because we were so busy finding other bugs.” Well-programmed code, already tested to some degree by the programmers themselves, is enormously important for the testers. Bugs block our ability to observe certain parts of the program, they add uncertainty and noise to our observations of the system, and they cause us to spend time in investigation and reporting of the problems. This represents opportunity cost; bug investigation and reporting compromises our capacity to investigate and cover the rest of the test space, which in turn gives bugs more time and more places in which to hide out.
So a well-tested program can be explored more quickly. Having a hard time persuading your manager or your (cough, cough) Scrum team? This presentation on finding bugs vs. coverage sets out the problem from the point of view of the testers. As usual, the business decisions are for those who manage the project. It’s up to us—the programmers, the testers, and the other developers on the project—to present the technical risks in the context of the business risks. It’s up to all of us to collaborate on balancing them.