In preparation for our automation and management classes, James Bach and I are currently analysing ROI calculators (“Here’s what you can save over N years using our product!”) and we’re going deep on reverse-engineering one from a prominent tool vendor.
Opaque formulas; undefined and unexplained terminology; unnamed and inexplicable constants; weird, inexplicable coefficients in the calculation; poor testability; hilarious bugs sitting right there on the surface; impossible conclusions due to all that… this tool has it all. We’ll be featuring it in upcoming classes on management and on automation in testing, and we’ll be looking at others like it. Since we’d like to reserve the one we’re examining for exercises in our classes, here’s another. What on earth are you thinking, Katalon?
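To make the pathologies concrete, here is a purely hypothetical sketch (not any vendor’s actual formula) of the kind of calculation these tools tend to perform. Every name and constant below is invented for illustration; the point is how little work the formula does to earn its precise-looking output.

```python
def roi_savings(testers, salary, years, automation_pct=0.7):
    """Claimed dollar 'savings' over `years` from automating tests.

    Every number here is suspect, in exactly the ways described above:
    - automation_pct=0.7: why would automation replace 70% of testing effort?
    - 1.25: an unnamed "overhead coefficient" with no stated source
    - "testing effort" itself is never defined (a construct validity problem)
    """
    manual_cost = testers * salary * 1.25          # 1.25: unexplained constant
    saved_per_year = manual_cost * automation_pct  # assumes testing is fungible labour
    return saved_per_year * years                  # ignores tool cost and maintenance

# The formula emits a big, precise-looking number:
print(roi_savings(testers=5, salary=80_000, years=3))  # 1050000.0
```

Nudge either magic constant and the “savings” swing by hundreds of thousands of dollars, with no way to validate one choice over another. That’s the trouble with opaque coefficients: the output inherits a precision the inputs never had.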
One of the bigger problems with tools like this is construct validity. In a nutshell, construct validity is about whether the thing that you’re observing or measuring is an instance of this or not an instance of this — or as it’s sometimes put, the ability to count to one. Without that, you’re comparing apples to orange Volkswagens, and your “calculations” and “measurements” are bullshit — talking about things with reckless disregard for the truth.
Tools can help us to extend our coverage, accelerate our collection of data, intensify our focus, and refine our observations. Indeed, we’ve been using tools and code in our analysis of the calculators to afford faster, deeper, and more thorough analysis.
From the perspectives of measurement theory and accounting, though, it would be ridiculous to put a dollar figure on “how much money we saved” by using tools, like talking about how much effort the carpenter saved by using a hammer instead of pounding nails with his forehead. It would be equally ridiculous to divide our work into “automated research” and “manual research”, as people talk about “manual testing” and “automated testing”.
It’s remarkable that the vendors seem to think that the testing community, their managers, and their executives are suckers. The fact that these ROI calculators aren’t being laughed off of the vendors’ Web sites affords them good reason to believe it.
What is even more remarkable, and sad, is that no one ever calls the vendors out on the bullshit. We can do that, and we need to be able to do that if we want a community worthy of calling itself “engineering”.
Further reading:
Calling Bullshit (Bergstrom and West)
Beyond Measure (James Vincent)
Reliability and Validity in Qualitative Research (Kirk and Miller)
Software Engineering Metrics: What Do They Measure and How Do We Know? (Kaner and Bond)
I know this is not a new insight, but for me it’s always important to frame conversations about automation not in terms of money saved, but in terms of risks reduced, and of how testing is going to be enabled.
When pressed for a “number” I have so far (mostly) been lucky enough to work with organisations which will at least listen to me when I say “I am not going to give you one, and here is why”.
In my experience, any savings tend to come well down the track. Most teams I work with have made the decision simply to do more testing, rather than to deliver dollar savings.