Every now and then, in some forum or other, someone says something like “75% of the testing done on an Agile project is done by automation”.
Whatever else might be wrong with that statement, it’s a very strange way to describe a complex, cognitive process of learning about a product through experimentation, and seeking to find problems that threaten the value of the product, the project, or the business. Perhaps the percentage comes from quantifying testing by counting test cases, but that’s at least as feeble as quantifying programming by counting lines of code; more so, probably, as James Bach and Aaron Hodder point out in “Test Cases Are Not Testing: Toward a Culture of Test Performance”.
But let me put this in an even simpler way: If someone said “management in an Agile project is 40% manual and 60% automated” (because managers spend 60% of their time in front of their computers), most of us would consider that as reflecting a very peculiar model of what it means to manage a project. If someone said that programming in an Agile project is “30% manual and 70% automated” (because most of the work of programming, that business of translating human instructions into machine language, is done by the compiler), we’d shake our heads over that person’s confusion about what it means to do programming.
Why don’t people have the same reaction when it comes to testing?
This is a good observation, but it doesn’t really get at the real problem: people who mistakenly believe that manual testing is just testing that hasn’t yet been automated.
Such people would probably see the percentage of automated testing as a measurement of progress on their misinformed journey to eradicate manual testing.
Michael replies: I’d hope that my readers would be able to infer the real problem from the post, from the links within it, and from the rest of the material on this site. But if they weren’t able to do that, you’ve made it explicit.
I agree. I have heard some management-driven goals of 100% automation code coverage. I scratch my head and say, “100% of what?” Even using SONAR on Jenkins, you have to allow a certain set of “rules” to dictate your code coverage percentage. It’s an abstract notion that drives teams mad and causes more headaches than getting actual work done. I’ve defined it at a very granular level and found more acceptance and success with: “70-75% automation coverage of applicable conditions, derived from requirements, with a maintenance burden not to exceed the time spent performing in-sprint work and exploratory testing.” It’s a mouthful, but it’s more realistic. I took your advice and discovered for myself the “green madness” of FitNesse and other tools like Cucumber. I’ve warned others and watched as their goals explode. Your advice is valid and paramount to building a highly tested software product.
Michael replies: If you’re scratching your head and saying “100% of what?”, I’m unclear on how “70-75% automation coverage” is any more realistic.
It seems to me that percentages for coverage (that is, accounting for coverage on a ratio scale) don’t make much sense. Except for very narrow models, we can’t describe coverage by valid and reliable quantitative means.
“Got You Covered”: http://developsense.com/articles/2008-09-GotYouCovered.pdf
“Cover or Discover”: http://developsense.com/articles/2008-10-CoverOrDiscover.pdf
“A Map by Any Other Name”: http://developsense.com/articles/2008-11-AMapByAnyOtherName.pdf
In Rapid Testing, we talk about coverage in qualitative terms, using an ordinal scale. 0 means “we know nothing about this area”; 1 means “we’ve done some smoke and sanity testing”; 2 means “we’ve covered the common, the core, the critical, the essential aspects of our model”; and 3 means “we’ve covered this model really thoroughly; if there were a serious problem we’d probably know about it by now.”
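For illustration only, here is a minimal sketch, in Python, of what recording such ordinal ratings might look like. The CoverageRating names and the product areas are hypothetical (they are not part of Rapid Testing); the point is that the numbers merely order the levels rather than measure anything.

    from enum import IntEnum

    class CoverageRating(IntEnum):
        # Ordinal ratings: the numbers order the levels; they measure nothing.
        NOTHING = 0  # we know nothing about this area
        SMOKE = 1    # we've done some smoke and sanity testing
        CORE = 2     # common, core, critical, essential aspects covered
        DEEP = 3     # covered really thoroughly; a serious problem would
                     # probably have surfaced by now

    # A hypothetical coverage report: recorded judgments, not measurements.
    report = {
        "login": CoverageRating.DEEP,
        "reporting": CoverageRating.CORE,
        "import/export": CoverageRating.SMOKE,
        "localization": CoverageRating.NOTHING,
    }

    for area, rating in report.items():
        print(f"{area}: {rating.name}")

Because the scale is ordinal, a comparison like “login is more thoroughly covered than localization” is meaningful; an average of the numbers, or a percentage derived from them, is not.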
“If someone said ‘management in an Agile project is 40% manual and 60% automated’ (because managers spend 60% of their time in front of their computers), most of us would consider that as reflecting a very peculiar model of what it means to manage a project.”
I’ve always been interested in the concept of applying automation to management. For example, could a CEO be replaced by automation?
I find that people’s reactions to automation can be quite telling about how much they’ve really considered what their role or job on a team entails.
If I look at the question “Why don’t we have the same reaction when it comes to testing?”, the phrase that popped into my mind was “reductive view.”
Many testers are happy with a reductive view of what they do, because it limits their responsibility and accountability.
Michael replies: I wish them well in their future endeavours.
I think what Jaison is implying is that he has 70-75% of “applicable conditions” automated, which I would read as an “agreed list of things” that are chosen to be automated. Let’s say there are 20 (subject to change at any time as our agreements change). There may be things outside of this list not considered and therefore not included in the calculation (i.e. implied requirements). The “things not done” on the list would make up the percentage not automated. So if I have a list of 20 things I wish to cover, and 10 cannot be automated since the cost outweighs the benefit, I would have 50%. If there were only one “thing on the list” that I couldn’t automate due to cost-benefit, then I would have 95%. Note: as I wrap up writing all this, I realize the number doesn’t seem to have any value, and I am left wondering: what does 70-75% automation coverage mean? What is the difference between 50% automation and 95% automation in my example?
Useless Management Summary: 50% is good because we didn’t waste any money, but 95% is good because it is almost 100%.
Doesn’t really seem that helpful, after all.
That is correct.
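To make the arithmetic in that example concrete, here is a minimal sketch in Python; the “agreed list” of twenty conditions is entirely hypothetical, invented for illustration. It shows that the percentage is just a ratio over whatever happens to be on the list, so anything never put on the list (implied requirements, say) silently vanishes from the number.

    # Hypothetical "agreed list" of applicable conditions; True means
    # the condition has been automated.
    def automation_coverage(conditions):
        # The denominator is the list itself, so anything never placed
        # on the list drops out of the calculation entirely.
        automated = sum(1 for done in conditions.values() if done)
        return 100 * automated / len(conditions)

    # 20 conditions, 10 judged too costly to automate -> 50%
    half = {f"condition-{n}": n > 10 for n in range(1, 21)}
    print(automation_coverage(half))  # 50.0

    # 20 conditions, only 1 left unautomated -> 95%
    most = {f"condition-{n}": n != 1 for n in range(1, 21)}
    print(automation_coverage(most))  # 95.0

Both numbers look precise, but each depends entirely on what was (and wasn’t) placed on the list in the first place, which is exactly the commenter’s conclusion.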