Every now and then, in some forum or another, someone says something like “75% of the testing done on an Agile project is done by automation”.
Whatever else might be wrong with that statement, it’s a very strange way to describe a complex, cognitive process: learning about a product through experimentation, and seeking to find problems that threaten the value of the product, the project, or the business. Perhaps the percentage comes from quantifying testing by counting test cases, but that’s at least as feeble as quantifying programming by counting lines of code; more so, probably, as James Bach and Aaron Hodder point out in “Test Cases Are Not Testing: Toward a Culture of Test Performance”.
But let me put this in an even simpler way: If someone said “management in an Agile project is 40% manual and 60% automated” (because managers spend 60% of their time in front of their computers), most of us would consider that as reflecting a very peculiar model of what it means to manage a project. If someone said that programming in an Agile project is “30% manual and 70% automated” (because most of the work of programming, that business of translating human instructions into machine language, is done by the compiler), we’d shake our heads over that person’s confusion about what it means to do programming.
Why don’t people have the same reaction when it comes to testing?