Pradeep Soundararajan is a colleague of mine and of James Bach. Pradeep would say he’s a student, but in this case the student has surpassed this teacher. Pradeep writes and tests and thinks with passion. In a recent blog post, he came up with this gem:
“…it is not a test that finds a bug but it is a human that finds a bug and a test plays a role in helping the human find it.”
That’s very insightful. It puts the tester, rather than the test, at the centre of testing. It underscores the idea that we produce and perform tests with the intention of revealing information, but until some human observes and evaluates some outcome of the test, the test is silent. It also emphasizes that a test might provide us with the opportunity to observe one bug or several.
Bravo, Pradeep. I’ll be quoting that a lot.
4 replies to “If a test passes in a forest, and no one sees it…”
Michael — how about teams that have interesting ways of being alerted when the continuous integration builds/tests fail, such as with Lava lamps? Then you can say that the test is not silent, but screams (in a visual fashion in this case). You’ll reply that it still takes human eyeballs to watch that lamp, or hear that sound, etc., but I think you know what I mean.
I believe I know what you mean, and your conclusion is correct. But I think you’re confusing “the means by which we are notified of some inconsistency” with “knowing something”–confusing the tool with the task. This is a superset of Pradeep’s observation, in a way. The Lava lamp doesn’t find the bug, because the test doesn’t find the bug either. The lamp and the tests are media, in the McLuhan sense: things that extend our ability to find bugs (and, when taken beyond their capacity, reverse into not finding bugs).
There is no significance to the Lava lamp or the test until human eyeballs suggest to human brains that we might want to understand something about it. The failing test doesn’t tell you that there’s a bug. It tells you that you might want to investigate something. The results of that investigation could be:
You don’t have a bug in the product. The test fails because it’s an old test that you forgot to remove.
You don’t have a bug in the product. The test fails because it contains its own error.
You do have a bug in the product, but now that you look at it, you realize that the function is redundant anyway, so you refactor it out and remove the test.
You have three bugs in the product. The test alerts you to only one of them–the trivial one–but on further inspection, you realize that there are two other serious bugs in the same area, bugs for which you had no tests.
It’s this last one that’s the most interesting implication of Pradeep’s statement. My friend Dan Spear–one of the best developers I’ve ever known–was developing a little programming course back in the 90s. Very early on–because it was an assembler course–he introduced us to DEBUG. He pointed out that DEBUG would neither find nor fix our bugs; we had to do that, and often we would see more in the debugger than we had anticipated. Similarly, failing tests aren’t problems; failing tests point to possible problems. The sapient human–not the failing assertion in the test, not the lighting of the lamp–is what finds the bugs.
By the way, here’s one for fun, from Pradeep himself: You have a bug in the product, but the test didn’t find it; the bug is such that when one particular function is executed, it erroneously sends data down the USB port, lighting the lamp.
Thanks for the response, Michael. You’re right, it does take a human being at the end of the day to make sense of any alerts, or failed tests, etc.
My point though, and I believe you’re familiar with it from our crossing swords in the past :-), is that the assistance offered by automated tests is invaluable. To me, there’s simply no way that a human being can stay on top of regression testing without automated tests.
That being said, I totally agree with you that manual, brainy, exploratory testing is vital — see my latest blog posting for such an example: http://agiletesting.blogspot.com/2007/09/beware-of-timings-in-your-tests.html
I feel blessed to have you and James as my gurus.
I want to ensure that every second and minute you have spent on me is converted into value for you, for the testing community, and finally for myself.
My thoughts and ideas are largely influenced by you, James, Jerry Weinberg, Dr Cem Kaner, Jon Bach, Ben Simo, my dear Indian testers, and all those who have spent (and will spend) their valuable time helping me learn to become a better tester.
I must also mention the great testers who blog and help me think of new ideas daily. Thanks to people like Michael Hunter, Jon Kohl, Mike Kelly, Karen Johnson, Scott Barber, Shrini Kulkarni, Elizabeth Hendrickson, Harry Robinson… (there are a lot of people)
I thank you all, and God. I hope I become a valuable addition to the future of the testing community.
Bless me again, Michael, and I shall be able to notice the bugs that the Lava Lamps fail to signal to me!