In the first post in this series, I proposed “that those things that we usually call ‘unit tests’ be called ‘unit checks’.” I stand by the proposal, but I should clarify something important about it. See, it’s all a matter of timing. And, of course, sapience.
After James Bach’s blog post titled “Sapience and Blowing Peoples’ Minds”, Joe Rainsberger commented:
Sadly, the distinction between testing and checking makes describing test-driven development (TDD) somewhat awkward, because it’s a test when I write it and a check after I run it for the first time. Am I test-driving or check-driving?
Joe has put his finger on something that’s important: that in the mangle of practice, things are constantly changing, and so are our perspectives on them.
In The Elements of Testing and Checking, I broke down the process of developing, performing, and analyzing a check. The most important thing to note is that the check (an observation linked to a decision rule) can be performed non-sapiently, but that everything surrounding it—the development and analysis of the check—is sapient, and is testing. Test-driven development is first and foremost development, and development is a sapient process. The interactive process of developing a check and analyzing its outcome is a sapient process; the development cycle includes having an idea, testing it and responding to the information revealed by the test (the whole process), even when the result is supplied by a check (an atomic part of the test process). TDD is an exploratory, heuristic process. You don’t know in advance what your solution is going to look like; you explore the problem space and build your solution iteratively, and you stop when you decide to stop.
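To make that concrete, here’s a minimal sketch of one turn of a TDD cycle, written with Python’s unittest (an xUnit-style tool). The word_count function and its expected behaviour are invented purely for illustration; the point is only to show where the sapient work sits relative to the check itself.

```python
# A hypothetical red/green cycle, sketched with Python's unittest (an xUnit-style tool).
# word_count and its expected behaviour are invented for illustration.
import unittest


def word_count(text):
    # The simplest implementation that satisfies the check below;
    # deciding what the *next* check should be is where the thinking happens.
    return len(text.split())


class WordCountCheck(unittest.TestCase):
    def test_counts_words_separated_by_spaces(self):
        # The check itself: an observation (calling word_count)
        # linked to a decision rule (assertEqual).
        self.assertEqual(3, word_count("to market quickly"))


if __name__ == "__main__":
    unittest.main()
```

Choosing the next problem to solve, picking the example, and interpreting a red bar are all testing; the assertEqual comparison is the part a machine can perform.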
Several years ago, James and Jon Bach produced a set of exploratory skills, tactics, and dynamics:
- Modeling
- Resourcing
- Chartering
- Observing
- Manipulating
- Pairing (now called Collaborating)
- Generation and Elaboration
- Overproduction and Abandonment
- Abandonment and Recovery
- Refocusing (Focusing and Defocusing)
- Alternating
- Branching and Backtracking
- Conjecturing
- Recording
- Reporting
(I believe that several other people have made contributions to the original list, including Jonathan Kohl and Mike Kelly. I’d also include tooling—building tools, rather than merely obtaining or resourcing them, and orienteering—figuring out where you are in relation to where you want to be. I think James disagrees. That’s okay; good colleagues do that. The cool thing about such lists is that they can evolve as we think and learn more, and disagreeing helps us to figure out what’s important eventually. Maybe I’ll drop them, or maybe James will adopt them.)
The point is that these exploratory skills, tactics and dynamics apply not only to testing, but to practically any open-ended heuristic process. Note how TDD, done well, incorporates practically all of the stuff from James and Jon’s original list, which was focused on testing.
So the answer to the question in the title of this post is this: No; there’s no need to rename TDD. It really is test-driven development.
As James replied to Joe,
Strictly speaking you are “doing testing” by “writing checks”, but not actually “writing tests.” If you run the checks unattended and accept the green bar as is, then that is not testing. It requires absolutely no testing skill to do that, just as you wouldn’t say someone is doing programming just because they invoke a compiler and verify that the compile was successful. If the bar is NOT green, the process of investigating is testing, as well as debugging, programming, etc.
If you watch the tests (James means checks here, I think –MB) run or ponder the deeper meaning of the green bar, you are doing testing along with the checking.
Think of “test” as a verb rather than a noun, and it becomes clear that test-driven design is truly test-driven design, although the testing is rather simplistic, based primarily on those little checks. Once the design is done the automated checks become useful as change detectors against the risk of regression. They certainly aid the testing process, despite not being tests.
Checks definitely do NOT drive development. Development is never a rote and non-sapient process. It’s far better to say test-driven, because the design of the checks is a thoughtful process.
So what of the earlier business about calling unit tests “unit checks”?
For me, the distinction lies in the artifact—that xUnit thingy, or that rSpec assertion—and the way that you approach it. A minor gloss on Joe’s comment: the thingy might not be a check after you run it the first time, especially if it doesn’t pass. At that point, it is still very much part of your conscious interaction with the business of creating working code; it’s figure, rather than ground.
After you’ve solved the problem that your unit of code is intended to solve, the thingy’s prominence fades from figure into ground. You’re no longer really paying much attention to it. There’s no design activity going on with respect to it, it gets performed automatically and non-sapiently, and its result gets ignored, especially when the result is positive and aggregated with dozens, hundreds, or thousands of other positive results. At that point, it’s no longer shedding any particular cognitive light on what you’re doing, and its testing power has faded into a single pixel in a pale green glow. It’s now a check, no longer a test but a change detector. In fact, you might think of “check” as an abbreviation for “change detector”. The change from a test to a check is a kind of reverse metamorphosis, as though an intriguing, fluttering butterfly has turned into a not-very-interesting, ponderous little green caterpillar. That’s not to say that it’s not an important part of the Great Chain of Being; just that we tend not to pay much attention to it. However, we might pay more attention to the caterpillar when it’s red.
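Here’s a rough sketch of that caterpillar stage (invented for illustration; the “checks” directory name is hypothetical): the same kind of artifact, aggregated into a suite and run unattended, noticed only when it turns red.

```python
# A sketch of the check-as-change-detector stage: run the whole suite unattended
# and pay attention only when the bar isn't green. The "checks" directory name
# is hypothetical.
import unittest


def run_change_detectors():
    suite = unittest.defaultTestLoader.discover("checks")
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    if result.wasSuccessful():
        # The green caterpillar: hundreds of passing results, aggregated and ignored.
        return "green"
    # The caterpillar turns red: a human resumes testing, investigating what
    # the failure means and whether it matters.
    return "red"


if __name__ == "__main__":
    print(run_change_detectors())
```

While the function keeps returning “green”, nobody pays much attention; the moment it returns “red”, a human resumes testing.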
As I’ve said repeatedly, what you call them is less important than how you think of them. As James says,
I wouldn’t insist that people change their ordinary language. I see no problem calling whales “fish” or spiders “insects” in everyday life. But sometimes it matters to be precise. We should be ABLE to make strict distinctions when trying to help ourselves, or others, master our craft.
At Agile 2009, Joe pointed out that if we can produce more code with fewer errors in it, we can get our products to real testing, and then to market more quickly. And that means that we can get paid sooner. So I agree with Joe here, too:
I have to admit I like the pun of Check-Driven Development, even if it only works in American English.
See more on testing and checking.
Related: James Bach on Sapience and Blowing People’s Minds
Funny, last week the same question was going through my mind, after I read the previous post in this series…
In the reply to Joe, James says:
"Strictly speaking you are "doing testing" by "writing checks", but not actually "writing tests."".
I am still pondering the following:
Is it wise for the same developer who creates the code to also do "the testing" by "writing checks"? (Doesn't that risk tunnel vision?) If you separate these two steps, the actual checking (on the running code) could be expanded with "testing".
@Michael…
Is it wise for the same developer who creates the code to also do "the testing" by "writing checks"?
As usual, there isn't an absolute answer here. Here are some things that I might consider:
1. Is there only one programmer?
2. Could the programmer pair with another programmer?
3. Could the programmer pair with someone other than a programmer?
4. Is the programmer experienced at writing checks?
5. Is the programmer sloppy, hygienic, or obsessive-compulsive about his checks?
6. Is there reason to believe that the programmer needs help with doing this?
7. Is this a new or risky part of the code?
8. Are there already a bunch of checks for this code, written by this programmer?
9. Are there already a bunch of checks for this code, written by another programmer?
10. Is this code a total prototype, or is there a chance that it's heading for production?
11. Does the programmer appreciate help, or does he feel neutral, or does he reject it? Why?
12. Is the programmer going to be evaluated based on the checks? If so, how? How might the evaluation distort our bigger goals?
13. Does the initiative to check come from the programmer, his colleagues, his manager, or someone else? How might the mandate support or distort matters?
—Michael B.