Note: This post refers to testing and checking in the Rapid Software Testing namespace. This post has received a few minor edits since it was first posted.
For those disinclined to read Testing and Checking Refined, here are the definitions of testing and checking as defined by me and James Bach within the Rapid Software Testing namespace.
Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modeling, observation, inference, etc.
(A test is an instance of testing.)
Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.
(A check is an instance of checking.)
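To make that definition concrete, a check could be sketched in code. In this illustration (the cart and the function names are hypothetical, invented for the example, not taken from any particular tool), both the observation and the decision rule are encoded so that a machine could perform them:

```python
def get_cart_total(cart):
    # Hypothetical observation of the product under test: here, the
    # product's reported total for a list of (price, quantity) items.
    return sum(price * qty for price, qty in cart)

def check_cart_total(cart, expected_total):
    # The check itself: an algorithmic decision rule ("does the observed
    # total equal the expected total?") applied to a specific observation.
    # It yields a bit; it does not question, model, study, or infer.
    return get_cart_total(cart) == expected_total

print(check_cart_total([(5, 2), (3, 1)], 13))  # True: 5*2 + 3*1 == 13
```

Everything around that comparison—deciding it was worth making, choosing the inputs, wondering what a failure would mean—is not in the code, and that is the point.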
You are not checking. Well, you are probably not checking; you are certainly not only checking. You might be trying to do checking. Yet even if you are being asked to do checking, or if you think you’re doing checking, you will probably fail to do checking, because you are a human.
You can do things that could be encoded as checks, but you will do many other things too, at the same time. You won’t be able to restrict yourself to doing only checking.
Checking is a part of testing that can be performed entirely algorithmically. Remember that: checking is a part of testing that can be performed entirely algorithmically.
The exact parallel to that in programming is compiling: compiling is a part of programming that can be performed entirely algorithmically. No one talks of “automated compiling”, certainly not anymore. It is routine to think of compiling as an activity performed by a machine.
We still speak of “automated checking” because we have only recently introduced “checking” as a term of art. We say “automated checking” to emphasize that checking by definition can be, and in practice probably should be, automated.
If you are trying to do only checking, you will screw it up, because you are not a robot. Your humanity—your faculties that allow you to make unprogrammed observations and evaluations; your tendency to vary your behaviour; your capacity to identify unanticipated risks—will prevent you from confining yourself to an algorithm.
As a human tester—not a robot—you’re essentially incapable of sticking strictly to what you’ve been programmed to do. You will inevitably think or notice or conjecture or imagine or learn or evaluate or experiment or explore. At that point, you will have jumped out of checking and into the wider activities of testing.
(What you do with the outcome of your testing is up to you, but we’d say that if your testing produces information that might matter to a client, you should probably follow up on it and report it.)
Your unreliability and your variability are, for testing, a good thing. Human variability is a big reason why you’ll find bugs even when you’re following a script that the scriptwriter—presumably—completed successfully. (In our experience, if there’s a test script, someone has probably tried to perform it and has run through it successfully at least once.)
So, unless you’ve given up your humanity, it is very unlikely that you are only checking. What’s more likely is that you are testing. There are specific observations that you may be performing, and there are specific decision rules that you may be applying. Those are checks, and you might be performing them as tactics in your testing.
Many of your checks will happen below the level of your awareness. But just as it would be odd to describe someone’s activities at the dinner table as “biting” when they were eating, it would be odd to say that you were “checking” when you were testing.
Perhaps another one of your tactics, while testing, is programming a computer—or using a computer that someone else has programmed—to perform checking. In Rapid Software Testing, people who develop checks are generally called toolsmiths, or technical testers—people who are not intimidated by technology or code.
Remember: checking is a part of testing that can be performed entirely algorithmically. Therefore, if you’re a human, neither instructing the machine to start checking nor developing checks is “doing checking”.
Testers who develop checks are not “doing checking”. The checks themselves are algorithmic, and they are performed algorithmically by machinery, but the testers are not following algorithms as they develop checks, or deciding that a check should be performed, or evaluating the outcome of the checking. Similarly, programmers who develop classes and functions are not “doing compiling”. Those programmers are not following algorithms to produce code.
Toolsmiths who develop tools and frameworks for checking, and who program checks, are not “doing checking” either. Developers who produce tools and compilers for compiling are not “doing compiling”. Testers who produce checking tools should be seen as skilled specialists, just as developers who produce compilers are seen as skilled specialists. In order to develop excellent checks and excellent checking tools, a tester needs two distinct kinds of expertise: testing expertise, and programming and development expertise.
Testers apply checking as a tactic of testing. Checking is embedded within a host of testing activities: modeling the test space; identifying risks; framing questions that can be asked about the product; encoding those questions in terms of algorithmic actions, observations, outcomes, and reports; choosing when the checking should be done; interpreting the outcome of checks, whether green or red.
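A sketch may help to show how little of that list the checks themselves occupy. In this hypothetical example (the discount rule and the boundary values are invented for illustration), only the two comparisons are checks; everything that led to them came from a human:

```python
def apply_discount(total_cents):
    # Hypothetical product code: 10% off orders of $100.00 or more.
    return total_cents * 90 // 100 if total_cents >= 10000 else total_cents

def run_boundary_checks():
    # Probing the boundary at $100.00 came from a human modeling the test
    # space and identifying risk; only the comparisons below are checks.
    return {
        "just_below_boundary": apply_discount(9999) == 9999,  # check
        "at_boundary": apply_discount(10000) == 9000,         # check
    }

print(run_boundary_checks())
# Interpreting a red result, and deciding what to do about it,
# is testing again, not checking.
```

The machine can run `run_boundary_checks` all day; it cannot decide that the boundary at $100.00 was the risky place to look.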
Notice that checking does not find bugs. Testers—or developers temporarily in a testing role or a testing mindset—who apply checking find bugs, and the checks (and the checking) play a role in finding bugs.
In all of our talk about testing and checking, we are not attempting to diminish the role of people who create and use testing tools, including checks and checking. Nothing could be farther from the truth. Tools are vital to testing. Tools support testing.
We are, however, asking that testing not be reduced to checking. Checking is not testing, just as compiling is not software development. Checking may be a very important tactic in our testing, and as such, it is crucial to consider how it can be done expertly to assist our testing. It is important to consider the extents and limits of what checking can do for us. Testing a whole product while being fixated on checking is like developing a whole product while being fixated on compiling.
Hello, Michael.
Great post, I totally agree with your opinion, yet I have some questions.
1. In “Testing and checking refined”, you and James made the distinction between human and machine checking. With the post here, you get to the conclusion, at least based on my understanding of it, that human checking isn’t exactly checking, as checking could be performed only by machines. Do we still have the distinction between human and machine checking, or do we abandon the concept of human checking?
Michael replies: Humans can do checking; humans can perform checks. There are at least two things that humans can’t do with respect to checks, though: 1) to be conscious of all the checks that they’re making; and 2) to restrict themselves only to checking.
2. “Checking is not testing, just as compiling is not software development”, but in “Testing and checking refined” again, you are talking about checking as being a “subset” of testing, i.e. an activity performed during testing, therefore checking is testing, partially. Isn’t it more correct to say testing is not limited to just checking?
I thought that’s what the last paragraph was all about. I’ve made a couple of minor modifications to the text above it. Is it unclear now?
Thanks for the post and for your time in advance.
I’ve had a problem with “checking” from the early days. In the twitter exchange in this post:
http://testers-headache.blogspot.se/2009/09/sapient-checking.html
the point I was highlighting in 2009 was that checks don’t stand on their own.
The word “checking” always bothered me – or really, its definition – it’s misleading to talk about checking, as it can get interpreted that someone or something is “doing” checking. In fact, it’s common to hear of “checking being done” or “doing checking”. Whereas in most cases checking is “happening”: it is usually a check (or several) that is (are) being executed, which is a part of testing.
So, checking can be observed but not done. And that’s part of what makes me uncomfortable with the way checking is defined.
Michael replies: I don’t see the confusing part; certainly not in light of what we wrote in 2013:
Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.
Again, an analogous activity in programming is compiling. Are you suggesting that compiling can be observed but not done?
There might be good (or bad) testing happening but there is no checking – or really I’d rather not hear about checking but that testing is happening – perhaps aided by checks – but not aided by checking.
In development, there’s a process of putting a product together in a form called a “build”. People prepare machinery to produce builds. While that’s happening, would you rather we speak of development so you don’t have to hear about building? Would you resist saying that the process of deployment is being accelerated by automated building?
I’ve used “checking” myself in posts, but I’m not really happy about it – because it’s not precise. So, I’d be happy (or happier) if/when you retire the word “checking” (or its current definition) – then maybe others will stop using the word and be forced to be more accurate.
Or perhaps an alternative definition: checking is a subset of testing where checks are deployed/used. (I need to think about that…)
Is it possible that you’ve been using your own definition of checking, in shallow agreement with ours?
Hi Michael, Simon,
I always found the distinction between testing and checking useful if not without its problems.
My understanding is that indeed, checking can be done. It’s a process of evaluations so can be done. That it’s part of testing to me is only of secondary interest.
I have two points here, one is that testing and checking is not binary. It’s not one or the other, both can be carried out at the same time. I, as a human, can test and check simultaneously. While my fingers are executing the steps that lead to a check (evaluating a yes/no answer) I may think about the next step already.
The other point is rather a question – what does “doing” checking really mean? If I go back to ye olden days of test scripts, there were at least two parts, test design and test execution (overly simplified but useful here). While a human is needed to design the check, I wouldn’t call it “doing checking”. I would call it check design or creation (which can incidentally be part of the test design, test execution, learning,… cycle). When executing the check – carrying out the algorithmic evaluation – that’s what I’d call “doing” checking, so it could be done by a machine or by a human.
I get that with a human there is usually more going on, but there doesn’t have to be. My powers of ignorance and of filtering out the things I don’t want to see, am too tired to notice, or am too demotivated to care about are quite powerful, so yes, I can do only checking, although I agree that this doesn’t happen often.
Michael replies: I don’t think you can do only checking. If you intended to check, you started by being engaged in a testing activity.
@Simon – I’m not sure that checks don’t stand on their own, I think they do. What would you call a suite of automated checks otherwise?
Checking to me is not a subset of testing but one component that may or may not be present (it often is).
I don’t know how you’re distinguishing here between “subset” and “component”.
Notice – when I say I have a problem with “checking” I mean the word (the grouping/label of a process and how it is used) and not the use of “checks” or the definition of a “check”.
@Thomas
An example of “doing checking”: I’ve seen cases of people equating “regression testing” with “checking”. Why? Because (my interpretation) it’s an execution of a series of checks. (I checked; I can still see the comments on Twitter.) Now, I pointed to a post at the time that hopefully illustrated that that was a bad definition of regression testing. However, the point is, I have seen people see “checking” and forget the context – a little bit of what Michael’s post is talking about. People have missed where the “testing” is happening – and that was also the point of my 2009 Twitter exchange.
“checks don’t stand on their own” – you need to read that in the context of my 2009 twitter exchange – i.e. I’m talking about testing encapsulating checks.
————————
I’ll try to re-state my comment about “checking” – this is a label that should only be used (imo) for an observation of something that is happening or has happened. It’s a post event rationalisation (it’s “a posteriori” knowledge). If you state your intention to include checking in your activities you are really describing your testing – because the intent, analysis, selection and discussion of results is testing – even if checks were used.
Then I would find it more accurate to talk about the testing that /will be/ aided by checks. Afterwards it might be accurate to describe the testing as having included checking. Planning checking, intending checking is by implication testing – and I (personally) don’t see the added value, and I question its accuracy.
But, that doesn’t, of course, stop anyone using the term however they want and in ways they find useful.
Cheers/S
[…] You Are Not Checking Written by: Michael Bolton […]
@Simon, thanks for the thoughtful reply.
I get your problem with “checking” vs. the term “check”.
I’ve seen regression testing called checking and am actually guilty of it myself. I wouldn’t use it like that anymore.
The reason for using it in such a way for me was to describe bad testing. Trying to replicate the exact steps as done previously (nothing bad with that) and ignoring any other observations or changed behaviours (that is bad) because one is doing “only” regression testing is actually very close to what a machine would do, i.e. checking. I’ll probably just stick with calling it bad testing.
Michael replies: Checking is not bad testing! Regression checking is not bad testing! Repeating a test (to the degree that a test can be repeated) is not bad testing! There is nothing intrinsically bad about any of these things. It’s like saying that hypodermic needles are bad because some people use them to sustain an addiction to harmful drugs.
If the only element in your strategy to prevent disease is hypodermic injections, and nothing else, you could say that that’s a bad approach to disease prevention.
Where I have to disagree is with this statement “Planning checking, intending checking is by implication testing…”
Yes, planning checking is a testing activity. Why would it not be?
The problem I see here is with the “automate everything” people. As Michael has pointed out in one of his last tweets, if you plan and intend your regression tests to be exclusively automated checks, you can’t claim it’s testing.
Yes you can. Of course you can. It IS testing. It’s just not all there is to testing. You can claim it’s testing, because it is. What you can’t legitimately claim, without some supporting argument, is that it’s appropriate or relevant or valuable or reasonable testing.
You can only say we’ve checked but we haven’t tested. For this reason I’m comfortable with checking as an activity and process. It can be planned alongside testing, it can be part of testing but it can also stand alone even though most professional testers should have arguments why that is a bad idea.
You’ve got the “only” misplaced, I think: you can say “we’ve tested, although we’ve only checked.” If all you’ve done is checking, it’s probably shallow testing. But if you’ve checked, you’ve tested.
Thanks to both of you for this discussion and making me think.
Nice summary of what I do, because I have a devil of a job explaining it to anyone! Because I have been very focused on test engineering, and have been on several (aspiring) BDD projects, checking becomes automated tests by default, rather than the common approach of a “nice to have”.
Michael replies: I’m not sure where the difficulty lies. I’d say “It’s my job to find problems that threaten the value of the product, so I develop and use tools to assist with that.” If there’s confusion about checking and its role within testing: “It takes more than a correct result to make customers happy, and a product can have tons of problems even though it appears to be providing correct results.”
Recently I was working in an environment dominated by manual testing/checking, though they desperately wanted to change. A common pitfall: thinking “we never have time to do things properly, but we always have time to do them again”. It operated in headless-chicken mode most of the time. Too many pointless repetitive manual checks, no formal UX design process, silo working, and a very fragile build pipeline. Good people stuck in bad process.
There’s a testing report right there.
Interesting to view that kind of environment after years of test engineering focus. I worked in that way a lot in past years, and it was painful in hindsight! Resisting automated tests in software development is kind of odd, as it’s normal software engineering practice. And automated tests are not static (or shouldn’t be): they evolve with the app under test, so they should always be useful. Aspire to automate everything, because it is sometimes surprising what you can automate usefully. (“Useful” is an important word to keep in mind: is this test useful? Is it better as an API test? Is the developer unit test enough? Do we still need this test, or should I just update it?) Developing in test is a busy process!
If you want to avoid misinterpretation and the trouble that it leads to—the kind of trouble you refer to here—I’d recommend a reframe: refer to them as “automated checks”, instead of “automation tests”. For the stuff that isn’t checking, refer to “tools”. You can’t automate testing, just as you can’t automate research, parenting, invention, management, or programming. As you point out, you can automate certain tasks within testing. What you can’t automate is the thought process that allows you to choose tools; to use them appropriately; to decide when to stop using them; to evaluate whether programmers are doing enough unit checking and other kinds of testing on their own.
The acceptance testing at the business level always requires a visual verification, which I guess is “testing” in the context of this article. In fact those people should always be testing (unfortunately, product owners are commonly twiddling their thumbs, waiting for a Jira issue to pop up in their to-verify queue). And thinking up new features while they wait 🙂
The “visual verification” that you’re talking about here is probably not testing, but demonstration. It’s not a bad thing to do; it’s nice to show the product doing something potentially useful. But remember that salesmen, stage magicians, and TV pitchmen demonstrate all kinds of things that intentionally limit information about the product. It’s the tester’s job to find the inconsistencies between the product and the sales brochure, the misdirection in the magic trick, and the missing details in the pitch.
Thanks for the comment.
[…] also see that Testing has been elevated to include refined definitions of Checking. And in “You Are Not Checking“, Michael Bolton has tried to clarify that humans aren’t (likely) Checking even if they […]