Despite all of the dragons that Agile approaches have attacked successfully, a few still live. As crazy as it is, the idea of one test check per requirement has managed to survive in some quarters.
Let’s put aside the fact that neither tests nor requirements are valid units of measurement, and focus on this: If you believe that there should be one test per requirement, then you have to assume that each requirement can have only one problem with it. Is that a valid assumption?
The one-check-per-requirement idea is part of the same mindset that equates “requirement” with “functional specification”.
A single functional specification typically corresponds to a single pass/fail test. Either the system does it, or it doesn’t.
Michael replies: Hmm. I might agree with you, but I’m not sure how to parse your first sentence. By “single functional specification”, do you mean a single line in a functional specification document? A single assertion, or a single proposition? I think if we unpack even a “single” proposition (that is, an ordinary statement), we’ll find lots of assumptions which are themselves propositions. There’s a “turtles all the way down” issue there.
The mistake is that a functional requirement is really just the context for multiple nonfunctional requirements. Functional requirements aren’t very interesting. All of the conditions we attach to them (usability, reliability, etc.) are much more interesting.
I prefer Cem Kaner’s term “parafunctional requirements”, which avoids such weird sentences as “all the nonfunctional aspects of the program are working.”
Which is why it’s so important to attach acceptance criteria (nonfunctional requirements) to user stories (functional requirements).
Hmm. I’ll have more to say about acceptance criteria in a forthcoming blog post. Thanks for the comment.
There’s an “it depends” card to be played here.
Without knowing the content or description of the requirement or of the test (idea), it’s difficult to say whether this holds as a general idea.
The test idea could be very open-ended, allowing for multiple observations – or indeed for observing without a pre-determined test idea at all. (I think back to my favourite “open-ended” observation, where the Hubble pointed at an “empty” bit of sky and it took three months before the new constellation was spotted – was that a great “design of experiment”, or chance?)
I’ve never encountered the 1:1 (requirement-to-test) idea (at least in my circles – maybe I’m just lucky), and if there is any mapping (framing), the first observation I make is always to look for the holes and gaps, to judge whether they are intentional, whether they are misses, or whether the whole is a “work in progress”.
There’s no problem here. Just create one test that is capable of discovering every problem.
— James
“One test to rule them all !”
Your premise – One Test, One Requirement, One Problem – is wrong. What I see you saying is “A is parallel to B, B is parallel to C, so A is parallel to C”, which is not true. Here is why:
One test per requirement – Totally depends on how we write/break/define “Test” and “Requirement”
One test tackles one problem – this implies that Test and Problem always have a 1:1 mapping. Yet again, a very ambiguous statement.
Michael replies: I’m aware that the premise is wrong; that’s my point. Please don’t say it’s mine.
->James
…could you please pass me that test as well, when you have it :).
I agree with Simon. There is no standardized definition of requirement (I guess this will never happen). So, everyone can fill in the concept with whatever s/he wants.
Impossible, then, to use it as a unit of measurement. This is like the Middle Ages: the league was very different from one city to another.
Michael replies: The idea of a requirement is a construct. In fact, all categorizations or classifications are constructs. There’s a wonderful description of the construct problem in Experimental and Quasi-Experimental Designs for Generalized Causal Inference. In 2000, Spanish astronomers discovered 18 objects that they call planets. Other astronomers wanted to call them brown dwarfs, objects too big to be planets but not big enough to trigger thermonuclear reactions that would cause them to become stars. Yet the objects were too small and too cool (and thus too young) to be classified as brown dwarfs. As Shadish, Cook, and Campbell point out, “All this is more than just a quibble: If these objects really are planets, then current theories of how planets form by condensing around a star are wrong.”
Now, brown dwarfs, planets, and stars have concrete, physical existences. They have characteristics (mass, age, colour, temperature) that can be observed and measured in units that have general widespread acceptance. Even the various forms of league, although disputed, are at least mostly commensurable against another unit like a kilometer. (That’s not always the case; according to Wikipedia, the “league” in Mexico is used to describe the distance one can travel in an hour, so it varies depending on the terrain.) But if we think the league is a problem, measuring a requirement—some desire by some stakeholder, and all of the factors that contribute to that desire and its potential fulfillment—is inconceivably harder. I agree with you, Miguel, that it won’t happen. So let’s drop it.
Thanks for the comment.
I think that your assumption is correct, and I’ve seen such requirements once. “Analytics pain vs testers pain” :) Over 100 pages for one small app like WinCalc.
If one test per requirement takes place, in the normal situation there will be dozens of steps. And I’ve seen such a test – it had over 100 steps.
I’m with James on this. The definition of a test doesn’t say how many steps should be taken. If the requirement consists of a complete “story” and isn’t broken down into further detail, why should the testing be broken down into different cases?
Michael replies: Note that “step” suffers from exactly the same kind of construct problem as “test” and “requirement”. In the middle of this post, I refer to that problem.
You could even argue that the one check that finds all problems is covered by a single unit test and no non-programmer will ever need (be allowed?) to verify anything beyond that.
a. Perhaps people are secretly applying the advice, “Anything not worth doing is not worth doing right.” – Weinberg QSM v4
b. Perhaps what is really happening is the application of an “Aggregate Control Model”, e.g., testing with shrapnel. [Weinberg again] Advantage – simple to explain and defend. Send enough simple checks whizzing through the product’s requirements and you will tend to hit bugs/issues. The controller of this system wants simple answers: “Yes” or “No”. Answers of “some”, “but …”, “let me explain …”, just trip the flashing-red-zero response in the controller. Generate that response enough, and expect the controller to go unpredictable.
c. I suspect the value of one-requirement-to-one-check is not about the information provided to the stakeholder about the product. Rather the value to the controller is the familiarity. A weak but familiar project control mechanism is preferred to a new unfamiliar (maybe better) project control mechanism.
c1. Or, there is a meta-controller with a limited interface that seems to demand this structure, and seems placated when it is provided.
c2. The controller may be acquiring good information about the product through a second alternative checking/testing process that is controlled and constrained in a different way.
I don’t think you can assume there is only one problem, since if you haven’t tested for them, then how would you know?
Michael replies: Precisely. The same would go for “expected result”, but that problem goes away as soon as you get all your expectations listed. (irony alert!)
It may be that you can say ‘we only care about this single problem, hence we don’t want to put test resources into exploring further’, but that isn’t a situation I’ve ever found myself in.
Me neither, to the first part. Whenever someone says, “We only care about this single problem,” in my experience it’s not hard to propose another risk and trigger a furrowed brow. However, the conclusion that we don’t want to put test resources into exploring further could quite easily be reasonable.
I’ve always thought that a requirement can be satisfied by a number of features in your system, then each feature will need a number of tests. If a requirement can be covered by a single check, then that is by accident rather than by design (in my experience).
I think (so far) that if we seriously believe a requirement can be covered by a single check, a number of premises must be fulfilled. 1) It’s a pretty low-level requirement; 2) The risk of the problem for which the check checks is relatively high; 3) The risk of problems for which other checks might check is relatively low; 4) The cost of those other checks is relatively high; and 5) The value of those other checks is relatively low. At the programmer/unit test level, those assumptions seem reasonable to me. At higher levels, not so much.
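To make those premises concrete, here’s a minimal invented sketch (the function and checks below are hypothetical, not from any real project) of a low-level requirement that a single check might plausibly cover – and of how quickly a second check suggests itself anyway:

```python
# Hypothetical example: a requirement small enough that one check
# might seem to cover it. All names here are illustrative.

def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    return max(low, min(value, high))

def check_clamp_caps_at_high():
    # One low-level requirement: values above the range are capped.
    assert clamp(15, 0, 10) == 10

def check_clamp_caps_at_low():
    # ...yet even this tiny requirement invites a second check,
    # because "capped" has two edges.
    assert clamp(-5, 0, 10) == 0

check_clamp_caps_at_high()
check_clamp_caps_at_low()
print("both checks pass")
```

Even at the unit level, where the premises above are most plausible, the single check tends to multiply as soon as we look at the requirement from a second angle.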
“What’s the chance of meeting a dinosaur on the street? — 50%! — Really! Why? — You’ll either meet one or you won’t!”
Paraphrasing the joke.
“Each requirement can have only one problem: either met or not”.
But hold on – ever heard of a customer saying, “Gee! I’ve got a problem with the program: it does not meet the requirements”? Probably not. Most likely, it would be: “I don’t care what requirements it meets, the program does not solve my problem!”
“A problem is solved when it gets tougher” [Arabic proverb]
😉
The previous verse is from the Quran, and not an Arabic proverb – my apologies. Lesson learned: don’t propose a solution if you don’t have enough input… but then again, how much is enough? And here we go – it’s already a bit tougher.
Does that mean that I’m getting closer to the solution? 😉
In answer to your question: No. Even if the requirement is insanely simple and specific (eg. add 1 and 1 to get 2), there are plenty of things that could go wrong! Here are just a few examples:
-Assuming that the values in the equation are preset, a mistake could be made in typing them in (eg. 11 instead of 1)
-The hardware could be faulty (eg. the floating point bug in Intel’s Pentium chip)
-Assuming the values are stored in a variable, some other function could use the same variable name, and change the value
Right there, we have 3 problems that could arise as a result of what seems to be a simple and clear requirement.
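The three problems above can be sketched as three separate checks against the same one-line requirement (a minimal sketch; the function and variable names are invented):

```python
# Illustrative sketch: even the "insanely simple" requirement
# ("add 1 and 1 to get 2") invites several distinct checks, one per
# potential problem named above. All names here are hypothetical.

def add(a, b):
    return a + b

# Problem 1: a typo in the preset values (11 instead of 1).
assert add(1, 1) == 2

# Problem 2: faulty floating-point behaviour (in the spirit of the
# Pentium bug), so check the floating-point path separately.
assert abs(add(1.0, 1.0) - 2.0) < 1e-12

# Problem 3: shared state changed by some other function.
value = 1
def some_other_function():
    pass  # a bug here could rebind or mutate the shared value
some_other_function()
assert add(value, value) == 2

print("three checks, one requirement")
```

One requirement, three risks, three checks – and this list is nowhere near exhaustive.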
Well, nitpicking on the words and terms from a more theoretical/philosophical point of view: there could be a rather massive test (check) for a commonplace requirement (as others have mentioned already [“One test to rule them all”]). Or there could be very small requirements (“Here I am, a requirement the size of a test”).
Michael replies: I like what James had to say about critical analysis of words here: “It’s not nitpicking—or if it is, that’s beside the point. We must all have the ability to nitpick, when that is required for understanding.” Surely the point of this tiny post and the comments that follow is that tests and requirements are not things, and to think of them as things is reification error.
On a practical side I’ve seen a lot of places with *at least* one test (check) per requirement.
And there have been situations with more than one requirement per test (either implicit or explicit).
It seems to me that once we’ve decided that “test”, “check”, or “requirement” as units of measurement are invalid, the notion of “at least one test (check) per requirement” is an invalid statement.
To tell which way is the/a correct (or good) way is very likely context-dependent.
Yes. And context depends on qualitative evaluation; “who”, “what”, “when”, “where”, “why”, and “how” questions, far more than on “how-many” questions.
“One Test Per Requirement”
Hell yeah, makes it easier for people who love to count the number of tests using their fingers 😉
— @sunjeet81
Being the author/presenter of the slideshare presentation that Michael worries about in his tweet yesterday, I must say that you guys got me wrong. That could be because English is not my native language and I meant something different than native speakers understand. Sorry for that.
Michael replies: I was hoping that that was the case, and I emphasize it was not my intention to single you out.
In my presentation (http://www.slideshare.net/AnkoTijman/ijm-31jan2011-building-a-quality-driven-team) there are two statements:
p7) Shouldn’t it be so that requirements equal test cases?
p10) Requirements equal test cases
Well, what I did *not* mean was: one test (case) will do just fine to test every requirement.
What I *did* mean was: the test cases are the *real* requirements! I believe there is a quote by Boris Beizer that goes like “The design isn’t done until the test cases are ready.” (1) I subscribe to that. From a good test script, a lot of questions about the functionality of the system will arise!
Oh, I do hope that this is a language issue, too, because test cases, checks, are not real requirements. At best, test cases are representations, simplifications, explications, expressions, examples of implications of real requirements.
A requirement is a desire that must be fulfilled for the program to be considered successful. At best, a test case can suggest that the product appears to fulfill that requirement to some degree in some circumstance on someone’s machine. No passing test case can tell you that the requirement has been fulfilled. (A failing test case, however, can suggest pretty strongly that some requirement has not been fulfilled.) This point becomes even more crucial when the test case becomes a check and we relax our questioning of it by delegating it to a machine along with thousands of other checks. Elisabeth Hendrickson says—and I agree—that the development work can’t be called done until the product has been checked and explored (I say tested, instead of explored, but we mean pretty much the same thing, I think). The problem with this business of “one (or even “at least one”) test case per requirement” is that it leads, I think, to quantitative thinking about test coverage, and I see that as a serious threat to the quality of testing.
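To illustrate the point that no passing check can tell you a requirement has been fulfilled, here is a small invented sketch (the function, values, and bug are hypothetical) of a delegated check suite that passes while the requirement quietly goes unmet:

```python
# Illustrative sketch: a passing check suggests, but cannot prove,
# that a requirement is fulfilled. All names here are invented.

def format_price(cents):
    # Bug lurking: floor division misbehaves for negative amounts.
    return f"${cents // 100}.{cents % 100:02d}"

# The delegated checks happen to pick inputs where the bug is invisible:
assert format_price(199) == "$1.99"     # passes
assert format_price(1000) == "$10.00"   # passes

# Yet the requirement "display prices correctly" is unmet for, say,
# refunds, which no check here ever asks about:
#   format_price(-150) returns "$-2.50", not "-$1.50"
print("all checks pass; the requirement is still not fulfilled")
```

The checks are honest as far as they go; the trouble is in what they never ask.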
When you’re in an agile (multidisciplinary, iterative) project, you will need to discuss the requirements with *the team*. Just a simple user story doesn’t give that much information. As a tester, you have a lot of questions about a requirement! And any good tester can derive dozens of test cases from just one requirement. So actually, the test cases are the real requirements! That is what I meant to say with the slide, and that is what I told the audience with that slide.
Now there you’re on to something. When you suggest that conversation, the interactive, social process of questioning and exploring the requirements through the process of creating test cases, is crucially important, I could certainly agree with that. Automated checks, though, are side effects of this process, and not what I (or you, evidently) would consider the primary product of it. The primary product is something more intangible: understanding.
(1) Although not literally, the thought of this quote is probably based on this passage of Software Testing Techniques: “More than the act of testing, the act of designing tests is one of the best bug preventers known. The thinking that must be done to create a useful test can discover and eliminate bugs before they are coded – indeed, test-design thinking can discover and eliminate bugs at every stage in the creation of software, from conception to specification, to design, coding and the rest.”
– Boris Beizer, Software Testing Techniques; quoted in Creating a Software Engineering Culture by Karl Eugene Wiegers, ISBN 0932633331, page 211. Taken from: http://www.softwarequotes.com/showquotes.aspx?id=558
One test per requirement is fine, as long as:
– Requirements are perfect in every sense and cover all aspects
– A test can be extended indefinitely until it covers the whole requirement
We know that these two conditions are hard to meet; that’s why it is not possible. It is too costly to try to write down all the requirements, and too costly to believe we could cover all requirements with tests.
Yes, the main product of AcceptanceTDD is the understanding of the requirement: not only the ‘technical’ side (what software should be build) but also the ‘business’ side (why is it important to the customer).
I use this definition of communication: “to misunderstand each other as little as possible”. We will misunderstand requirements as a team, but by taking a multidisciplinary view on requirements (the customer included!) we avoid rework as much as we can. That is the quality-driven, every-step-is-the-right-step approach my presentation was about.
Michael,
If I draw some parallels with what I have been seeing in IT (for more than a decade) – this idea of a 1:1:1 mapping between req:test(check):bug arises out of a need for “simplicity” (simple is beautiful and more likely to be true – Occam’s razor). Many managers and stakeholders demand a traceability matrix, which is used as a tool to demonstrate the coverage of testing.
Michael replies: I haven’t yet written much about legibility in testing—an idea that comes from Seeing Like a State, but I want to and at some point I will. I’ve found it to be a provocative and revealing concept.
The 1:1:1 model helps stakeholders ask, “Is this requirement covered?” If “yes”, then how many test cases cover *this* requirement? (The more the merrier.) And so it is with bugs. When you think about the mapping between req:test/check:bugs, it is not 1:1:1 (at least in IT); it is 1:many:many.
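That 1:many:many relationship can be sketched as follows (a toy example; every ID below is invented):

```python
# Sketch of why a traceability matrix is 1:many:many, not 1:1:1.
# Requirement IDs, test IDs, and bug IDs are all hypothetical.

traceability = {
    "REQ-1": {
        "tests": ["T-1", "T-2", "T-7"],   # one requirement, many checks...
        "bugs":  ["BUG-12", "BUG-31"],    # ...and many bugs
    },
    "REQ-2": {
        "tests": ["T-2"],                 # T-2 also covers REQ-1, so
        "bugs":  [],                      # tests map many-to-many too
    },
}

# "Is this requirement covered?" collapses a relationship into a count:
for req, links in traceability.items():
    print(req, "covered by", len(links["tests"]), "check(s)")
```

Note how the counting question at the end discards exactly the relational information that made the matrix interesting in the first place.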
But the fundamental problem with this model (as you pointed out) – requirements, test cases and bugs are NOT countable items but are relationships. That makes the whole thing fall apart and Reification rules.
Now, to your question about the validity of the assumption of *only* one problem per requirement – I see that people in IT don’t appear to make that argument. Their main concern appears to be the “Traceability Matrix”: demonstrating that tests (checks) are available and *executed*, and that they cover (at least in a 1:1 sense) the *documented* requirements. That’s all. Some intelligent fellas take this matrix further, put the bugs in there, and contemplate “Where are the bug clusters?” and “Which is the most buggy requirement?” (a misnomer!)
Shrini
Yes, the documented requirements trap is a trap, for sure. Thanks for the reply, Shrini.