A tester writes:
“I’m testing an API. It accepts various settings and parameters. I don’t know how to get access to these settings from the API itself, so I’m stuck with modifying them from the front end. Moreover, some responses to specific requests are long and complicated, so given that, I have no idea how to test it! Online examples of API testing tend to focus on checking the response’s status code, or verification of the schema, or maybe for correctness of a single string. How can I make sure that the whole response is correct?”
The short answer is: you can’t. That reply may sound disconcerting to many testers, but it’s true. The good news is that there’s a practical workaround when you focus less on demonstrating correctness and more on discovering problems.
Among other things, being unsure whether something is correct suggests problems on its own. For instance, your question reveals an immediate problem: you’re not familiar enough with the product or how to test it. Not yet. And, I hasten to point out: that’s okay.
Being unsure about how to test something is always the case to some degree. As you’re learning to test, and as you’re learning to test a product that’s new to you, some uncertainty and confusion is normal. Don’t worry about that too much. To test the product you must learn how to test it. You learn how to test the product by trying to test the product—and by reporting and discussing the problems you encounter. Learning is an exploratory process. Apply an exploratory approach to discover problems.
For instance: I can see from your question that you’ve already discovered a problem: you’ve learned that your testing might be harder or slower without some kind of tool support that allows you to set options and parameters quickly and conveniently. Report that problem. Solving it might require some kind of help from the designers and developers in supplying APIs for setup and configuration.
If those APIs don’t exist, that’s a problem: the intrinsic testability of your product is lower than it could be. When testing is harder or slower, given the limited time you have to test, it’s also shallower. The consequence of reduced testability is that you’re more likely to miss bugs. It would be a good idea to make management aware of testability problems now. Report those problems. If you do so, you’ll already have given at least part of an answer when management inevitably asks, perhaps much later, “Why didn’t you find that bug?”
Moreover, if those setup and configuration APIs don’t exist, there’s a good chance that it’s not only a problem for you; it will probably be a problem for people who want to maintain and develop the product, and for people who want to use it. Report that problem too.
If those APIs do exist, and they’re not described somehow or somewhere, that’s a problem for you right now, but sooner or later it will be a problem for others. Inside developers who maintain the product now and in the future need to understand the API, and outside developers who want to use the product through the API need to be able to understand it quickly and easily too. Without description, the API is of severely limited usefulness. Report that problem. Mind you, missing documentation is a problem that you can help to address while testing.
If there is a description of the API, but the description is inaccurate, or unclear, or out of date, you’ll soon find that it doesn’t match the product’s behaviour. That’s a problem for several reasons: for you, the tester, it won’t be clear whether the product or the documentation is correct; inside developers won’t know whether or how to fix it; and outside developers will find that, from their perspective, the product doesn’t work, and they’ll be confused as to whether there’s an error in the documentation or a bug in the API. Report that problem.
If those APIs do exist and they’re documented but you don’t know how to design or perform tests, or how to analyse results efficiently (yet), that’s a problem too: subjective testability is low. The quality of your testing depends on your knowing the product, and the product domain, and the technology. That knowledge doesn’t come immediately. It takes working with something and with the people who built it to know how to test it properly, to learn how deeply it needs to be tested, to develop ideas about risk, and to recognize hidden, subtle, rare, intermittent, emergent problems.
To learn the product, you’ll probably need to be able to talk things over with the developers and your other testing clients, but that’s not all. To learn the product well, you’ll need experience with it. You’ll need to engage with it and interact with it. You’ll need to develop mental models of it. You’ll need to anticipate how people might use it, and how they might have trouble with it. You must play, feel your way around, puzzle things out, and be patient as you encounter and confront confusion. The confusion lifts as you immerse yourself in the world of the product.
However, management needs to know that that learning time is necessary. While you’re learning about where and how to find deep bugs, you won’t find them deliberately. At first, you’ll tend to stumble over bugs accidentally, and you might miss important bugs that are right in front of you. Again, that’s normal and natural until you’ve learned the product, have figured out how to test it, and have become comfortable with your oracles and your tool sets.
Hold up — what’s an oracle? An oracle is a means by which we recognize a problem when we encounter one in testing. And this is where we return to issues around correctness.
After making an API call as part of a test, you can definitely match elements of the response with a reference—an example, or a list, or a table, or a bit of JSON that someone else has provided, or that you’ve developed yourself. You could compare elements in the response individually, one by one, and confirm that each one seemed to be consistent with your reference. You could observe these things directly, with your own eyes, or you could write code to mediate your observation.
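As a minimal sketch of that kind of mediated comparison in Python (the field names, values, and response body here are all invented for illustration, not taken from any particular product):

```python
import json

# Hypothetical reference: the elements we expect in the response
# to one specific request.
reference = {
    "id": 42,
    "currency": "EUR",
    "amount": "19.99",
}

# A response body we might have received. Here it's hard-coded;
# in practice it would come from an HTTP client call.
response_body = '{"id": 42, "currency": "EUR", "amount": "19.99", "status": "ok"}'
response = json.loads(response_body)

# Compare element by element, noting anything inconsistent with the reference.
problems = []
for key, expected in reference.items():
    if key not in response:
        problems.append(f"missing element: {key}")
    elif response[key] != expected:
        problems.append(f"mismatch in {key}: expected {expected!r}, got {response[key]!r}")

# Extra elements aren't necessarily wrong, but they're worth noticing
# and possibly reporting.
for key in sorted(set(response) - set(reference)):
    problems.append(f"unexpected extra element: {key}")

for p in problems:
    print(p)
```

Note that a clean run of such a check tells you only that you didn’t notice an inconsistency with this particular reference, not that the response is correct.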
If you see some inconsistency, you can suspect that there’s a problem, and report that. If each element in the output matches the reference, and there don’t seem to be any missing or extra elements in the output, you can assert that the response appears to be correct, and from that you can infer that the response is correct. But even then, you can’t make sure that the response is correct.
One issue here is that correctness of output is always relative to something, and to someone’s notion of consistency with that something. You could assert that the response seems to be consistent with the developers’ intentions, to the degree that you’re aware of those intentions, and to the degree that your awareness is up to date. Of course, the developers’ intentions could be inconsistent with what the project manager wanted, and all of that could be inconsistent with what some customer would perceive as correct. Correctness is subject to the Relative Rule: that is, “correct” really means “correct to some person, at some time”.
If you don’t notice a problem, you can truthfully say so; you didn’t notice a problem. That doesn’t mean that the product is correct. Correctness can refer to an overwhelming number of factors.
Is the output well-formed and syntactically correct? Is the output data accurate? Is it sufficiently precise? Is it overly precise? Are there any missing elements? Are there any extra elements? Has some function changed the original source data (say, from a database) while transforming it to an API response? Was the source data even correct to begin with?
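A few of those questions can be probed with small checks. Here’s a hedged sketch in Python, assuming an invented response body and an invented rule that a currency amount shouldn’t carry more than two decimal places:

```python
import json
from decimal import Decimal

# A hypothetical response body; the fields are invented for illustration.
body = '{"price": "19.990000", "quantity": 3}'

# Well-formed and syntactically correct? json.loads raises on malformed JSON.
try:
    data = json.loads(body)
except json.JSONDecodeError as e:
    print(f"malformed response: {e}")
    data = None

if data is not None:
    # Sufficiently precise? Overly precise? Here we flag a price that
    # carries more than two decimal places -- a plausible concern for
    # a currency field, though the real requirement might differ.
    price = Decimal(data["price"])
    decimal_places = -price.as_tuple().exponent
    if decimal_places > 2:
        print(f"price has {decimal_places} decimal places; possibly overly precise")
```

Each such check encodes one oracle, one way of recognizing one kind of problem; none of them, alone or together, establishes that the response as a whole is correct.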
Did the response happen in a sufficiently timely way? The output seemed to be correct, but is the system still in a stable state? The output appeared correct this time; will it be correct next time? Was the response logged properly? If there was an error, was an appropriate, helpful error message returned, such that it could be properly understood by an outside programmer? In terms of the questions we could ask, we’re not even getting started here.
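Some of those questions lend themselves to lightweight instrumentation too. A sketch, with an invented stand-in for the API call, an invented time budget, and an invented error payload:

```python
import json
import time

def timed_call(call):
    """Run a zero-argument callable and return (result, elapsed seconds)."""
    start = time.monotonic()
    result = call()
    return result, time.monotonic() - start

def fake_call():
    # Simulated response: (status code, body). A real version would
    # invoke the API through an HTTP client.
    return 404, '{"error": "resource not found", "hint": "check the id parameter"}'

(status, body), elapsed = timed_call(fake_call)

# Sufficiently timely? The 0.5-second budget is an invented threshold,
# not a real requirement; a real budget would come from your clients.
if elapsed > 0.5:
    print(f"slow response: {elapsed:.3f}s")

# If there was an error, was an explanatory message returned?
if status >= 400:
    payload = json.loads(body)
    if not payload.get("error"):
        print("error response without an explanatory message")
    else:
        print(f"status {status} with error message: {payload['error']}")
```

Again, these are spot checks on a handful of conditions, not a demonstration that the system is stable, that logging happened properly, or that the next response will look the same.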
Answering all such questions on every test is impractical, even with tool assistance. You could write code to check a gigantic number of conditions for an enormous number of outputs, based on a multitude of conditions for a huge set of inputs. That would result in an intractable amount of test output data to analyze, and would require a tremendous amount of code—and writing and debugging and maintaining all that code would be harder than writing and debugging and maintaining the code for the product itself.
The effort and expense wouldn’t be worthwhile, either. After something has been tested to some degree (for example, by developers’ low-level unit checks, or a smattering of integration checks, or by some quick direct interaction with the product), risk may be sufficiently low that we can turn our attention to higher-risk concerns. If that risk isn’t sufficiently low (perhaps because the developers haven’t been given time or resources to develop a reasonable understanding of the effects of changes they’re making), more testing on your part is unlikely to help. Report that problem.
So rather than focusing on correctness, I’d recommend that you focus on problems and risk instead; and that you report on anything that you believe could plausibly be a problem for some person who matters. There are (at least) three reasons for this; one social, one psychological, and one deeply practical.
The social reason is important, but sometimes a little awkward. The tester must be focused on finding problems because most of the time, no one else on the project is focused on risk and on finding problems. Everyone else is envisioning success.
Developers and designers and managers have the job of building useful products that make people’s problems go away. The tester’s unique role is to notice when the product is not solving those problems, and to notice when the product is introducing new problems.
This gives the tester a different perspective from everyone else, and the tester’s point of view can seem disruptive sometimes. After all, not very many people love hearing about problems. The saving grace for the tester is that if there are problems with the product, it’s probably better to know about them before it’s too late, and before those problems are inflicted on customers.
The psychological reason to focus on problems is that if you focus on correctness, that’s what you’ll tend to see. You will feel an urge to show that the product is working correctly, and that you can demonstrate success. Your mind will not be drawn to problems, and you’ll likely miss a bunch of them.
And here’s the practical reason to focus on problems: if you focus on risk and search diligently for problems and you don’t find problems, you’ll be able to make reasonable inferences about the correctness part for free!