
Exploratory Testing on an API? (Part 1)

In one forum or another recently, a tester, developer or DevOps person (I wasn’t sure which of these he/she was) asked:

Do you perform any exploratory testing on APIs? How do you do it?

Here’s an in-depth version of the reply I gave, with the benefit of links to deeper material.

To begin: application programming interfaces (APIs) are means by which we can use software to send commands to a product to make it do something. We do perform some testing on APIs themselves; interfaces represent one set of dimensions or factors or elements of the product. In a deeper sense, we’re not simply testing the APIs; we’re using them to control and observe the product, so that we can learn lots of things about it.
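
For instance, with a product that exposes an HTTP API, "using software to send commands" might look something like the minimal Python sketch below. The endpoint and payload are invented for illustration; they don't belong to any particular product.

    # A minimal sketch of sending a command to a product through its (hypothetical) HTTP API.
    import requests

    response = requests.post(
        "https://example.com/api/v1/orders",     # hypothetical endpoint
        json={"item": "widget", "quantity": 3},  # the command, expressed as input data
        timeout=10,
    )
    print(response.status_code)  # observe how the product responded
    print(response.json())       # observe what the product returned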

Now, there’s a problem with the first question: it implies that there is some kind of testing that is not exploratory. But all testing is exploratory. The idea that testing isn’t exploratory comes, in part, from a confusion between demonstrating that a product can work and testing to learn how it does work, and how it might not work.

When a product—a microservice, a component, an application framework, a library, a GUI application,…—offers an API, we can use tools to interact with it. As we develop the product, we also develop examples of how the product should work. We use the API to provide an appropriate set of inputs to the product, operating and observing it algorithmically and repetitively, which affords a kind of demonstration.

If we observe the behaviours and outputs that we anticipated and desired, we declare the demonstration successful, and we conclude that the product works. If we chose to speak more carefully, we would say that the product can work. If there has been some unanticipated, undesirable change in the product, our demonstration may go off the rails. When it does, we suspect a bug in the product.

It is tempting to call demonstration "testing", but as testing goes, it's pretty limited. To test the product, we must challenge it; we must attempt to disprove the idea that it works. The trouble is that people often assume that demonstration is all there is to testing; that checking and testing are the same thing. They're not.

In the Rapid Software Testing namespace, testing is evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modeling, observation, inference, and plenty of other stuff.

Testing often includes checking as a tactic. Checking is the process of evaluating specific factors of the product by applying algorithmic decision rules to specific observations of a product, and then algorithmically indicating the results of those decision rules. Checking is embedded in testing.
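
To make the distinction concrete, here is a minimal sketch of a check in that sense, against a hypothetical endpoint: a specific observation of the product, an algorithmic decision rule applied to it, and an algorithmic indication of the result.

    import requests

    # Observe something specific about the product (the endpoint is hypothetical).
    response = requests.get("https://example.com/api/v1/orders/42", timeout=10)

    observation = response.status_code     # a specific observation
    decision = (observation == 200)        # an algorithmic decision rule
    print("PASS" if decision else "FAIL")  # an algorithmic indication of the result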

Testing is fundamentally an exploratory process. An exploratory process might include demonstration from time to time, and some demonstrations can be done with checking. Where do checks come from? They come from testing!

As a tester first encountering an API, you are learning about it. There is documentation in some form that describes the functions, how to provide input, and what kinds of output to anticipate. There may be a description of how the product interacts with other elements of some larger system. The documentation probably includes some new terminology, and some examples of how to use the API. And it probably contains an error or two. Making sense of that documentation and discovering problems in it is an exploratory process.

You perform some experiments to see the API in action, maybe through a scripting language (like Python or Ruby) or maybe through a tool (like Postman). You provide input, perform some API calls, examine the output, and evaluate it. You might experience confusion, or bump into an inconsistency between what the service does and what the documentation says it does. Wrapping your head around those new things, and stumbling over problems in them, is an exploratory process.
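
As a rough illustration, a first experiment with the requests library in Python might look something like this (the endpoint and parameters are hypothetical). The point is not to assert anything yet, but to see what the service actually does and to compare that with what the documentation says it should do.

    import json
    import requests

    BASE = "https://example.com/api/v1"  # hypothetical service

    r = requests.get(f"{BASE}/orders", params={"status": "open"}, timeout=10)
    print(r.status_code)                   # does it respond the way the docs suggest?
    print(r.headers.get("Content-Type"))   # JSON? XML? something else entirely?
    print(json.dumps(r.json(), indent=2))  # which fields actually come back?

    # Now vary something and watch what happens.
    r = requests.get(f"{BASE}/orders", params={"status": "bogus"}, timeout=10)
    print(r.status_code, r.text)           # an error? an empty list? silent acceptance?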

You may have heuristics, words and patterns and techniques that guide your learning. Still, there’s no explicit script to read or set procedure to follow when you’re learning about an API. Learning, and discovering problems in the product as you learn, is an exploratory process.

As you learn about the product, you begin to develop ideas about scenarios in which your product will be used. Some products perform single, simple, atomic functions. Others offer more elaborate sets of transactions, maintaining data or state between API calls. You will imagine and experiment with the conversations that programmers and other products might have with your product. Developing models of how the product will be used is an exploratory process.
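
One such imagined conversation, sketched against a hypothetical API that maintains state between calls, might run like this: create something, act on it, then ask the product what it remembers.

    import requests

    BASE = "https://example.com/api/v1"  # hypothetical service

    # Create a resource; the id it returns is state carried into later calls.
    created = requests.post(f"{BASE}/carts", json={"customer": "c-123"}, timeout=10)
    cart_id = created.json()["id"]

    # Act on the resource we just created.
    requests.post(f"{BASE}/carts/{cart_id}/items",
                  json={"sku": "widget", "quantity": 2}, timeout=10)

    # Ask the product what it remembers about the conversation so far.
    cart = requests.get(f"{BASE}/carts/{cart_id}", timeout=10)
    print(cart.json())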

As you learn more about the product, you begin to anticipate risks: bad things that might happen, and good things that might not happen. You may have heuristics that help you to develop risk models generally, but applying heuristics to develop a product-specific risk model is an exploratory process.

As you learn about risks, you develop strategies and ideas for testing for them. Developing strategies is an exploratory process.

You try those test ideas. Some of those test ideas reveal things that you anticipated. Some of them don’t. Why not? You investigate to find out if there’s a problem with the API, with your test idea, or with the implementation of your test idea. Interpreting results and investigating potential bugs is an exploratory process.

As your learning deepens, you may identify specific factors to be checked. You design and write code to check those factors. Developing those checks, like all programming, isn’t simply a matter of getting an idea and converting it into code. Developing check code is an exploratory process.
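
The output of that work might look something like the following pytest-style sketch against a hypothetical endpoint. The exploratory part is deciding which factors matter and how to observe them; the code is only the residue of those decisions.

    import requests

    BASE = "https://example.com/api/v1"  # hypothetical service

    def test_known_order_is_returned():
        r = requests.get(f"{BASE}/orders/42", timeout=10)
        assert r.status_code == 200
        assert r.json()["id"] == 42

    def test_unknown_order_is_reported_as_missing():
        r = requests.get(f"{BASE}/orders/999999", timeout=10)
        assert r.status_code == 404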

You may choose to apply those checks entirely in a repetitious, confirmatory way. On the other hand, every now and then — and especially when a new API call is being introduced or modified — you could induce variation, diversification, randomization, aggregation, data gathering, analysis, visualization, experimentation, and investigation into your work. Refreshing your strategies and tactics is an exploratory process.
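
One way to induce that kind of variation, sketched against the same hypothetical endpoint, is to randomize an input on each run and record what comes back, so that the results can be gathered and analyzed later rather than merely confirmed.

    import csv
    import random
    import requests

    BASE = "https://example.com/api/v1"  # hypothetical service

    with open("order_probe_log.csv", "w", newline="") as log:
        writer = csv.writer(log)
        writer.writerow(["quantity", "status_code", "body"])
        for _ in range(20):
            quantity = random.choice([0, 1, 7, 999, -1])  # varied input, not a fixed one
            r = requests.post(f"{BASE}/orders",
                              json={"item": "widget", "quantity": quantity},
                              timeout=10)
            writer.writerow([quantity, r.status_code, r.text])  # data for later analysis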

In other words, practically everything that we do in testing a product except the mechanistic running of the checks is exploratory work. “Do you do any exploratory testing on APIs?” is equivalent to asking “Given a product with an API, do you do testing?”

The answer is, of course, Yes. How do you do it? Find out in Part 2.

6 replies to “Exploratory Testing on an API? (Part 1)”

  1. application programming interfaces (APIs) are means by which we can use software to send commands to a product to make it do something.

    Michael replies: In one sense, the primary use case of an API is always to be used by another software program. It’s an application programming interface; an interface by which an application can use the product programmatically. In another sense, the primary use case of an API is for some person (a programmer) to get something done by writing code that links her program to another program.

    An API is always a software-to-software link or connection to another program, always in the interest of fulfilling some human purpose.

  2. 1. Thanks for the post.
    What if the documentation is not there, and we do not know what the expected results should be?
    How do we validate the API call's response?

    Michael replies: a) If the documentation is not there, how will the intended users of the API be able to use it? b) If the documentation isn’t there, and you’re reverse-engineering, consider the Honest Manual Writer Heuristic.

    2. In the sentence: “Wrapping your head around those new things, and stumbling over problems in them, is an exploratory process.”

    There is a hyperlink provided for the word “stumbling”, but it redirects to a blank page.

    Michael replies: That used to be true, but I’ve fixed it… thank you.
