Garbage truckloads of marketing bumph are being dumped into the testing space about “codeless” testing tools. For the companies producing these tools, to “test” seems to mean “performing a sequence of keystrokes or mouse clicks or button presses on an app”. (You can see the same pattern in many tutorials on “test automation”: write a script that executes a sequence of actions, and that’s a “test”.) But the marketing material is mute on how the tool aids the tester in recognizing problems in the product. The marketing focus is on how quickly the tool can repeat some sequence of keystrokes and clicks and pushes.
Automated data entry can be a useful part of a test, but automated data entry is not a test. Alas, the marketing approach works really well on managers (and, sadly, some testers) who can perceive only the visible activities of testing; not the cognitive aspects of it, and not the essential mission of testing: revealing the status of the product, and finding problems before it’s too late to do anything about them.
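To make the distinction concrete, here’s a minimal sketch of pure automated data entry, using Selenium’s Python bindings. (The URL and element ids are invented for illustration; they don’t refer to any real product.) The script performs actions, but nothing in it could recognize a problem:

```python
# A sequence of actions is not a test: this script enters data and clicks,
# but it observes nothing and evaluates nothing.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://shop.example.com/checkout")        # hypothetical page
driver.find_element(By.ID, "quantity").send_keys("3")  # hypothetical ids
driver.find_element(By.ID, "buy-button").click()
driver.quit()
```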
In Rapid Software Testing, testing is the process of evaluating a product by learning about it through experiencing, exploring, and experimenting, which includes to some degree questioning, studying, modeling, observation, inference, critical thinking, risk analysis, etc. Above all, testing requires us to focus on the risk that there are problems in the product, to anticipate problems, and to recognize problems that are present. That requires oracles; an oracle is a means by which we recognize a problem when we encounter one in testing.
The “no-code automation” tools supply weak oracles at best, typically checking for the presence of a particular element on the screen, or a particular value in some output field. If that element is there, or if that value matches some specified and presumably desirable result, the “test” “passes”. But that doesn’t mean that there is no problem; a product can have plenty of problems even when it arrives at a correct calculation, or drops the user on the requested page. You know this from your own experience. You’ve used iTunes, right? You’ve been on LinkedIn. You’ve tried to fix an indentation issue in Microsoft Word. Maybe not those things specifically, but I bet you’ve felt annoyed, frustrated, impatient, or baffled when trying to use software to get something done. Quite possibly today.
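In code, the typical check behind such a tool boils down to something like this sketch (same hypothetical page and ids as above): one equality comparison on one output field.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://shop.example.com/checkout")
driver.find_element(By.ID, "quantity").send_keys("3")
driver.find_element(By.ID, "buy-button").click()

# The entire "oracle": one value in one field matches one expected string.
total = driver.find_element(By.ID, "order-total").text
assert total == "$29.97", f"expected $29.97, got {total!r}"
driver.quit()

# A garbled layout, a ten-second delay, a confusing error message elsewhere
# on the page: none of that would cause this check to "fail".
```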
And there’s the rub: rather than a means to gain experience with the product, most of these tools represent a means to check the product for specific conditions that can be specified easily. There’s a seductive story to be told about that: you can run those checks over and over, really quickly, and find a few shallow bugs when something changes in a bad way. Yet the tools are fussy; a change in the product can throw the script off even when it’s a desirable change. Addressing that requires investigation, repair, and continuous maintenance, all of which takes time.
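For instance (continuing the hypothetical example above), a perfectly welcome redesign that renames a button’s id breaks the script’s locator, and the “test” fails even though the product is fine:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

driver = webdriver.Chrome()
driver.get("https://shop.example.com/checkout")
try:
    # Yesterday the page had <button id="buy-button">; today a desirable
    # redesign renamed it to "purchase-button".
    driver.find_element(By.ID, "buy-button").click()
except NoSuchElementException:
    # A "failure" with no problem in the product; someone must now
    # investigate and repair the script, and that time isn't free.
    print("Broken locator: time for script maintenance, not for testing.")
driver.quit()
```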
Then something even worse happens: testers, deliberately or not, don’t report the time it takes to deal with problems around the tool. Why wouldn’t they do that? One reason could be that management has spent a wad of money on the tool, and the vendor says it’s supposed to be simple. As a tester, to suggest that there are difficulties with the tool is to risk your reputation. Come on. It’s codeless. It’s supposed to be simple.
While all that is going on, the tool misses problems that would be easily apparent if testers were gaining real, human experience with the product. People find problems that tools miss because humans have a wonderful capacity to recognize problems that they have not been told about in advance. Humans bring rich sets of oracles to testing. Testers use their feelings, their social awareness, their memories, their tacit knowledge, their experience of the world, their familiarity with comparable or competitive products or features—all of these things, and more—to generate and apply oracles on the fly. But there’s less time available for gaining experience with the product and identifying unanticipated problems whenever the tester is repairing and maintaining the scripts.
Whether your testing tool is “codeless” or not, and whether your input is delivered by a script or entered directly via keyboard and mouse, the means of entering data is usually one of the least significant aspects of a test.
What matters is not typing quickly, but the tester’s capacity to recognize problems that matter. If there’s no oracle, there’s no test. If there are weak oracles, there’s weak testing.
Further reading:
A Context-Driven Approach to Automation in Testing
Oracles from the Inside Out
Want to learn how to observe, analyze, and investigate software? Want to learn how to talk more clearly about testing with your clients and colleagues? I’ll be presenting Rapid Software Testing Explored November 9-12, timed for daytime in North America and evenings in Europe and the UK. James Bach will be teaching Rapid Software Testing Managed November 17-20, and a flight of Rapid Software Testing Explored December 8-11. There are also classes of Rapid Software Testing Applied coming up. See the full schedule, with links to register, here.