A Three-Day Rapid Software Testing Class
“Test automation” is a hot topic. If you’re a programmer, you’re probably thinking of testing as a straightforward programming task. If you’re in the testing field, you’re probably being pressured to automate your work. And now along comes AI, with its own set of promises, problems, and pressures. To deploy or apply tools responsibly, you must know how to look at them critically and avoid the traps. This class is designed to help you learn how to do that.
What About Tools?
Creating, applying, and maintaining powerful tools to support testing can be immensely valuable. Doing those things well, however, presents huge challenges. This three-day class is designed to help you meet those challenges, looking through the lenses of the Rapid Software Testing approach, starting with this foundational idea: testing is evaluating a product by learning about it through experiencing, exploring, and experimenting.
So how should you go about applying tools in testing? What about “automated testing”? Only certain specific aspects of testing can be automated, so which aspects should you focus on? What comes first? There are lots of expensive tools you could use, but there are also free tools, and maybe you can create some tools yourself. What should you NOT focus on? What traps must you try to avoid? These questions have always been important, but they’re now more critical than ever, because…
What About AI?
Over the last few years, AI—“artificial intelligence”—has gone from a set of emerging technologies to an industry obsession. Testers are being pressed to test products with AI features, and to use AI in their work. Meanwhile, there is uncertainty and controversy about what “AI” even means, since “AI” is fundamentally a marketing term rather than an engineering term.
Whether designed for classification or prediction, or for generating text or code, all forms of AI have this in common: they are black boxes, imbued with supposedly magical properties, whose behaviour is neither controlled, nor understood, nor otherwise known to be safe. A responsible approach requires us to know enough about them to be aware of the risks—which means we must know how to test them.
Much testing has traditionally been framed as formalized, procedurally structured test cases that check output against specific, prescribed results. That won’t work for GPT-based technologies whose output is non-deterministic by design. The Rapid Software Testing approach—and its approach to tools and automation—is designed to address that problem by helping you learn how to test for real.
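To make the problem concrete, here is a minimal sketch in Python. The ask_model() function is a hypothetical stand-in for a call to a GPT-style system; the question, the answers, and the specific assertions are illustrative assumptions, not class material.

```python
import random

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a call to a GPT-style system: the
    # same question yields differently worded answers on different
    # runs, as generative output does by design.
    return random.choice([
        "Paris.",
        "The capital of France is Paris.",
        "Paris is the capital city of France.",
    ])

def check_exact() -> bool:
    # The traditional check: compare output to one prescribed result.
    # With output that varies by design, this fails intermittently
    # even when the behaviour is perfectly acceptable.
    return ask_model("What is the capital of France?") == "Paris."

def check_properties() -> bool:
    # A looser check: assert properties the output should exhibit.
    # Note what it cannot see: tone, accuracy of any elaboration,
    # and everything else that calls for human evaluation.
    answer = ask_model("What is the capital of France?")
    return "Paris" in answer and len(answer) < 200
```

Here check_exact() fails on two of the three equally reasonable answers, while check_properties() passes all three; even so, the property check leaves plenty for a thinking tester to evaluate.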
In this three-day class, we’ll help you—and through you, your organization—to expand notions of testing and automation beyond GUI or API output checking. We will show you some creative uses of inexpensive tools to probe data. We’ll present tools that can help you visualize and report on test coverage. We’ll help you to analyze dimensions of cost and value, and to evaluate how different kinds of tools can help or hurt your testing. We’ll talk about how to recognize and learn from things that happen in the secret life of testing and automation. And finally, we’ll look critically and pragmatically at AI — both in terms of how to test it, and how we might apply it in testing work.
Maybe you’re a coder; maybe you’re not. This class is designed to help you either way. If you don’t write code, we won’t teach you how, but we will help you learn to work immediately and productively with people who do.
This class is not affiliated with any tool vendor. We’ll use tools in class, but we don’t teach you to operate any specific tool. We may demonstrate or mention particular tools during the class, but we have not accepted and will not accept any payment or benefit from commercial interests. (We may be biased, but we have not been bribed.) We will show you how to analyze vendor claims critically.
In short, this class is about the essence of tool strategy: developing a vision and setting goals that make sense for you and your organization.
About the Authors
This class has been co-developed by James Bach and Michael Bolton, the authors of the Rapid Software Testing methodology.
Michael Bolton started in technology work as a programmer in 1988. Since then, he has worked in testing, program management, consulting, training, customer support, and documentation, developing and using tools all the way along.
James Bach is a developer-turned-tester who has been involved with automation in testing since 1987. James’ team was among the first to use spreadsheets to implement data-driven and keyword-driven automation. One of his most popular articles ever was Test Automation Snake Oil, written about the exaggerations and lies told by test tool companies in the 1990s — the same silliness common among tool vendors today.
Who Should Take This Training
Testing, Automation, and AI is for you if…
- you are a developer or a tester who is comfortable with writing code. We will help you see many ways of applying your coding skills to testing.
- you are a tester who does not code. We will help you understand the possibilities and challenges of creating automation, and help you learn to ask for what you need.
- you are a tester, developer, or manager responsible for testing an AI-based product. We will help you develop approaches to test capably and efficiently, with a focus on problems and business risk.
- you are a tester or developer under pressure to use AI in your work. We will help you to evaluate and apply AI-based tools in responsible ways.
- you are a quality coach or manager who is responsible for bringing in automation. We will help you create a strategy that avoids the common traps.
- you are from an organization that is struggling with existing automation. We will help you understand your situation and form a plan to improve it.
- you are a supporter of skilled, responsible testing, and you feel that you are under attack by technologists who think they can automate everything. We will help you defend your team and your work.
Goals of Testing, Automation, and AI
- To teach you how to plan and administer an effective and responsible strategy for applying automation and tools — including AI — to software testing.
- To help you avoid common traps that cause automation to be ineffective, or to suck the life and value out of testing.
Main Topics Covered
This class is taught Socratically, with exercises, discussions, and illustrations of automation within the RST methodology. Class discussions and debate address students’ questions and specific needs. We all learn from the unique perspective that each student brings to the class.
Here are some of the topics we can cover based on time and on the needs of the participants:
- Checking vs. testing: what can be automated and what can’t
- How tools can help in primary testing vs. regression testing
- Exercises in test strategy and talking to a coder about automation
- Demonstrations of creative ways to apply tools to testing
- Eleven traps of automation in testing
- How testing AI is testing as usual
- Factors that make AI a special testing problem
- How you might apply AI in testing – and how AI can go wrong
- Strategies for interacting with AI safely and productively for testing work
- Exercises in evaluating output from GenAI
- “LLM Syndromes” — patterns of undesirable behaviour in generative AI
- Thinking critically about “ROI” — really cost, value, and risk — of various kinds of automation
- The secret life of an automator: hidden costs
- Thinking critically about the claims made by commercial tool companies
How This Class Compares To Our Other RST Classes
We talk a lot about test strategy and a little about automation in each of our classes. This class focuses on incorporating tools and automation into your test strategy. However, this class does not attempt to teach you the mechanics of how to code or how to test.