Continuing Grig Gheorghiu’s questions from the Agile Testing mailing list…
I was just curious to know how you proceed in this case. I guess you teach your team to apply the rapid testing principles and techniques. Have you found that these principles/techniques are easily understood and applied? Are you using session-based testing? Have you still noticed regressions escaping out in the field? How many people do you usually have on your teams? I’d be interested in stuff like this…
Sometimes I am hired as a tester on short-term contracts. In the last couple of years, I’ve worked in teams varying in size from three to eight people. In such cases, rapid and exploratory techniques got a lot more results, more quickly, than the existing scripted tests did. The existing tests were usually dopey: stale, full of errors, and incapable of finding bugs. In some cases, the tests should have been automated; in most cases, they shouldn’t have been run at all, so low was their information value. The bugs that I found with exploration had never been covered by the existing regression tests, and my experience was that bugs, once fixed, stayed fixed.
In one organization, despite management’s initial skepticism, I did lots of exploratory testing. At first, management was concerned that I was running far fewer tests than the scripted testers were. However, I was finding far more bugs than the rest of the team, because they were wrestling with outdated and irrelevant scripted tests that narrowed their focus such that they weren’t spotting real problems. I was spending more time on bug investigation and reporting than on test design and execution, which cut into the coverage I could achieve. Since I was testing in a way that was (I believe) far more diversified and harsh than that of the scripts and the other testers, it wasn’t uncommon for me to find several bugs for each test idea.
Most of the tests that management required me to run had explicit data values associated with them. I found that the test data was stale throughout, but in updating it, I greatly expanded the range of values being used, and I found a lot more bugs. In the next test cycle, management was far more interested in an exploratory approach than they had been before. For that cycle, though, I persuaded them that we first needed to develop an automated oracle, which would allow us to do more exploratory tests, more quickly. This turned out to be quite a powerful tool. However, most of the testers were contractors, my mandate at that organization didn’t involve training, and the organizational structure inhibited disruption, even positive disruption.
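I can’t reproduce that oracle here, but the general shape is easy to sketch. An automated oracle is any mechanism that gives the tester a fast, independent answer to “is this result right?” Here’s a minimal Ruby sketch, with invented names and a stubbed-out system under test: sweep far more input combinations than a static data set would hold, and check each result against an independent reference calculation.

```ruby
# Minimal sketch of an automated oracle (all names hypothetical).

def system_under_test(amount, rate)
  # In real life this would drive the application; stubbed here so
  # the sketch runs standalone.
  (amount * rate).round(2)
end

def reference_oracle(amount, rate)
  # Deliberately simple, independent implementation of the same rule.
  (amount * rate).round(2)
end

mismatches = []
amounts = [0, 0.01, 1, 100, 999_999.99, -1]
rates   = [0.0, 0.05, 0.199, 1.0]

# Check every combination; a human would never keep up with this by hand.
amounts.product(rates).each do |amount, rate|
  actual   = system_under_test(amount, rate)
  expected = reference_oracle(amount, rate)
  mismatches << [amount, rate, expected, actual] unless actual == expected
end

mismatches.each do |amount, rate, expected, actual|
  puts "MISMATCH: amount=#{amount} rate=#{rate} expected=#{expected} got=#{actual}"
end
puts "#{mismatches.size} mismatches in #{amounts.size * rates.size} checks"
```

With something like this in place, the tester’s time goes into inventing interesting inputs and investigating mismatches, not into checking arithmetic by eye.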
A colleague is working at that organization, and reports that the default automation development strategy is to automate all existing tests. These are bad manual tests and would be even worse automated tests, so that’s an approach he is trying hard to change. That organization is big, slow, not agile, and not likely to change quickly.
In another organization, where I did active, day-to-day testing work, I was substituting for a woman who was on maternity leave. I used a mostly exploratory approach, aided and abetted by Perl and Microsoft Excel. I used FitNesse too, but found it of questionable value as a test tool (though I thought it had a lot of power as a tool for expressing requirements with runnable examples). In particular, small numbers of tests were being run against the GUI using HTMLFixture, which for me was a high-maintenance, low-value approach to GUI testing. At that level, eyeballing the application tended to be faster for most testing purposes.
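The Perl scripts themselves are long gone, but here’s a hypothetical Ruby analogue of the kind of throwaway helper I mean: generate combinations of boundary values into a CSV file that can be eyeballed in Excel or fed back into the application. The field names and values are invented for illustration.

```ruby
require 'csv'

# Hypothetical data-generation helper: cross boundary values for two
# fields and dump them as CSV for inspection or replay.
amounts    = [0, 0.01, 1, 100, 999_999.99, -1]
currencies = %w[CAD USD EUR XXX]  # XXX is deliberately invalid

CSV.open('test_data.csv', 'w') do |csv|
  csv << %w[amount currency]
  amounts.product(currencies).each do |amount, currency|
    csv << [amount, currency]
  end
end
```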
When she returned, I stayed on for a few more weeks, and in the last of them I gave a class in rapid testing. There were three testers on the team: Tester A had lots of experience with mostly scripted approaches; Tester B had lots of experience with mostly exploratory approaches; and Tester C had little testing experience but a fair amount of training in programming. One of the tenets of rapid testing is the diversified team, so this balance suited us well.
The team took the rapid testing practices and ran with them, instituting session-based testing and the testing dashboard (the Big Visible Chart approach to test cycle reporting). The project was a major rewrite of an in-production legacy application for which there were very few unit tests; the developers added them as they went. Tester C worked mostly with Ruby and WATIR; Tester A specialized in making sure that the important, risky test cases were covered; Tester B tended to build updated FitNesse stories and work on test design for upcoming development. All the testers used exploratory sessions to help improve the overall test design. Because other duties made it difficult for the test manager to keep up with the session debriefing protocol, the testers debriefed each other: Tester A would debrief Tester B; B would debrief C; and C would debrief A. They found lots of bugs.

During the last iteration before release, they swapped session sheets from earlier sessions, such that A would perform the session originally performed by B and debriefed by C, and so on. Session sheets are detailed enough to guide regression tests, but open enough to stimulate creative diversity and new test ideas. Because of the effectiveness of the earlier testing, the new unit tests, and the robustness of the fixes, very few bugs were found at regression testing time, and I’m not aware of anything serious that went into the release.
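To give a flavour of the WATIR work, here’s a hypothetical sketch against an invented login page, using the present-day Watir API rather than Tester C’s actual scripts. The point of scripts like this was to automate the tedious setup path so that the tester’s attention stayed on exploration.

```ruby
require 'watir'  # gem install watir; drives a real browser via WebDriver

# Hypothetical sketch: URL and element names are invented.
browser = Watir::Browser.new  # defaults to Chrome in current Watir
browser.goto 'http://test.example.com/login'
browser.text_field(name: 'username').set 'testuser'
browser.text_field(name: 'password').set 'secret'
browser.button(type: 'submit').click

# A cheap sanity check before handing the session over to a human.
puts "Landed on: #{browser.title}"
warn 'Login appears to have failed' unless browser.url.include?('dashboard')

browser.close
```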
One of the key themes of the rapid testing philosophy is diversity: diversity of people, diversity of models, diversity of coverage, diversity of quality criteria, diversity of test approaches. So rather than depending on the uniform behaviour of a machine and of the program that runs on it, we use diversity to find lots of bugs.
Both James Bach and I teach the Rapid Software Testing course. In my own experience teaching it, I’ve found that testers tend to grasp the skills easily and can begin to apply them immediately. Testers who were not motivated before the class become motivated; testers who were already motivated become more motivated still. By the end of the class, if it’s a corporate client, we often turn the testers loose on their own products, and they tend, very excitedly, to find lots of new problems. Many of our clients have adopted session-based test management, and report that it’s powerful and effective, and that the testers love it. Once testers are encouraged to use their skills and intelligence instead of being controlled by a script, they tend to find more bugs with less effort, and they feel good about it. When management recognizes and encourages this, a positive feedback loop gets set up.
James has been teaching rapid testing to a large client for the last six years. In 2001, he and his brother Jon were hired to start an exploratory testing team in one group; that team exists to this day. The team (not Jon and James) used session-based test management and its metrics to demonstrate to management that, using exploratory testing, they found far more bugs than they had with previous approaches, which were mostly automated.
A blend of approaches worked better. A direct quote from the project manager: “We re-architected our firmware component, modifying nearly 80% of the code. This redesign involved no change in product features, just restructuring of the code. We thought this would be a situation in which our regression tests would really show their worth. They already covered the functionality to be tested so testing could start early with no delay for test development. But we chose to test this large code change by applying our scripted regression tests and exploratory testing in parallel.
“There were approximately 100 defects found. Exploratory tests found about 80% of them.”
—Michael B.