
You Can’t Inspect Quality Into a Product

Over the last 35 years in the software business, I’ve heard the expression “You can’t test quality into a product.” To support that statement, I’ve seen this quote — or a part of it — repeated from time to time: “Inspection does not improve the quality, nor guarantee quality. Inspection is too late. The quality, good or bad, is already in the product.” As Harold F. Dodge said, “You can …

Four Frames for Testing, Part 7: Critical Distance

There’s a popular trope in the software development world these days that suggests that everybody on the team is responsible for testing. With that idea in mind, some people take an extreme position: since everyone tests, no one needs dedicated testers any more. Developers can do all the testing; or business analysts can do all the testing; or the customers can do all the testing. Then there’s another notion (which, …

Testing is Not Quality; Quality is Not Testing

Please remember: there’s a big difference between quality and testing; and so there’s a big difference between a quality strategy and a testing strategy. Understand the Nature of Quality The essence of quality is value to people. A quality strategy is a set of guiding ideas for building a product or service, in order to achieve the goal(s) of providing value to people. To develop a successful product, the people …

Once Again: The Created Artifact Isn’t the Point; The Creative Process Is

(Yet another post that started on LinkedIn…) There’s lots of advice out there on how to use Large Language Models to generate test ideas, test data, or test cases. Everything I’ve done and seen myself suggests that the test ideas from LLMs are pretty generic, banal, and uninspired. Considering how LLMs work, this is unsurprising. The majority of LLMs are trained on testing material from the Web, where the overwhelming …

It’s Not About the Artifact

There’s a significant mistake that people might make when using LLMs to summarize a requirements document, or to produce a test report. LLMs aren’t all that great at summarizing. That’s definitely a problem, and it would be a mistake to trust an LLM’s summary without reviewing the original document. The bigger mistake is in believing that the output, the artifact, is the important thing. We might choose to share a …

For the Interviewers: Evaluating Testing Skill

A prototype of this post originally appeared on LinkedIn. Today I was using Microsoft Word, and for the first time I took a look at a feature that’s probably been there for a long while. Also today, there’s at least one more LinkedIn poll with an interview question — apparently aimed at testers — on a fairly trivial aspect of Java programming. Questions of that nature might be reasonable if the …

When the Developers Are the Users

This is a lightly-edited version of a repost on LinkedIn. The original post contained a photo of a conference talk. The presenter was a dude in a Spiderman costume. (I’ve always wondered how many Spiderman costumes we’d see at meetings of doctors, or journalists, or theoretical physicists. But I digress.) The screen displayed a slide: “Everyone cares about User Experience, but no one cares about Developer Experience.” Spiderman outfit notwithstanding, …

A Reply to “Running a crowd-sourced experiment on using LLMs for testing”

This post and the ones that follow represent an expansion on a thread I started on LinkedIn. On September 30, 2023, Vipul Kocher — a fellow with whom I have been on friendly terms since I visited his company and his family for lunch in Delhi about 15 years ago — posted a kind of testing challenge on LinkedIn. I strongly encourage you to read the post. I’ll begin by …

“Should Sound Like” vs. “Should Be”

Yet another post plucked and adapted from the walled garden of LinkedIn: “What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be.” —Rodney Brooks, https://spectrum.ieee.org/gpt-4-calm-down Note for testers and their clients: the problem that Rodney Brooks identifies with large language models applies to lots of test procedures and test results as well. People often have …

Testing is Socially Challenging

This post has been brewing for a while, but a LinkedIn conversation today reminded me to put it in the bottle and ship it. Testing is socially challenging. There’s a double meaning there. One meaning is that testing involves challenging the product and our beliefs about it, in a social context. The other meaning is that probing the product and people’s beliefs about it can sometimes be uncomfortable for everyone …