
A Reply to “Running a crowd-sourced experiment on using LLMs for testing” — Part 2: Analysis

Vipul Kocher is a fellow whom I have known for a long time. I think we met in North America in the mid 2000s. I know I visited his company in Noida, New Delhi about 15 years ago, and spoke with his testers for an hour or so. On that occasion, I also visited his family and had a memorable home-cooked meal, followed by a mad dash in a sport … Read more

A Reply to “Running a crowd-sourced experiment on using LLMs for testing”

This post and the ones that follow represent an expansion on a thread I started on LinkedIn. On September 30, 2023, Vipul Kocher — a fellow with whom I have been on friendly terms since I visited his company and his family for lunch in Delhi about 15 years ago — posted a kind of testing challenge on LinkedIn. I strongly encourage you to read the post. I’ll begin by … Read more

Reliably Unreliable

“ChatGPT may produce inaccurate information about people, places, or facts.” —https://chat.openai.com/ Testing work comes with a problem: the more we test, the more we learn. The more we learn, the more we recognize other things to learn. When we investigate a problem, there’s a non-zero probability that we’ll encounter other problems — which in turn leads to the discovery of more problems. In the Rapid Software Testing namespace, we’ve come … Read more

Lousy Solutions to Problems We Create

Code is created by people. HTML elements are code, which is created by people. As part of developing those elements, people can tag them with attributes (including but not limited to “id”) that make them easy to find, via tools, for testing purposes. This can be done deliberately and consciously, or automatically as part of the creation of the element. When that doesn’t happen, testers and developers have difficulty locating … Read more

Calculating ROI

In preparation for our automation and management classes, James Bach and I are currently analysing ROI calculators (“Here’s what you can save over N years using our product!”) and we’re going deep on reverse-engineering one from a prominent tool vendor. Opaque formulas; undefined and unexplained terminology; unnamed and inexplicable constants; weird, inexplicable coefficients in the calculation; poor testability; hilarious bugs sitting right there on the surface; impossible conclusions due to … Read more

Experience Report: Using ChatGPT to Generate and Analyze Text

In the previous post, I described ChatGPT as being a generator of bullshit. Some might say that’s unfair to ChatGPT, because bullshit is “speech intended to persuade without regard for truth”. ChatGPT, being neither more nor less than code, has no intentions of its own; nor does it have a concept of truth, never mind regard for it, and therefore can’t be held responsible for the text that it produces. … Read more

Response to “Testing: Bolt-on AI”

A little while back, on LinkedIn, Jason Arbon posted a long article that included a lengthy conversation he had with ChatGPT.  The teaser for the article is “A little humility and curiosity will keep you one step ahead of the competition — and the machines.”  The title of the article is “Testing: Bolt-on AI” and in Jason’s post linking to it, I’m tagged, along with my Rapid Software Testing colleague … Read more

ChatGPT and a Math Puzzle

The other day on LinkedIn, Wayne Roseberry posted a puzzle that (he says) ChatGPT solved correctly. Here’s the puzzle. “Bob and Alice have a rectangular backyard that has an area of 2500 square feet. Every morning, Alice walks the 50 feet from her back door to the neighbor to pick up their laundry as well. What is the longest straight line that can bisect Bob and Alice’s back yard?” According to … Read more
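The teaser cuts off before the analysis, but the arithmetic behind the puzzle is easy to sketch: the longest straight line that bisects a rectangle is its diagonal, and the puzzle fixes only the area (2500 square feet), not the dimensions, so the diagonal is not uniquely determined. A minimal sketch, assuming nothing beyond the quoted puzzle text, with the candidate widths chosen purely for illustration:

```python
import math

def diagonal(width, height):
    """Length of a rectangle's diagonal — the longest straight
    line segment that bisects the rectangle."""
    return math.sqrt(width ** 2 + height ** 2)

area = 2500
# With only the area given, different dimensions give different answers.
for width in (50, 25, 10):
    height = area / width
    print(f"{width} x {height:g} ft: diagonal = {diagonal(width, height):.2f} ft")
# A 50 x 50 yard gives 50 * sqrt(2) ≈ 70.71 ft; a 25 x 100 yard gives
# a longer diagonal, so the puzzle as stated is underdetermined.
```

The 50-foot walk to the neighbor says nothing about the yard's dimensions, so any confident single answer rests on an unstated assumption (e.g. that the yard is square).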

Evaluating the Chatbots

Is ChatGPT getting dumber? This paper raises the question; this blog post questions the conclusions; and this article has more to say. That’s not a very useful question, because “dumber” is not exactly a property of ChatGPT (or anything else). It’s a set of relationships between ChatGPT’s behaviour; people’s notion(s) of dumb and smart; and the context. Evaluating that requires a complex set of perspectives, values, and social judgements. For … Read more

“Should Sound Like” vs. “Should Be”

Yet another post plucked and adapted from the walled garden of LinkedIn “What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be.” —Rodney Brooks, https://spectrum.ieee.org/gpt-4-calm-down Note for testers and their clients: the problem that Rodney Brooks identifies with large language models applies to lots of test procedures and test results as well. People often have … Read more