
Bing Chat, the Evaluate Function, and the Wolfram Alpha Plugin

When you read or even scan this post, you’re likely to say something like “Holy hopscotch, that’s a long post.”  And you’ll be right. And you might be inclined to say “…and it’s boring.” And depending on your perspective, you’ll be right about that, too. It certainly has taken a significant amount of time to edit and to narrate. If you’re interested in risk associated with Large Language Models and … Read more

A Reply to “Running a crowd-sourced experiment on using LLMs for testing” — Part 2: Analysis

Vipul Kocher is a fellow whom I have known for a long time. I think we met in North America in the mid 2000s. I know I visited his company in Noida, New Delhi about 15 years ago, and spoke with his testers for an hour or so. On that occasion, I also visited his family and had a memorable home-cooked meal, followed by a mad dash in a sport … Read more

A Reply to “Running a crowd-sourced experiment on using LLMs for testing”

This post and the ones that follow represent an expansion on a thread I started on LinkedIn. On September 30, 2023, Vipul Kocher — a fellow with whom I have been on friendly terms since I visited his company and his family for lunch in Delhi about 15 years ago — posted a kind of testing challenge on LinkedIn. I strongly encourage you to read the post. I’ll begin by … Read more

Reliably Unreliable

“ChatGPT may produce inaccurate information about people, places, or facts.” (https://chat.openai.com/) Testing work comes with a problem: the more we test, the more we learn. The more we learn, the more we recognize other things to learn. When we investigate a problem, there’s a non-zero probability that we’ll encounter other problems — which in turn leads to the discovery of more problems. In the Rapid Software Testing namespace, we’ve come … Read more

Experience Report: Using ChatGPT to Generate and Analyze Text

In the previous post, I described ChatGPT as being a generator of bullshit. Some might say that’s unfair to ChatGPT, because bullshit is “speech intended to persuade without regard for truth”. ChatGPT, being neither more nor less than code, has no intentions of its own; nor does it have a concept of truth, never mind regard for it, and therefore can’t be held responsible for the text that it produces. … Read more

Response to “Testing: Bolt-on AI”

A little while back, on LinkedIn, Jason Arbon posted a long article that included a lengthy conversation he had with ChatGPT.  The teaser for the article is “A little humility and curiosity will keep you one step ahead of the competition — and the machines.”  The title of the article is “Testing: Bolt-on AI” and in Jason’s post linking to it, I’m tagged, along with my Rapid Software Testing colleague … Read more

ChatGPT and a Math Puzzle

The other day on LinkedIn, Wayne Roseberry posted a puzzle that (he says) ChatGPT solved correctly. Here’s the puzzle. “Bob and Alice have a rectangular backyard that has an area of 2500 square feet. Every morning, Alice walks the 50 feet from her back door to the neighbor to pick up their laundry as well. What is the longest straight line that can bisect Bob and Alice’s back yard?” According to … Read more
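(As a quick aside on the arithmetic in that puzzle, and only as a sketch resting on the unstated assumption that Alice’s 50-foot walk implies a 50-by-50-foot square yard: the area alone does not pin down the yard’s dimensions, so it does not pin down the longest bisecting line either.)

\[
\text{For a } w \times \tfrac{2500}{w} \text{ yard, the longest bisecting line is the diagonal } \sqrt{w^{2} + \left(\tfrac{2500}{w}\right)^{2}},
\]

which grows without bound as \(w\) grows; only under the assumed 50-by-50 square does it come to \(50\sqrt{2} \approx 70.7\) feet.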

Winding Up

After 20 years of working together to develop the Rapid Software Testing approach, James Bach and I have decided that — improbable as it may seem — it’s time to wrap it all up. Perhaps this will be a surprise to our followers in the community, but we now must confront what we previously thought was unimaginable: recent developments in technology have, for all intents and purposes, made testing obsolete. … Read more