
Voldemort, Part 2

The saga continues. OpenAI has noted the problem with David Mayer, putting it down to “a technical glitch”. As of this writing (around 2:00pm, Eastern Time, 2024-12-03), exactly the same issue persists with the name “Brian Hood”. (Here’s a link: https://chatgpt.com/share/674f5626-feb0-8009-8d82-c773b83416ae) But maybe there’s a hint as to why. A little more persuasion provides this: (and here’s a link: https://chatgpt.com/share/674f6095-f04c-8009-bdf3-daa747fec30c) ChatGPT’s guardrails are made of silly … Read more

Voldemort Syndrome

Since June 2023, James Bach and I have been collecting a set of “syndromes” associated with certain forms of AI — chatbots based on Large Language Models (LLMs) and Generative Pre-trained Transformers (GPTs). The most prominent of these, at this writing, is OpenAI’s ChatGPT. Today we added a new syndrome: Voldemort Syndrome. Today LinkedIn (and much of the rest of the internet) lit up over “The Man Who Shall … Read more

Bug of the Day: Facebook’s AI Layer Mangles Two Posts

Today I visited Facebook to post a notice of my upcoming trip to New Zealand. There will be three stops on the tour: Auckland (for Testers and Automation, Avoiding the Traps, February 17-19), Wellington (Testers and Automation, Avoiding the Traps, February 24-26), and Christchurch (Rapid Software Testing Explored, March 10-12). Facebook’s AI Layer (I’ll just call that FAIL) offered to turn it into an event. I accepted the offer, and … Read more

What Are We Thinking in the Age of AI?

At the Pacific Northwest Software Quality Conference in October 2024, I gave a keynote presentation titled “What Are We Thinking in the Age of AI?” There’s a lot to think about, and for testers, there’s a lot to do. For one, we need to understand the basis for the “AI” claim. Any kind of software can be marketed as “AI”, since it’s doing something that (presumably) a human could do, … Read more

Test Tools Need Testing

In any testing situation, when you’re using a tool, you must understand its working principles. You must know what it can and cannot do. You must know how to configure it, and how to calibrate it, how to observe it in action, and how to adjust or repair it when it’s not working properly. To do THAT effectively, you must be able to recognize when your tool is not working. … Read more

Language Models

“Language models” is typically interpreted as a compound noun, something that models language. A model is an idea in your mind, an activity, or an object (such as a diagram, a list of words, a spreadsheet, a role play, a person, a toy, an equation, a demonstration, or a program…) that represents (literally! re-presents!) another idea, activity, or object (such as something complex that you need to work with or … Read more

The First Hurdle Heuristic

There is a testing technique that I often apply. I have recently decided to name it the First Hurdle Heuristic. The basic idea: get the product out of the starter’s blocks, and see how it performs given a relatively easy challenge. This heuristic can be useful when you want to identify problems and risks immediately, or to determine whether a product might not be ready for use or for deeper testing. … Read more

Yes, We Still Need To Look. Carefully.

I very occasionally visit Xitter (pronunciation tip: it goes like the name of the President of the People’s Republic of China). The other day, Jason Huggins said something there. Just in case you’re using a screen reader, that’s “I occasionally use the Tesseract OCR library for text recognition. I think that means I’m a senior machine learning engineer now, I guess.” I felt a little impish, but I also felt quite lazy. … Read more

Testing ChatGPT’s Programming “Skills”

With the current mania for AI-based systems, we’re finally starting to hear murmurs of moderation and the potential for risk. How do we test systems that incorporate an LLM? You already know something about how to test LLM systems if you know how to test. Testing starts with doubt, and with a desire to look at things critically. The other day on LinkedIn, Paramjit Singh Aujla presented a problem … Read more

It’s Not About the Artifact

There’s a significant mistake that people might make when using LLMs to summarize a requirements document, or to produce a test report. LLMs aren’t all that great at summarizing. That’s definitely a problem, and it would be a mistake to trust an LLM’s summary without reviewing the original document. The bigger mistake is in believing that the output, the artifact, is the important thing. We might choose to share a … Read more