Repeatability and Adaptability

Arianna Huffington, on the Daily Show, suggested that one point of the blog was to work out nascent ideas without being overly concerned about completeness. There are a bunch of things that are rattling around at the moment, from all kinds of different sources. One is The Sciences of the Artificial, by Herbert Simon—a book that James Bach has been recommending to me practically forever. I’m finally getting around to reading it. Another is a set of radio programs on the CBC’s Ideas series, called How to Think About Science, about which much more in a post Real Soon Now. That series prompted me to purchase The Structure of Scientific Revolutions and Leviathan and the Air-Pump. I’ve heard from many people about the former. The latter is a book on the controversy between Robert Boyle and Thomas Hobbes—a controversy that had aftershocks in the Science Wars of the 1990s, and which echoes in the controversy over schools of software testing. Another is the book The Omnivore’s Dilemma, which is a general systems book (with a couple of good points about testing, too) cleverly disguised as a natural history of four meals.

Having a little time off can create a lot more work. This stuff has me sufficiently excited that I’m finding it difficult to accomplish any of the mandated writing I have to do, but we’ll let the fieldstones fall where they may.

Tectonic forces are building up due to friction between two plates. On the one hand, many people in the software development and testing business seem obsessed with the need to reduce variation, to focus on repeatability, to confirm the things that we know. These are admirable and important goals, and we’d have a tough time if we ignored them. On the other hand, many people—I’m one—while honouring the importance of the confirmatory mode, are more concerned with the need to examine adaptability, to recognize the limitations of repeatability, and to explore with the goal of extending the boundaries of what we know.

I’ll have more to say about all this in the days ahead (let’s face it; it’ll probably take years), but today I was browsing General Principles of System Design (formerly titled On The Design of Stable Systems), and found this gem, The Fundamental Regulator Paradox:

The task of a regulator is to eliminate variation, but this variation is the ultimate source of information about the quality of its work. Therefore, the better job a regulator does, the less information it gets about how to improve.

Put into more memorable words, the Fundamental Regulator Paradox says: Better regulation today risks worse regulation tomorrow.

This sums up why you can’t get through to anyone important at the big telcos by phoning; why they don’t publish their phone numbers online, or why they bury them deep in the Web site; why the corporate immune system exists. It helps to explain how the very largest financial institutions proved to be highly vulnerable to huge losses. It helps to explain how governments that suppress dissent inevitably fall. These systems don’t want to be disturbed, and the easiest way to avoid disturbance is to reject information of all kinds. Mark Federman wrote a wonderful paper called Listening to the Voice of the Customer, which is exactly about all that stuff.

Recently I was polled on my opinion about a one-day power outage that happened in our neighbourhood. The poll questions and the format for answering them were extremely restrictive, designed to simplify rich stories and detailed information into data—groupings of responses ranging from very satisfied (7) to very dissatisfied (1). I’m sure that this made the poll results more digestible for the utility company’s managers, but by the time everything had been sifted into a one-to-seven value, any human dimension that might compel action would have been removed. That would include stories about seniors stranded without heat, lights, water, or elevators in 17-storey apartment buildings on the coldest day of the year, or the owners of small grocery stores that lost thousands of dollars because the fridges warmed up for lack of electricity before the building cooled down for lack of heat. And because the poll was designed to limit variability in the answers, I grew sufficiently frustrated to give up a few questions in. Thus the utility company ended up hearing nothing at all from me, just as the Fundamental Regulator Paradox would suggest.

So here are my questions: is your testing priority to make things repeatable, or is it to elicit new information? Is your job to reproduce known results, or to test for adaptability? And one that’s a little more sobering, perhaps: to what extent does your current testing process reject information, rather than seeking it out? Do you let your program “speak its mind” to you by interviewing it and having a conversation with it? Or do you have a set of standard multiple choice questions that you want it to answer in a highly constrained way?

2 replies to “Repeatability and Adaptability”

  1. Recently I ran across “Quantum Psychology” by Wilson, in which he has a brief section on Shannon’s information theory. According to Shannon’s famous equation, the more predictable the message, the less new information it provides. An example Wilson gives is a speech by a politician: rarely are we surprised by the content of such a speech.

    Regression testing, in my experience, is a case where the information is predictable. If there is a lot of code churn there can be many failures, but you know what is in the test suite, so you know what is available to fail, so to speak.

    On the other hand, when I’ve set up interesting transaction or data flow testing, I’ve found a lot of surprises throughout the development cycle.

    Confirming what we know is comfortable: devs writing happy-path unit tests, regression suites that rarely find new bugs, well-worn checklists, and so on.

    Trying to find surprising information is hard work! I find few test professionals have the skills to design tests that put an application through the wringer in a way that respects project resources and customer expectations, to find the kind of valuable, rare information you seem to be wondering about.
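
    (Shannon’s “famous equation” above is self-information, I(x) = -log2 p(x): the less probable a message, the more bits of new information it carries. A minimal sketch in Python, with probabilities invented purely for illustration:)

    ```python
    import math

    def self_information(p: float) -> float:
        """Shannon self-information, in bits, of an outcome with probability p.
        Near-certain outcomes carry almost no information."""
        return -math.log2(p)

    # A highly predictable message (the politician's speech, the regression
    # suite that almost always passes) carries very little information ...
    print(self_information(0.99))  # ~0.0145 bits: almost no surprise
    # ... while a rare, surprising outcome carries much more.
    print(self_information(0.01))  # ~6.64 bits: lots of new information
    ```

    (A coin flip, p = 0.5, yields exactly 1 bit, which is why the bit is the unit here.)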

  2. Hello Michael,
    Once again, an interesting post.
    It starts with the blog example, where you don’t have to be concerned about completeness. It continues with the regulator paradox, where, in my opinion, control through minimizing information is set against offering information fully.
    And it ends with the questions I often have to deal with: control the test process, and adapt the test process.

    What I often see is a situation where time is given to work on only one of these two forces. Management wants you to control, and the business wants you to adapt. If management starts listening to the business, you also have to adapt. The pitfall is that project management often doesn’t check what impact this will have, so you will never be able to do a good job unless you are able to define the borders of both forces. I think this can be done, as you said, by interviewing.

    I think it shouldn’t only be you who’s interviewing. You might lead project management to approach the program in the same way; by showing them that they themselves don’t work from multiple-choice lists either, but instead ask questions, you can convince them that full control (regulation) is impossible.

    Perhaps this might create better-accepted boundaries within which you can work, and create acceptance of your skills instead of only counting the scripts.
