
Project Estimation and Black Swans (Part 4)

Over the last few posts, exploratory automation has suggested some interesting things about project dynamics and estimation. What might we learn from these little mathematical experiments?

The first thing to emphasize is that we’re playing with numbers here. This exercise can’t offer any real construct validity, since an arbitrary chunk of time combined with a roll of the dice doesn’t match software development in all of its complex, messy, human glory. In a way, though, that doesn’t matter too much, since the goal of this exercise isn’t to prove anything in particular, but rather to raise interesting questions and to offer suggestions or hints about where we might look next.

The mathematics appears to support an idea touted over and over by Agile enthusiasts, humanists, and systems thinkers alike: make feedback rapid and frequent. The suggestion we might take from the last model—fewer tasks and shorter projects—is that the shorter and better-managed the project, the less chance a Black Swan has to hurt you in any given project.
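
I won’t rerun the earlier posts’ model here, but a minimal sketch of the same kind of experiment shows the shape of the argument. Suppose, purely as an illustrative assumption rather than the model’s actual numbers, that each task is estimated at one unit of time, and that any task has a 2% chance of turning into a ten-unit “Black Cygnet”. The chance that a given project contains at least one such surprise grows quickly with the number of tasks:

```python
import random

# A sketch only: the 2% probability and the tenfold overrun are
# assumptions for illustration, not the earlier posts' actual model.
P_CYGNET = 0.02   # assumed chance that a task becomes a Black Cygnet
OVERRUN = 10      # assumed cost of such a task, in estimate units

def run_project(n_tasks):
    """Simulate one project; return (duration, whether a cygnet appeared)."""
    duration, hit = 0, False
    for _ in range(n_tasks):
        if random.random() < P_CYGNET:
            duration += OVERRUN
            hit = True
        else:
            duration += 1
    return duration, hit

def fraction_hit(n_tasks, trials=100_000):
    """Fraction of simulated projects containing at least one cygnet."""
    return sum(run_project(n_tasks)[1] for _ in range(trials)) / trials

for n in (10, 50, 100):
    print(f"{n:3d}-task project: {fraction_hit(n):.1%} hit by a cygnet")

# Analytically, the probability is 1 - (1 - 0.02)**n: about 18% at
# 10 tasks, 64% at 50, and 87% at 100. Ten short projects carry the
# same total exposure as one long one, but each short project is far
# more likely to come in close to plan.
```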

Another plausible idea that comes from the math is to avoid projects where the power-law distribution applies—projects where you’re vulnerable to Wasted Mornings and Lost Days. Stay away from projects in Taleb’s Fourth Quadrant, projects that contain high-impact, high-uncertainty tasks. To the greatest degree possible, stick with things that are reasonably predictable, so that the statistics of random and unpredicted events don’t wallop us quite so often. Stay within the realm of the known, “in Mediocristan” as Taleb would say. Head for the next island, rather than trying to navigate too far over the current horizon.
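
To see the difference between those two worlds in miniature, here’s another hedged sketch (my framing, not the earlier posts’ model): draw task durations from a thin-tailed distribution and from a heavy-tailed power-law (Pareto) distribution, and compare how much of the total a single worst task can consume.

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable

def mediocristan_task():
    # Thin-tailed: durations cluster near 1 and never stray far.
    return random.uniform(0.5, 1.5)

def extremistan_task(alpha=1.5):
    # Pareto (power-law) tail: mostly near 1, occasionally enormous.
    # With alpha <= 2 the variance is infinite, so a single draw can
    # dwarf hundreds of ordinary ones.
    return random.paretovariate(alpha)

for label, task in (("Mediocristan", mediocristan_task),
                    ("Extremistan ", extremistan_task)):
    durations = [task() for _ in range(1000)]
    total, worst = sum(durations), max(durations)
    print(f"{label}: total={total:8.1f}, worst task={worst:7.1f} "
          f"({worst / total:.1%} of the whole project)")
```

In the thin-tailed world the worst task is a rounding error; in the power-law world a single Lost Day (or Lost Month) can swallow a noticeable fraction of the entire project. That is exactly the regime in which estimates stop being trustworthy.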

In all that, there’s a caveat. It is of the essence of a Black Swan (or even a Black Cygnet) that it’s unpredicted and unpredictable. Ironically, the more successful we are at reducing uncertainty, the less often we’ll encounter rare events. The rarer the event, the less we know about it—and therefore, the less we’re aware of the range of its potential consequences. The less we know about the consequences, the less likely we are to know how to manage them—certainly the less specifically we know how to manage them. In short, the rarer the event, the less information and experience we’ll have to help us deal with it. One implication of this is that our Black Cygnets, in addition to adding time, have a chance of screwing up other things in ways that we don’t expect.

Some people would suggest that we eliminate variability and uncertainty and unpredictability. What a nice idea! By definition, uncertainty is the state of not knowing something; by definition, something that’s unpredictable can’t be predicted. Snowstorms happen (even in Britain!). Servers go down. Power cuts happen in India on a regular basis—on my last visit, I experienced three during class time, and three more in the evening during a two-day stay at a business-class hotel. In North America, power cuts happen too—and because we’re not used to them, we aren’t prepared to deal with them. (To us they’re Black Swans, where to people who live in India, they’re Grey Swans.) Executives announce all-hands meetings, sometimes with dire messages. Computers crash. Post-it notes get jammed in the backup tape drive. People get sick, and if they’re healthy, their kids get sick. Trains are delayed. Bicycles get flat tires. And bugs are, by their nature, unpredicted.

So: we can’t predict the unpredictable. There is a viable alternative, though: we can expect the unpredictable, anticipate it to some degree, manage it as best we can, and learn from the experience. Embracing the unpredictable reminds me of the Fundamental Regulator Paradox, from Jerry and Dani Weinberg’s General Principles of Systems Design, which I’ve referred to before:

The task of a regulator is to eliminate variation, but this variation is the ultimate source of information about the quality of its work. Therefore, the better job a regulator does, the less information it gets about how to improve.

This suggests to me that, at least to a certain degree, we shouldn’t make our estimates too precise, our commitments too rigid, our processes too strict, our roles too closed, and our vision of the future too clear. When we do that, we reduce the flow of information coming in from outside the system, and that means that the system doesn’t develop an important quality: adaptability.

When I attended Jerry Weinberg’s Problem Solving Leadership workshop (PSL), one of the groups didn’t do so well on one of the problem-solving exercises. During the debrief, Jerry asked, “Why did you have such a problem with that? You handled a much harder problem yesterday.”

“The complexity of the problem screwed us up,” someone answered.

Jerry peered over the top of his glasses. He replied, “Your reaction to the complexity of the problem screwed you up.”

One of the great virtues of PSL is that it exposes you to lots of problems in a highly fault-tolerant environment. You get practice at dealing with surprises and behaviours that emerge from giving a group of people a moderately complex task, under conditions of uncertainty and time pressure. You get an opportunity to reflect on what happened, and you learn what you need to learn. That’s the intention of the Rapid Software Testing class, too: to expose people to problems, puzzles, and traps; to give people practice in recognizing and evading traps where possible; and to help them deal with problems effectively.

As Jerry has frequently pointed out, plenty of organizations fall victim to bad luck, but much of the time, it’s not the bad luck that does them in; it’s how they react to the bad luck. A lot of organizations hobble themselves when they fail to foster environments in which everyone is empowered to solve problems. That leaves problem-solving in the hands of individuals, typically people with the title of “manager”. Yet at the moment a problem is recognized, the manager may not be available, or may not be the best person to deal with the problem. So, another reason that estimation fails is that organizations and individuals are not prepared or empowered—mentally, politically, and emotionally—to deal with surprises. The ensuing chaos and panic leave them more vulnerable to Black Swans.

Next time, we’ll look at what all of this means for testing specifically, and for test estimation.

6 replies to “Project Estimation and Black Swans (Part 4)”

  1. Uncertainty and unpredictability are part of everything: see, for instance, http://en.wikipedia.org/wiki/Uncertainty_principle.
    We could, however, invent this: http://en.wikipedia.org/wiki/Transporter_(Star_Trek); then all our problems would be gone… wouldn’t that be something 😉

    Love the Monte Carlo angle; as a former chemical physicist, I never thought of looking at it this way. Another door opened… wow!

    Great series, Michael!

    Michael replies: Thanks, Ray.

  2. Great series, Michael!

    While the scariest challenge for a large project is often the complexity introduced by having several interdependent (or independent) pieces of functionality, those projects can be broken down into mini-projects (sprints) so that early and rapid feedback can be obtained.

    Because I love hockey (and hey, the Leafs are winning the Cup this year), here’s an analogy: you win a game by winning each shift. By focusing on winning each shift, you end up in a better position to win the whole game. A project is no different. If we can confirm the acceptability of components as they are built and tested, and confirm that previous components are still acceptable, then we will deliver successful projects. Plus, a Black Swan that hasn’t had months and months to linger in the system and feed and grow is much smaller and easier to deal with (this coming from a guy who has been chased by real swans :)).

  3. “The complexity of the problem screwed us up,” someone answered.

    Jerry peered over the top of his glasses. He replied, “Your reaction to the complexity of the problem screwed you up.”

    This is a good lesson for companies that don’t invest seriously in testing.

