You need to become a better futurist. You need to up your game at least to “prosumer” level.

Why? Because every decision you make, whether about your organization’s strategy or your own career, is based on a view of the future, an implicit or explicit story about where things are headed. And that view is flawed. Any view of the future is flawed.

The question is not whether we can eliminate the flaws and predict the future. We can’t. The question is whether we can hone our vision to at least tell us something useful about the future, about what is more likely to happen, what is more likely to make a difference and what would be the leading indicators of which way it will go.

So, one reason you might consult with a futurist or hear one speak is to learn what he or she thinks about the future — but the other reason is to hear how he or she thinks about it. Because you need to think about your future and your organization’s in a much more fine-grained way.

How do you do that? I could just say, “Here’s how I do it.” Because it’s not magic. I mean, I do have three crystal balls right here on my desk, but they are nonoperational. They are here to remind me of the key to good thinking about the future: insight. That is, it’s not so much about numbers and extrapolating trends but about creating narratives based on thoughtful insight, then testing the narratives by asking questions such as: What would have to be true for this to happen? What would be the signs that we are headed in this direction? Is this happening anywhere yet?

The evidence

Some people have done the research on forecasting. They found a number of people who could forecast better than our intelligence services, without any special access to secret information, and then studied how those people work. Let’s take a brief look at what they found and how it might apply to our thinking about health care.

After the shock of 9/11, U.S. intelligence services did a lot of introspection, asking themselves how they could improve their work. One result was a crowdsourcing experiment, the Good Judgment Project, sponsored starting in 2011 by the Intelligence Advanced Research Projects Activity. A multidisciplinary team from the University of California, Berkeley, and the University of Pennsylvania put together a process in which ordinary people could sign up as forecasters and be rated on their success.

The forecasters could be anybody: professional statisticians, bus drivers, professors, checkers at Wal-Mart — it didn’t matter. They would be given a set of questions to choose from to make predictions on, things like, “What are the chances that in the next six months there will be a coup in Liberia?” The questions were time limited, framed so they would have definite answers and be neither too easy nor impossible. The forecasters would make a prediction based on whatever information they could find. They could update the prediction as often as they liked, but each update was treated as a new prediction.
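
A note on the rating itself: tournaments like this typically grade forecasters with Brier scores, a standard squared-error measure of how far a stated probability falls from what actually happened (zero is perfect; higher is worse). The sketch below, in Python, shows a simplified form of that kind of scoring; the question, the probabilities, the outcome and the averaging over updates are hypothetical illustrations, not the project’s exact rules.

```python
# Minimal sketch: scoring a forecaster's probability estimates with a Brier score,
# the standard squared-error measure used in forecasting tournaments.
# The question, probabilities and outcome below are hypothetical illustrations.

def brier_score(forecast_prob: float, outcome: int) -> float:
    """Simplified one-sided Brier score for a yes/no question.

    forecast_prob: probability assigned to the event happening (0.0 to 1.0)
    outcome: 1 if the event happened, 0 if it did not
    Returns 0.0 for a perfect forecast and 1.0 for the worst possible one.
    """
    return (forecast_prob - outcome) ** 2

# "What are the chances of X in the next six months?" -- the forecaster revises
# the estimate as new information arrives; each revision counts as a new forecast.
updates = [0.30, 0.45, 0.60, 0.80]  # successive probability estimates
outcome = 1                         # the event did happen

scores = [brier_score(p, outcome) for p in updates]
print("Score per update:", [round(s, 3) for s in scores])
print("Average Brier score: %.3f (lower is better)" % (sum(scores) / len(scores)))
```

Averaging across updates is what gives the rule “each update is a new prediction” its teeth: a forecaster is rewarded for revising toward the truth and penalized for sitting on a stale estimate.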

The result? Over the first year, most people did pretty well, and better than random. A small group, though, did astonishingly better. That group was given its own discussion forum and followed for another year as the experiment continued. Over that second year, some “reverted to the mean,” doing no better than most people; their great score in the first year seemed to have been mostly luck. But some did spectacularly better than most in the second year as well.

'Superforecasters'

As the experiment continued, some people stayed in that most successful group year after year. These were the “superforecasters.” The researchers studied them to see if there was anything about their success that could be emulated. It turned out on close inspection that they did have certain ways of working in common — and they were not what most people might expect. These methods form the meat of the book Superforecasting: The Art and Science of Prediction by Philip Tetlock and Dan Gardner.

When I read the book, I found the superforecasters’ methods not at all surprising. In fact, they were a pretty good match for the way I think about the future. There is a lot to these methods, but let me try to summarize them. Principally, they were not about mathematically extrapolating trends. They were about building narratives.

The best forecasters read widely even when they weren’t working on a problem. Their working model of the world was constructed from a wide variety of inputs. They would cast a wide net, often wider than the inside experts in that field might cast. (I study, for instance, among many other things, why shipping traffic is increasing past the Cape of Good Hope and why growing numbers of barges are being laid up on the Mississippi.)

In considering a problem, they would formally or informally construct a scenario, a story about something happening, putting in place what would have to happen for it to be true. Then they would sift all the data they could find to prove or disprove the story or to modify it.

The problem with statistics

The problem with quant-based, trend-following forecasting techniques is that the devil is not in the details. The devil is in the hidden assumptions you make when you decide which trend to extrapolate and which numbers reliably encapsulate the problem you are researching.