In a field still deeply shaped by arcane traditions and turf wars, when it comes to assessing what actually works — and which tidbits of information make it into the president’s daily brief — politics and power struggles among the 17 different American intelligence agencies are just as likely as security concerns to rule the day.
What if the intelligence community started to apply the emerging tools of social science to its work? What if it began testing and refining its predictions to determine which of its techniques yield useful information, and which should be discarded? Director of National Intelligence James R. Clapper, a retired Air Force general, has begun to invite this kind of thinking from the heart of the leviathan. He has asked outside experts to assess the intelligence community’s methods; at the same time, the government has begun directing some of its prodigious intelligence budget to academic research to explore pie-in-the-sky approaches to forecasting. All this effort is intended to transform America’s massive data-collection effort into much more accurate analysis and predictions.
“We still don’t really know what works and what doesn’t work,” said Baruch Fischhoff, a behavioral scientist at Carnegie Mellon University. “We say, put it to the test. The stakes are so high, how can you afford not to structure yourself for learning?”

Fischhoff and a who’s who of social scientists from psychology, business, and policy departments hope to foment a similar revolution in the intelligence world. Their most radical suggestion could have far-reaching effects and is already being slowly implemented: systematically judge the success rates of analysts’ predictions, and figure out which approaches actually work. Is intuition more useful than computer modeling? Is game theory better suited to some situations, and on-the-ground social analysis more accurate elsewhere?
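What would "judging the success rates of predictions" look like concretely? One standard yardstick, used in academic forecasting research, is the Brier score: the mean squared gap between an analyst's stated probabilities and what actually happened. The sketch below uses entirely hypothetical analysts and events (none of these figures come from the intelligence community); it simply illustrates how such scoring rewards confident, correct calls over perpetual hedging.

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between predicted probabilities (0 to 1)
    and actual outcomes (1 if the event occurred, 0 if not).
    Lower scores indicate more accurate forecasting."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: two analysts forecast the same three events.
events_occurred = [1, 0, 1]
analyst_a = [0.9, 0.2, 0.7]   # confident and mostly right
analyst_b = [0.5, 0.5, 0.5]   # hedges everything at fifty-fifty

print(brier_score(analyst_a, events_occurred))  # lower score: rewarded
print(brier_score(analyst_b, events_occurred))  # the hedger scores worse
```

Tracked over hundreds of forecasts, a score like this lets an organization compare methods (intuition versus modeling, say) on evidence rather than reputation.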
Fischhoff envisions intelligence agencies, in real time, assigning teams with very different approaches to separately analyze real-world situations, like the current state of play in Syria and the wider Arab world. Over the next couple of years, researchers would track the success of the different approaches to see which methods work best.
That remains only a proposal so far, but the Intelligence Advanced Research Projects Activity, or IARPA — a two-year-old agency that funds experimental ideas — is already trying a novel way to generate imaginative new steps to make predictions better. It is funding an unusual contest among academic researchers, a forecasting competition that will pit five teams using different methods of prediction against one another.
Of course, one can argue (and indeed many of my fellow futurists will argue) about what constitutes a “working” forecast, and some may go so far as to claim that even a completely wrong forecast can be useful under the right circumstances.