In a field still deeply shaped by arcane traditions and turf wars, when it comes to assessing what actually works — and which tidbits of information make it into the president’s daily brief — politics and power struggles among the 17 different American intelligence agencies are just as likely as security concerns to rule the day.
What if the intelligence community started to apply the emerging tools of social science to its work? What if it began testing and refining its predictions to determine which of its techniques yield useful information, and which should be discarded? Director of National Intelligence James R. Clapper, a retired Air Force general, has begun to invite this kind of thinking from the heart of the leviathan. He has asked outside experts to assess the intelligence community’s methods; at the same time, the government has begun directing some of its prodigious intelligence budget to academic research to explore pie-in-the-sky approaches to forecasting. All this effort is intended to transform America’s massive data-collection effort into much more accurate analysis and predictions.
“We still don’t really know what works and what doesn’t work,” said Baruch Fischhoff, a behavioral scientist at Carnegie Mellon University. “We say, put it to the test. The stakes are so high, how can you afford not to structure yourself for learning?”... Fischhoff and a who’s who of social scientists from psychology, business, and policy departments hope to foment a similar revolution in the intelligence world. Their most radical suggestion could have far-reaching effects and is already being slowly implemented: systematically judge the success rates of analyst predictions, and figure out which approaches actually work. Is intuition more useful than computer modeling? Is game theory better for some situations, and on-the-ground social analysis more accurate elsewhere?
Fischhoff envisions intelligence agencies, in real time, assigning teams with very different approaches to separately analyze real world situations, like the current state of play in Syria and the wider Arab world. Over the course of the next couple of years, researchers would track the success of different approaches to see which methods work best.
That remains only a proposal so far, but the Intelligence Advanced Research Projects Activity, or IARPA — a two-year-old agency that funds experimental ideas — is already trying a novel way to generate imaginative new steps to make predictions better. It is funding an unusual contest among academic researchers, a forecasting competition that will pit five teams using different methods of prediction against one another.
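How would such a contest be judged? One standard tool is a proper scoring rule like the Brier score-- the mean squared gap between the probabilities a forecaster assigns and what actually happens. Here's a minimal sketch, with invented forecasts and outcomes:

```python
def brier_score(forecasts, outcomes):
    """Mean squared gap between forecast probabilities and outcomes
    (1 = the event happened, 0 = it didn't). 0.0 is a perfect record;
    always saying 50% scores 0.25."""
    pairs = list(zip(forecasts, outcomes))
    return sum((f - o) ** 2 for f, o in pairs) / len(pairs)

# Invented track records for two approaches judging the same five events.
happened = [1, 0, 1, 1, 0]
modelers = [0.8, 0.3, 0.6, 0.9, 0.2]
intuiters = [0.6, 0.5, 0.5, 0.7, 0.4]
print(brier_score(modelers, happened))    # 0.068 -- lower is better
print(brier_score(intuiters, happened))   # 0.182
```

Run over a few years of real predictions, numbers like these are what would let outside experts say which approaches actually earn their keep.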
Of course, one can argue-- and indeed many of my fellow futurists will argue-- about what constitutes a "working" forecast, and some may go so far as to claim that even a completely wrong forecast can be useful under the right circumstances.
CBS MoneyWatch blogger Larry Swedroe glosses David Freedman's Wrong and Philip Tetlock's Expert Political Judgment to explain why expert advice goes wrong, and why experts have incentives to oversell their certainty and expertise:
Most of us want certainty, even when we know, logically, that it doesn’t exist. With investing, it’s a desire to believe that there’s someone who can protect us from bear markets and the devastating losses that can result. However, we’ve seen on numerous occasions that experts simply aren’t that expert. Of course, the next question is: “Why?”
Chris Mooney has a good article in Mother Jones on "the science of why we don't believe science"-- why and how we manage to believe sometimes-outlandish things in the face of contrary evidence.
[A]n array of new discoveries in psychology and neuroscience has further demonstrated how our preexisting beliefs, far more than any new facts, can skew our thoughts and even color what we consider our most dispassionate and logical conclusions. This tendency toward so-called "motivated reasoning" helps explain why we find groups so polarized over matters where the evidence is so unequivocal: climate change, vaccines, "death panels," the birthplace and religion of the president, and much else. It would seem that expecting people to be convinced by the facts flies in the face of, you know, the facts.
The theory of motivated reasoning builds on a key insight of modern neuroscience: Reasoning is actually suffused with emotion (or what researchers often call "affect"). Not only are the two inseparable, but our positive or negative feelings about people, things, and ideas arise much more rapidly than our conscious thoughts, in a matter of milliseconds—fast enough to detect with an EEG device, but long before we're aware of it. That shouldn't be surprising: Evolution required us to react very quickly to stimuli in our environment. It's a "basic human survival skill," explains political scientist Arthur Lupia of the University of Michigan. We push threatening information away; we pull friendly information close....
We're not driven only by emotions, of course—we also reason, deliberate. But reasoning comes later, works slower—and even then, it doesn't take place in an emotional vacuum. Rather, our quick-fire emotions can set us on a course of thinking that's highly biased, especially on topics we care a great deal about.
As he puts it, "We apply fight-or-flight reflexes not only to predators, but to data itself."
It's well worth reading, and it makes me wonder how to square insights like these with claims about the value of brainstorming, which is supposed to be good because its speed and social character force you past the normal social and logical barriers that keep you from being creative.
I just heard that my "Futures 2.0" article has been recognized with an outstanding paper award:
Every year Emerald invites each journal’s Editorial Team to nominate what they believe has been that title’s Outstanding Paper and up to three Highly Commended Papers from the previous 12 months. Your paper has been included among these and I am pleased to inform you that your article entitled “Futures 2.0: rethinking the discipline” published in foresight has been chosen as an Outstanding Paper Award Winner at the Literati Network Awards for Excellence 2011.
The award winning papers are chosen following consultation amongst the journal’s Editorial Team, many of whom are eminent academics or managers. Your paper has been selected as it was one of the most impressive pieces of work the team has seen throughout 2010.
Nice. Now back to work.
For the last few months I've been interested in applications of mindfulness to futures thinking, and in developing means of making visible the long-term implications of decisions we face in the present. Recently I came across an article on "Mindfulness and Sustainable Behavior" that suggests the value of mindfulness in sustainability. From the abstract:
Ecopsychologists have suggested that mindful awareness of our interdependence with nature may not only help us regain our lost, ecologically embedded identity (Roszak, 1992) but may also help us behave more sustainably, closing the documented gap between proenvironmental attitudes and behaviors. We suggest more specifically that, in contemporary consumer culture with its dearth of proenvironmental norms and cues, mindful attentiveness may be necessary to develop sustainable habits. To explore the connection between mindfulness and sustainable behavior, we measured 100 adults attending a Midwestern sustainability expo on two mindfulness factors: acting with awareness and observing sensations. As predicted, acting with awareness was significantly positively correlated with self-reported sustainable behavior. This finding is consistent with the idea that, until sustainable decisions become the societal default, their enactment may depend on focused consideration of options and mindful behavior. In contrast, observing sensations did not predict behavior. This calls into question the notion that feeling connected to the world outside of ourselves is a precondition for sustainable action. We call for more research to further test the validity and generalizability of our findings.
Elise L. Amel, Christie M. Manning, Britain A. Scott, "Mindfulness and Sustainable Behavior: Pondering Attention and Awareness as Means for Increasing Green Behavior," Ecopsychology 1:1 (March 2009), 14-25. doi:10.1089/eco.2008.0005.
Police departments have long been in the data game, with such efforts as CompStat. But there's a new twist: They're not just using statistics to assess the past. Now they're trying to predict the future. In November 2009, the National Institute of Justice held a symposium on "predictive policing," to figure out the best ways to use statistical data to predict micro-trends in crime. The Los Angeles Police Department then won a $3 million grant from the Justice Department to finance a trial run in predictive methodology. (The grant, like the rest of the 2011 federal budget, is pending congressional approval.) Other police departments are giving predictive policing a shot, too, from Santa Cruz, which recruited a Santa Clara University professor to help rejigger their patrol patterns, to Chicago, which has created a new "criminal forecasting unit" to predict crime before it happens....
Predictive policing is based on the idea that some crime is random—but a lot isn't. For example, home burglaries are relatively predictable. When a house gets robbed, the likelihood of that house or houses near it getting robbed again spikes in the following days. Most people expect the exact opposite, figuring that if lightning strikes once, it won't strike again. "This type of lightning does strike more than once," says [UCLA anthropology professor Jeffrey] Brantingham. Other crimes, like murder or rape, are harder to predict. They're more rare, for one thing, and the crime scene isn't always stationary, like a house. But they do tend to follow the same general pattern. If one gang member shoots another, for example, the likelihood of reprisal goes up....
Data-driven law enforcement shows that the criminal mind is not the dark, complex, and ultimately unknowable thing of Hollywood films. Instead, it's depressingly typical—driven by supply, demand, cost, and opportunity. "We have this perception that criminals are a breed apart, psychologically and behaviorally," says Brantingham. "That's not the case."
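The near-repeat pattern Brantingham describes is simple enough to caricature in code: treat each crime as temporarily raising the risk of crime nearby, with the boost decaying over days. A toy sketch-- the radius and decay constant here are invented parameters, not anything a real department uses:

```python
import math

def near_repeat_risk(spot, now, past_crimes, radius=0.5, tau_days=14.0):
    """Toy risk score for a location: each past crime within `radius`
    (arbitrary map units) contributes risk that decays exponentially
    with time constant `tau_days`."""
    risk = 0.0
    for x, y, day in past_crimes:
        if day <= now and math.hypot(spot[0] - x, spot[1] - y) <= radius:
            risk += math.exp(-(now - day) / tau_days)
    return risk

# Two burglaries on the same block last week make nearby houses riskier today.
crimes = [(1.0, 1.0, 0.0), (1.1, 0.9, 3.0)]          # (x, y, day reported)
print(near_repeat_risk((1.05, 1.0), now=7.0, past_crimes=crimes))
print(near_repeat_risk((5.0, 5.0), now=7.0, past_crimes=crimes))  # far away: 0.0
```

Patrol patterns get "rejiggered" by sending cars to wherever scores like these are highest today, rather than wherever crime was highest last year.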
While you can only read the first two paragraphs of my Scientific American cubesats article on their Web site, another article of mine that came out today, "Thinking Big: Large Media, Creativity, and Collaboration [pdf]," is available in its entirety. Here's the opening:
My subject is the relationship between space and media. I focus on the role space and media play in supporting collaborative work, and on the opportunities that emerging technologies present to reshape the space/media relationship in collaboration-intensive endeavors.
We normally treat spaces and media as different things, but our interaction with such communicative media as newspapers, paintings, books, and maps has an important embodied, physical dimension to it.
To understand these space and media interactions I examine how large-scale media, such as wall-sized maps and floor-to-ceiling whiteboards, support collaboration. I consider three examples of paper spaces: Buckminster Fuller's World Game, emergency tabletop exercises, and expert workshops conducted by futurists. I note that these schematic visualizations invite participation, annotation, and reinterpretation by users rather than passive consumption. I also highlight how physically navigating paper spaces supports the communication of what Sandy Pentland calls "honest signals," rapid negotiation, and thus the generation of common knowledge. Finally, I show how in the near future we will be able to design digital tools that better support collaboration.
Actually, the whole issue is very interesting:
We open our third volume of PJIM articles under the rubric of mapping writ large. Every article deals, in some manner, with knowledge extraction and the power and informativeness of visual context. The first article invokes a mapping exercise that exploits publicly-published content (Twitter, Flickr) that reveals social networks, activity, and trends via the paradigm of topography. This is followed by how gameplay is analyzed through the mapping of player data utilizing meta-interfaces: interfaces that analyze usability respecting multiple categories of play. Our third article considers the mapping of consumer feedback through "qualitative synthesis"; again, getting the big-picture through visually organizing methods. We conclude with content-aspects of scale and media with an in-depth review of how large surfaces, paper or otherwise, provide informational context that small screen devices cannot emulate; the treatise should be required reading for every interface designer.
Incredibly, that last sentence seems to be talking about me. The editors must have been desperately tired when they put this thing to bed!
Inspired by the Google Labs Ngram suggesting that we've reached peak future, I decided to map the term "unintended consequences," and for good measure "unanticipated consequences." I've been interested in the history of unintended consequences for a while (here's a PDF of an article I've written on the subject), and found references in Poole's and other newspaper and magazine databases going back to the mid-1800s, but hadn't done a similar search for books.
Here are the rather unsurprising results ("unintended consequence" is the line in red, "unanticipated consequence" in blue):
It's hard to see, but the line is pretty flat until World War II. From that point "unintended consequence" takes off, and "unanticipated consequence" rises more slowly but still substantially. You can see a bigger version here.
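For the curious, the chart boils down to counting phrase frequencies by year. Here's a rough sketch of the same search run over a local extract of the Google Books Ngram data-- the filename is a placeholder, and I'm assuming tab-separated rows of (ngram, year, match_count, volume_count), so check the format notes for whatever version you download:

```python
import csv
from collections import defaultdict

def counts_by_year(path, phrase):
    """Sum match_count per year for one ngram in a local TSV extract."""
    totals = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.reader(f, delimiter="\t"):
            ngram, year, match_count = row[0], row[1], row[2]
            if ngram.lower() == phrase:
                totals[int(year)] += int(match_count)
    return totals

# "googlebooks-eng-2gram.tsv" stands in for whichever shard you download.
red = counts_by_year("googlebooks-eng-2gram.tsv", "unintended consequence")
blue = counts_by_year("googlebooks-eng-2gram.tsv", "unanticipated consequence")
for year in sorted(set(red) | set(blue)):
    print(year, red.get(year, 0), blue.get(year, 0))
```

To reproduce the Ngram Viewer's y-axis you'd also divide each year's count by that year's total word count, which the dataset provides in a separate totals file.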
One day I'll have to take on self-fulfilling prophecy. (Actually, that one has a really curious trajectory.)
Future related (more like, future mentioning) books have taken giant steps back since the beginning of the millennium. According to the data, “future books” peaked around the year 2000. The latest data available, 2008, demonstrates that the level of future mentioning books is back to where it was in the 1970s era. Could it be that there was structural change after the tech-wreck bubble (2001 recession) or even slightly before that period in anticipation of the crash?
Boucher speculates that "our current technological prowess may distract us from the future," and that "technology is a detriment to forward-looking thinkers." My own suspicion is that the peak around 2000 is an artifact of Y2K, and that use of the term is not going to continue to slide but will stabilize before too long.
From the New York Times, this piece about using analysis of unstructured data in automated trading:
Math-loving traders are using powerful computers to speed-read news reports, editorials, company Web sites, blog posts and even Twitter messages — and then letting the machines decide what it all means for the markets.
The development goes far beyond standard digital fare like most-read and e-mailed lists. In some cases, the computers are actually parsing writers' words, sentence structure, even the odd emoticon. A wink and a smile — ;) — for instance, just might mean things are looking up for the markets. Then, often without human intervention, the programs are interpreting that news and trading on it.
Given the volatility in the markets and concern that computerized trading exaggerates the ups and downs, the notion that Wall Street is engineering news-bots might sound like an investor's nightmare....
Many of the robo-readers look beyond the numbers and try to analyze market sentiment, that intuitive feeling investors have about the markets. Like the latest economic figures, news and social media buzz — "unstructured data," as it is known — can shift the mood from exuberance to despondency.
Tech-savvy traders have been scraping data out of news reports, press releases and corporate Web sites for years. But new, linguistics-based software goes well beyond that. News agencies like Bloomberg, Dow Jones and Thomson Reuters have adopted the idea, offering services that supposedly help their Wall Street customers sift through news automatically.
Some of these programs hardly seem like rocket science. Working with academics at Columbia University and the University of Notre Dame, Dow Jones compiled a dictionary of about 3,700 words that can signal changes in sentiment. Feel-good words include obvious ones like "ingenuity," "strength" and "winner." Feel-bad ones include "litigious," "colludes" and "risk."
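Indeed, the core of the dictionary approach fits in a few lines. A bare-bones sketch, using only the six example words from the article in place of the real 3,700-word dictionary:

```python
FEEL_GOOD = {"ingenuity", "strength", "winner"}
FEEL_BAD = {"litigious", "colludes", "risk"}

def sentiment_score(text):
    """Crude net sentiment: +1 per feel-good word, -1 per feel-bad word,
    normalized by total word count."""
    words = [w.strip('.,;:!?()"').lower() for w in text.split()]
    score = sum(w in FEEL_GOOD for w in words) - sum(w in FEEL_BAD for w in words)
    return score / max(len(words), 1)

print(sentiment_score("The firm's ingenuity and strength make it a winner."))
print(sentiment_score("A litigious rival raises the risk of losses."))
```

The hard part, presumably, isn't the counting; it's compiling a dictionary whose words actually predict market moves, which is what the Columbia and Notre Dame academics were for.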
One of the truisms about futures is that insights can come from all kinds of unusual places and unexpected corners of the world. This morning I ran across an illustration of this principle in blog form: an article about a set of 1931 predictions about 2011, via Abnormal Use: An Unreasonably Dangerous Products Liability Blog. Because of course when you think "the history of futures," the next thing that comes to mind is "products liability blogs that interview people on the latest developments in torts."
But to the predictions:
1931 was a long time ago, and few who live today can claim to remember it all too well.... It was a far different time culturally, socially, politically. The issue: What did the great minds of 1931 predict the rapidly approaching 2011 would be like?
There is actually an answer to that question.
Way back on September 13, 1931, The New York Times, founded in 1851, decided to celebrate its 80th anniversary by asking a few of the day's visionaries about their predictions of 2011 - 80 years in their future. Those assembled were big names for 1931: physician and Mayo Clinic co-founder W. J. Mayo, famed industrialist Henry Ford, anatomist and anthropologist Arthur Keith, physicist and Nobel laureate Arthur Compton, chemist Willis R. Whitney, physicist and Nobel laureate Robert Millikan, physicist and chemist Michael Pupin, and sociologist William F. Ogburn.
The most interesting piece, to my mind, is Ogburn's. Of course he got some stuff wrong, but the broad outlines of his vision were pretty spot on:
Technological progress, with its exponential law of increase, holds the key to the future. Labor displacement will proceed even to automatic factories. The magic of remote control will be commonplace. Humanity’s most versatile servant will be the electron tube. The communication and transportation inventions will smooth out regional differences and level us in some respects to uniformity. But the heterogeneity of material culture will mean specialists and languages that only specialists can understand. The countryside will be transformed by technology and farmers will be more like city folk. There will be fewer farmers, more wooded land with wild life. Personal property in mechanical conveniences will be greatly extended. Some of these will be needed to prop up the weak who will survive.
Inevitable technological progress and abundant natural resources yield a higher standard of living. Poverty will be eliminated and hunger as a driving force of revolution will not be a danger. Inequality of income and problems of social justice will remain. Crises of life will be met by insurance.
Not only are the big trends recognizable, but the specificities are interesting too: yes, there's no mention of the microchip, but it strikes me that "the electron tube" is the functional equivalent in his vision. It's also heartening because Ogburn (here's a pretty good biography) was noted for his work at Columbia on social trends, and argued for the growing importance of technology as a driver of human affairs and the future (obviously). He was elected first president of the Society for the History of Technology, but died before he could take office.
illustration from William F. Ogburn, You and Machines, via flickr
Some of his work was controversial-- his 1934 pamphlet You and Machines was banned on the grounds that it was too left-wing-- but the rest of his work was more mainstream, and as Rudy Volti argues (in a recent Technology and Culture article available behind the Project MUSE firewall), it deals with issues that have been at the center of the history of technology and STS:
Ogburn's seminal work on technology was Social Change with Respect to Cultural and Original Nature... [which] introduces the concept that has been his greatest sociological legacy: cultural lag. As he explains: "The thesis is that the various parts of modern culture are not changing at the same rate, some parts are changing much more rapidly than others; and that since there is a correlation and interdependence of parts, a rapid change in one part of our culture requires readjustments through other changes in the various correlated parts of culture."
A 1950 edition of the book more explicitly lays out his theory of the "role of an advancing material culture in bringing about social change," and breaks it down into four parts:
"Invention" is still given top billing, complemented by "accumulation" (the store of past inventions, which expands at an exponential rate) and the diffusion of inventions from other cultures. The fourth element of social change is "adjustment," the process through which lagging cultural elements catch up with the changes driven by invention, accumulation, and diffusion.
You can see Ogburn's model in his 1931 New York Times piece.
University of Wisconsin history professor Alfred McCoy is blogging about a project he and an international team of scholars have just completed, a series of scenarios on "the end of the American century." This is part of a larger project titled "U.S. Empire Project: Rise & Decline of American Global Power," which seems to be keeping alive Madison's rich tradition of radical scholarship.
It's not clear from the description of the project what kinds of methods they used to craft the four scenarios (or how they were chosen, etc.), but I hope to learn more about the project soon. From McCoy's post:
As a half-dozen European nations have discovered, imperial decline tends to have a remarkably demoralizing impact on a society, regularly bringing at least a generation of economic privation. As the economy cools, political temperatures rise, often sparking serious domestic unrest.
Available economic, educational, and military data indicate that, when it comes to U.S. global power, negative trends will aggregate rapidly by 2020 and are likely to reach a critical mass no later than 2030. The American Century, proclaimed so triumphantly at the start of World War II, will be tattered and fading by 2025, its eighth decade, and could be history by 2030....
Viewed historically, the question is not whether the United States will lose its unchallenged global power, but just how precipitous and wrenching the decline will be. In place of Washington's wishful thinking, let’s use the National Intelligence Council's own futuristic methodology to suggest four realistic scenarios for how, whether with a bang or a whimper, U.S. global power could reach its end in the 2020s (along with four accompanying assessments of just where we are today). The future scenarios include: economic decline, oil shock, military misadventure, and World War III. While these are hardly the only possibilities when it comes to American decline or even collapse, they offer a window into an onrushing future.
My former colleague Jess interrupted my day with a link to the marvelous, weird International Necronautical Society's "Declaration on the Notion of 'The Future'." Just imagine a collective biography of Gropius, Proust, Marinetti and Turing, run through the Fuck Yeah Menswear literary filter, and you'd be there. Or check it out for yourself.
The International Necronautical Society now entering its eleventh year, the First Committee has recently come under pressure to release, in keeping with the INS’s avant-garde demeanor, some kind of “statement” both assessing the organization’s achievements and prognosticating for its future. Both these impulses we reject.
As for the first: What would it mean to speak “of” the INS’s first ten years? To speak above them, overdub?...
[T]he concepts, presumptions, and ideologies embedded in this overstuffed and lazy meme—“The Future”—are in need of an urgent and vigorous demolition.
Oddly, I kind of agree with that last bit. The rest of it probably is best read by randomly selecting a passage, then choosing something else at random, William Burroughs-like (these are cut and pasted from different parts of the essay, not one long passage):
Contemporary intellectual follies, part two: neuroscience. Or rather, the glib wholesale transferral of the logic of neuroscience to the realm of culture.
[W]e advance not onto new ground but over old ground in new ways: more consciously, with deeper, more nuanced understanding.
[O]ur current age—call it “modernity,” “late capitalism,” or the seventh phase of pre-thetan consciousness, according to your disposition—has to be understood through the lens of catastrophe.
[T]he INS rejects the idea of the future, which is always the ultimate trump card of dominant socioeconomic narratives of progress. As our Chief Philosopher Simon Critchley has recently argued, the neoliberal versions of capitalism and democracy present themselves as an inevitability, a destiny to whom the future belongs. We resist this ideology of the future, in the name of the sheer radical potentiality of the past, and of the way the past can shape the creative impulses and imaginative landscape of the present. The future of thinking is its past, a thinking which turns its back on the future.
And—here’s the genius of Crash—out of this landscape rises the event: the überaccident that fails to take place, that occurs precisely because it doesn’t happen. Vaughan’s ultimate goal is to die in a head-on collision with Elizabeth Taylor at the precise moment of orgasm. He spends months planning it, down to the last, minutest detail (working out at what time she’ll be passing such and such a spot, the approach angle his car must take toward hers, and so on). But, disastrously, he gets it wrong and misses her car by inches; subsequently, while Taylor stands alone, frozen in ambulance light, touching her gloved hand to her throat, he drowns in his own blood. Vaughan, who has been in thousands of car crashes, has met with his first accident.
When, in 2006, a range of writers, scientists, artists, architects, and misc. were asked to contribute a sentence each to Hans Ulrich Obrist’s reader on the Future, J. G. Ballard confined himself to four words: "The Future is boring."
[T]his Declaration... should... be repeated, modified, distorted, and disseminated as the reader sees fit.
I know I've personally sent copies to all fourteen people who are interested in the article, but my piece on social scanning (cleverly subtitled, in Shakespearean fashion, "or, Finally a Use for Twitter") is formally, officially published in Futures. It's part of a special issue on "Global Mindset Change."
Odds are that unless you're behind a university paywall, you can't actually get to the article, but here's a draft that lays out the argument reasonably well.
The full citation is "Social scanning: Improving futures through Web 2.0; or, finally a use for twitter," Futures v. 42 no. 10 (December 2010), pp. 1222-1230.
Alex Pollock at the American Enterprise Institute writes about the role of models in economic science (or what "would be" a science "if it weren’t for the people") and financial decision-making. He argues that the widespread use of models tends to lead to their obsolescence:
Perversely, the more everyone believes the model, and the more everyone uses the same model, the more likely it is to induce changes in the market that make it cease to work.
In this cycle, the market and the regulators became enamored of the statistical treatments of risk, whereas the most important issue is always the human sources of risk. These human sources include short memories and the inclination to convince ourselves that we are experiencing "innovation" and "creativity," when all that is happening is a lowering of credit standards by new names.
As I understand his argument, there are a couple reasons for this. Some models-- ones that deal with very specific pieces of the future-- only work if they're obscure: if everyone "knows" that the price of magnesium is definitely going to rise, and everyone buys magnesium futures, the future price of magnesium changes. Models reinforce the belief that "this time it's different," and help people unlearn old, hard-won lessons. (As Pollock puts it elsewhere, one of the differences between science and finance is that scientists don't forget previous errors-- astronomers haven't gone back to geocentrism, and old ideas tend to die with old scientists-- while generational change in finance tends to wipe away wisdom, leaving only hubris and a belief in one's own youthful invincibility.) Models also tend to obscure the continued, lurking presence of uncertainty:
Because uncertainty is fundamental, sometimes disastrous mistakes will continue to be made by entrepreneurs, bankers, borrowers, central bankers, government agencies, politicians, and by the interaction of all of the above.
[Economist Frank] Knight wrote: "If the law of change is known, no [economic] profits can arise." Likewise: "If the law of change is known, no financial crises can arise." But in economics and finance, the law of change is never known. So change reflecting uncertainty goes on, bringing booms and busts periodically, and Adam Smith’s "progress of opulence" on the trend.
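Pollock's first mechanism-- the forecast that destroys itself by being believed-- is easy to caricature in code. In this toy sketch (all the numbers are made up), the more traders act on the same "magnesium will rise" forecast, the more of the predicted rise gets pulled into today's price, leaving less for the model to "predict":

```python
def price_path(adoption, base=100.0, forecast_rise=0.05, horizon=10,
               impact=0.4):
    """Toy reflexive market: a fraction `adoption` of traders believes a
    forecast of `forecast_rise` per period and buys immediately. Their
    buying marks the price up today (the `impact` term) and shrinks the
    rise left to be realized later."""
    today = base * (1 + impact * adoption * forecast_rise * horizon)
    remaining_rise = forecast_rise * (1 - impact * adoption)
    prices = [today]
    for _ in range(horizon):
        prices.append(prices[-1] * (1 + remaining_rise))
    return prices

for adoption in (0.0, 0.5, 1.0):
    path = price_path(adoption)
    realized = path[-1] / path[0] - 1
    print(f"adoption={adoption:.0%}  start={path[0]:.1f}  "
          f"later return={realized:.1%}")  # the edge decays as the model spreads
```

The model is "right" only while it is obscure; once everyone trades on it, the returns it promised to late adopters have already been spent.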
Have economists tried to measure the impact of the popularity of models on markets? The Knight quote comes from his 1921 book Risk, Uncertainty, and Profit, and I have to assume that economists have tried to measure (or model, if you will) how widespread use of, say, a statistical model affects markets and either increases or decreases the reliability of that model. It seems to me that this would be one of those things that people would have tried to study, but I don't know enough about the field to know.
As I've mentioned a couple times, over the last couple years I've lost about fifty pounds, and am in the best physical condition of my entire life. For someone who grew up as a fat kid and fluctuated between being kind of overweight and really needing to take some serious weight off, and who had a stereotypical academic's contempt for all things seriously athletic, this is no small feat.
Of course, for me it was both a physical endeavor, and an extremely cerebral one: in order to get past the various things that had kept me from losing weight in the past, it was necessary for me to read a lot about nutrition and dieting, dive into the literature on obesity and satiety, and think about how what I'd learned from behavioral economics could be applied to weight loss.
At a certain point, I realized that the challenge of losing weight was a classic futures problem: complex, uncertain, requiring all kinds of near-term tradeoffs for long-term benefits, and hard to sustain. So could what I learned as a futurist help me lose weight? And could the experience of losing weight teach me anything about dealing with futures-related problems?
I think the answer to both is yes, and I've laid out my answers in an article that I just sent in to one of those frighteningly efficient online editorial systems. We'll see if the piece is accepted-- it may be too first-person to qualify as serious research-- but in the meantime I've put a copy of the draft online, and it's available as a PDF. The introduction is in the extended post.
Naturally, comments are welcome.
On Tuesday, 42% of registered voters took time out of their day to travel to their assigned polling location, wait in line, exchange niceties with a grumpy volunteer, and fill in some bubbles with a Sharpie. What did they receive in return? A sticker and a 0% chance of changing the results of the election.
Political scientists have tried to calculate the probability that one vote will make a difference in a Presidential election. They estimate that the chances are roughly 1 in 10 million to 1 in 100 million, depending on your state. This does not give an individual much incentive to vote. In a YouGov survey, we asked respondents to estimate the same probability. “If you vote in 2012, what are the chances that your vote will determine the winner of the Presidential election?” Some of the responses are illuminating.
Not surprisingly, Americans vastly overestimate the chances that their vote will make a difference. Our median respondent felt that there is a 1 in 1000 chance that their vote could change the outcome of a Presidential election, missing the true chance by a factor of 10,000. However, this dramatic overestimation does not explain the prevalence of turnout, because those who actually vote know that this probability is low. Over 40% of regular voters know that the chances of a pivotal vote are less than 1 in a million. Amazingly, turnout is negatively correlated with the perceived chances that one vote will make a difference—meaning the less likely you are to think your vote will actually matter, the more likely you are to vote [emphasis added].
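To make the scale of that overestimate concrete, here's the arithmetic from the quoted numbers:

```python
true_chance = 1 / 10_000_000   # the generous end of the 1-in-10M-to-1-in-100M range
median_guess = 1 / 1_000       # the median survey respondent's estimate
print(median_guess / true_chance)   # 10000.0 -- the "factor of 10,000"
```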
[T]he presence of expertise about the future may encourage people to be less engaged in shaping their own futures. A study of popular responses to climate change suggests that a higher degree of confidence in the reality of climate change and the reliability of climate science can promote passivity and a sense that experts will deal with the problem, rather than inspire people to change their lives (Kellstedt et al., 2008; Swim et al., 2009). In another remarkable study, Jan Engelmann and colleagues used fMRI to observe the brains of people who received expert advice during a financial simulation. They found that subjects thought differently about their decisions when they received expert advice – even bad advice – than when they worked on their own. As the researchers put it, "one effect of expert advice is to ‘offload’ the calculation of value of decision options from the individual’s brain" (Engelmann et al., 2009). Put another way, "the advice made the brain switch off (at least to a great extent) processes required for financial decision-making" (Nir, 2009). In an era in which ordinary people play a bigger role in shaping the future, the prospect of an inverse relationship between how much confidence they place in expert opinion about complex problems, and how responsible they feel for acting to solve it, presents a substantial conundrum for futurists.
Clearly just giving people information about the future, or about the choices before them, and assuming they'll then act in a rational (or even straightforward, self-interested) manner doesn't quite work. We like to think we're rational, and we like to think other people are rational; but it's not quite so. As the voting example shows, sometimes that's a good thing; more often, though, it's not, and we need to better deal with that fact.
A few years ago, I coined the term Nunberg Error, in honor of Geoffrey Nunberg and his observation about our tendency when forecasting to overestimate the impact of technological change while underestimating social change. It's time now to coin a new term, just in time for the avalanche of punditry around the midterms: the Tetlock Gambit.
Briefly, the Tetlock Gambit (named in honor of Philip Tetlock, author of the fantastic book Expert Political Judgment) is a kind of pundit's hedge: it's an outrageous prediction, made in the hope of a big payoff if it comes true, and with the knowledge that there'll be no penalty if it's false. So you can't be a true believer in, say, the idea that we'll use nanotechnology to rewire our brains, and forecast the same; you must make such a prediction self-consciously and cynically.
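The gambit's incentive structure is trivial to write down. A sketch of the expected-value arithmetic, with invented payoffs:

```python
def gambit_value(p_right, payoff_if_right, penalty_if_wrong=0.0):
    """Expected value of an outrageous prediction. With no penalty for
    being wrong, any nonzero chance of a big payoff makes it worth making."""
    return p_right * payoff_if_right - (1 - p_right) * penalty_if_wrong

print(gambit_value(0.05, 100.0))                          #  5.0: make the call
print(gambit_value(0.05, 100.0, penalty_if_wrong=10.0))   # -4.5: stay quiet
```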
The example that inspires all this? Penn professor Justin Wolfers:
The Democrats will retain control of the House and the Senate. And I’m the only person in D.C. insightful enough to make this brave forecast.
If I’m right? Well you can bet that I’ll beat the drums loudly and tell everyone in sight that I called it. I’ll blog it all week. I’ll write an op-ed explaining my insights. I’ll go on to Jon Stewart’s show to explain the fine art of psephology. Hopefully you’ll be calling me the Nouriel Roubini of political punditry. I’ll go on to a new life of lucrative speaking engagements and big book advances, while I beat back my coterie of devoted followers.
And if I’m wrong? We both know there won’t be any real consequences. I’ll be sure to sell some clever story. You know, there was weather on election day (hot or cold, wet or dry — it all works!) and this messed with turnout. Or perhaps, This Time Was Different, and my excellent forecast was knocked off course by our first black president, by rising cellphone penetration or a candidate who may not be a witch. I’ll remind you how I nailed previous elections. (Follow the links, you’ll see I’m doing it already!) I’ll bluster and use long words like sociotropic, or perhaps heteroskedastic. And I’ll remind you that my first name is Professor, and I went to a prestigious school. More to the point, if I’m wrong, I’m sure we’ll all have forgotten by the time the 2012 election rolls around. Shhhh… I won’t tell if you won’t.
As he confesses at the end of his prediction,
[Y]es, my forecast is more about the marketplace for punditry than it is about this election. I’m influenced strongly by my Penn colleague Philip Tetlock, who has spent decades pointing out just how bad expert political judgment is. Given these market failures, I would be a fool not to go for the gold.
It was inevitable that someone would read Tetlock as a manual for how to succeed as a pundit, rather than as a caution against trusting pundits, much as Michael Lewis' Liar's Poker was read by some college students as a how-to manual for success on Wall Street, not a caution against going into finance.
No wait, someone has already done it: I did, in my "Evil Futurists' Guide to World Domination."
This is one of the most brilliant pieces of political advertising I've ever seen, and a fabulously understated yet realistic view of the future. James Fallows is right.
As I understand it, he's also right that "if you know anything about the Chinese economy, the actual analytical content here is hilariously wrong."
The ad has the Chinese official saying that America collapsed because, in the midst of a recession, it relied on (a) government stimulus spending, (b) big changes in its health care systems, and (c) public intervention in major industries -- all of which, of course, have been crucial parts of China's (successful) anti-recession policy.
Still, as a piece of agitprop it's very smooth. Dangerously so. (As an especially smart friend of mine puts it, "It's like Firefly meets 60 Minutes.")
I write about people, technology, and the worlds they make.
I'm a senior consultant at Strategic Business Insights, a Menlo Park, CA consulting and research firm. I also have two academic appointments: I'm a visitor at the Peace Innovation Lab at Stanford University, and an Associate Fellow at Oxford University's Saïd Business School.
My book on contemplative computing, The Distraction Addiction, will be published by Little, Brown and Company in 2013. (It will also appear in Dutch, Russian, Spanish, Chinese and Korean in 2013 and 2014.)