Hey, Wait A Minute

Damn the Slam PAM Plan!

Canceling the Pentagon’s futures market is cowardly and dumb.

If you go to the Policy Analysis Market, or PAM, Web site, you’ll find nothing but a blank page. You don’t even get one of those “This page cannot be found” screens. Thanks to the publicity firestorm started by Sens. Byron Dorgan and Ron Wyden on Monday, the Defense Department’s plan to run an experimental futures market to forecast conditions in the Middle East is dead. And we’re all worse off as a result.

That may be hard to believe, given the unanimity with which PAM has been denounced. In that Monday press conference, Wyden and Dorgan called the plan “harebrained,” “offensive,” and “useless.” The press generally followed their lead, acting astonished that anyone could ever have imagined that such a ridiculous scheme would work. But for all the grandstanding and moral posturing, the most important question has been absent from the discussion: Would the market have worked? In other words, would it have improved American intelligence capabilities and enhanced national security?

All the evidence suggests that it would have. As Daniel Gross and Brendan Koerner mentioned in their Slate pieces yesterday, similar markets have proven surprisingly good at predicting the outcome of presidential elections, box-office results, and even the fall of Saddam Hussein. We now have more than a decade of empirical results to back up the idea that “decision markets” can work, in addition to the reams of data on the efficacy of traditional futures markets, such as those for corn or interest rates. (There’s evidence, for instance, that orange-juice futures do a better job of predicting the weather in Florida than traditional weather forecasts do.)

Even when individual traders are not experts, their collective judgment is often remarkably accurate, because markets are efficient at uncovering and aggregating diverse pieces of information. And it doesn’t seem to matter much what markets are being used to predict. Whether the outcome depends on irrational actors (box-office results), animal behavior (horse races), a blend of irrational and rational motives (elections), or a seemingly random interaction between weather and soil (orange-juice crops), market predictions often outperform those of even the best-informed expert. Given that, it’s reasonable to think a prediction market might add something to our understanding of the future of the Middle East.
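To make that aggregation point concrete, here is a toy sketch in Python. It is not PAM’s actual design or any real market mechanism; it simply simulates many traders who each hold an independently noisy estimate of an unknown probability and treats the average estimate as a stand-in for a market price. The numbers are invented, but the pattern is the one researchers keep finding: the aggregate lands far closer to the truth than the typical individual guess.

```python
# Toy illustration of information aggregation, not a model of PAM itself.
import random
import statistics

random.seed(42)

TRUE_PROBABILITY = 0.30   # the unknown quantity traders are trying to guess
NUM_TRADERS = 500

# Each trader holds a private, noisy estimate of the true probability,
# clamped to the valid range [0, 1].
estimates = [
    min(1.0, max(0.0, TRUE_PROBABILITY + random.gauss(0, 0.15)))
    for _ in range(NUM_TRADERS)
]

# Treat the average estimate as a stand-in for the market price.
market_price = statistics.mean(estimates)
avg_individual_error = statistics.mean(
    abs(e - TRUE_PROBABILITY) for e in estimates
)

print(f"market price:         {market_price:.3f}")
print(f"market error:         {abs(market_price - TRUE_PROBABILITY):.3f}")
print(f"avg individual error: {avg_individual_error:.3f}")
```

Run it and the market error comes out a small fraction of the average individual error; that gap is the whole argument for letting a crowd of informed guessers set a price.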

PAM might also have been effective because traders in a market have no incentive other than making the right prediction; bureaucratic and political pressures don’t shape their decisions, so a market avoids many of the hurdles that limit the flow of information within organizations. That’s especially important for the intelligence community. We know, for example, that plenty of valuable and relevant information was available before the 9/11 attacks. What was missing was a mechanism for aggregating that information in a single place. A well-designed market might have served as that mechanism.

Sen. Wyden dismissed PAM as a “fairy tale” and suggested that DARPA would be better off putting its money into “real world” intelligence. But the dichotomy here is a false one. No one has suggested replacing traditional intelligence-gathering with a market. PAM was intended to be simply another way of collecting information. And in any case, all the information that traders would be trading on would presumably be from the “real world.” Otherwise it’d be hard to see how they could make accurate bets.

Of course, the real attack on PAM had nothing to do with how effective it would or would not be. The real problem with it, Wyden and Dorgan made clear, was that it was “offensive” and “morally wrong” to wager on potential catastrophes. Let’s admit there’s something ghoulish about betting on an assassination attempt. But let’s also admit that U.S. government analysts ask themselves every day the exact same questions that PAM traders would have been asking: How stable is the government of Jordan? How likely is it the House of Saud will fall? Will Mahmoud Abbas still be head of the P.A. in 2004? How many more casualties will the United States take in Iraq? If it isn’t immoral for the U.S. government to be asking these questions, it’s hard to see how it’s immoral for people outside the U.S. government to ask them. Especially since the point of having traders ask the questions was to gather information to prevent catastrophes from happening.

Perhaps what’s immoral, though, is that PAM would allow people to make money from predicting catastrophe. But CIA analysts don’t volunteer their services. We pay them to predict catastrophes. Is that morally wrong? We also pay informants—like the guy who turned in Odai and Qusai—for valuable information. Again, are we wrong to do so?

Or consider our regular economy. The entire business of a life-insurance company is based on betting on when people are going to die (with a traditional life-insurance policy, the company is betting you’ll die later than you think you will, while with an annuity it’s betting you’ll die sooner). There may be something viscerally unappealing about this, but most of us understand that it’s necessary. This is, in some sense, what markets often do: harness amorality to improve the collective good. If the price of getting better intelligence is having our sensibilities bruised, we should be willing to pay it. Wyden suggested that getting involved in an “academic discourse” about whether prediction markets were accurate was missing the point because it was just “morally wrong” to use them. But surely if PAM would have made America’s national security stronger, it’d be morally wrong not to use it.

There were, to be sure, problems that PAM would have had to overcome. As Daniel Gross pointed out yesterday, if the market were accurate, and the Defense Department acted on its predictions to stop, say, the assassination of the king of Jordan, that intervention would make the traders’ predictions false and thereby destroy the incentive to make good predictions. A well-designed market would probably have to account for such U.S. interventions, presumably by making the wagers conditional on U.S. action (or, alternatively, traders would start to factor the possibility of U.S. action into their prices). But of course this would be a problem only if the market were in fact making good predictions. Had PAM ever become a fully liquid market, it would probably also have had the same problems other markets sometimes have, such as bubbles and gaming. But you don’t have to believe that markets work perfectly to believe that they work well.
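To see what a “conditional” wager might look like, here is a minimal sketch. The contract terms, prices, and payouts are invented for illustration and are not drawn from PAM’s actual rules: the bet pays off only if the event occurs and the United States hasn’t intervened; if the United States does intervene, the stake is simply refunded, so acting on the market’s warning doesn’t punish the traders who sounded it.

```python
# Toy sketch of a conditional prediction-market contract; all terms invented.
from dataclasses import dataclass

@dataclass
class ConditionalContract:
    event: str      # e.g., "hypothetical event X occurs by the end of 2004"
    price: float    # dollars paid per contract
    payout: float   # dollars received if the prediction comes true

    def settle(self, event_occurred: bool, us_intervened: bool) -> float:
        """Return the trader's net result on one contract."""
        if us_intervened:
            return 0.0                       # condition violated: stake refunded
        if event_occurred:
            return self.payout - self.price  # correct prediction: profit
        return -self.price                   # wrong prediction: lose the stake

contract = ConditionalContract(event="hypothetical event", price=0.30, payout=1.00)
print(f"{contract.settle(event_occurred=True, us_intervened=False):+.2f}")   # +0.70
print(f"{contract.settle(event_occurred=False, us_intervened=False):+.2f}")  # -0.30
print(f"{contract.settle(event_occurred=True, us_intervened=True):+.2f}")    # +0.00
```

The design choice is the point: by refunding the stake when the government steps in, the market rewards accurate warnings rather than successful catastrophes.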

As for the much-bruited idea that PAM would have allowed terrorists to bet on themselves, thereby letting them profit from their own misdeeds, this was a pure red herring. A terrorist betting on his own impending action would, in effect, be informing on himself. This seems unlikely at best. But, if it did happen, it would be a good thing, since intelligence agencies generally welcome informants. Would we prefer it if the would-be terrorist kept the knowledge of his future actions secret?

In any case, for all the hype about “terrorism futures,” the vast majority of the “wagers” that PAM traders would have been making would have been on more mundane questions, like the future economic growth of Jordan or how strong Syria’s military was. At its core, PAM was not meant to tell us what Hamas was going to do next week. It was meant to give us a better sense of the economic health, civil stability, and military readiness of Middle Eastern nations, with an eye on what that might mean for U.S. interests in the region. That seems like something about which the aggregated judgment of policy analysts and would-be Middle Eastern experts (the kind of people who would likely have been trading on PAM) would have had something valuable to say. Wyden and Dorgan scornfully compared the Policy Analysis Market to a “betting parlor.” It’s a telling (and troubling) comparison, because we know one thing about betting markets: They’re eerily good at predicting the future.