The Undercover Economist

The Wisdom of Crowds?

A single economic forecast is usually wrong. But groups of economic forecasts are often just as mistaken. Why?

When people discover that I am an economist, they rarely ask me for my views on subjects that economists know a bit about—such as how to respond to climate change or how to pay less at the supermarket. Instead, they ask me what will happen to the economy.

Why is it that people won’t take “I don’t really know” for an answer? People often chuckle about the forecasting skills of economists, but after the snickers die down, they keep demanding more forecasts. Is there any reason to believe that economists can deliver?

One answer can be gleaned from previous forecasts. Back in 1995, economist and Financial Times columnist John Kay examined the record of 34 British forecasters from 1987 to 1994, and he concluded that they were birds of a feather. They tended to make similar forecasts, and then the economy disobligingly did something else, with economic growth usually falling outside the range of all 34 forecasters.

Perhaps forecasting technology has moved on since then, or perhaps the British economy is unusually unpredictable. To find out, I repeated John Kay's exercise with forecasts of economic growth for the United Kingdom, the United States, and the Eurozone over the years 2002–08, diligently collected at the end of each preceding year by Consensus Economics.

The results are an eerie echo of John Kay's: For 2004, for example, 20 out of 21 nongovernmental forecasts made in December 2003 were too pessimistic about economic growth in the United Kingdom. The Pollyannas of the U.K. Treasury were more optimistic than almost any commercial forecaster and closer to getting their forecast right. So, one might suspect that systematic pessimism is to blame.

But, no: in 2005, the economy grew more slowly than 19 out of 21 forecasters had expected at the end of the previous year. The Pollyannas of the U.K. Treasury were yet again more optimistic than anyone and thus more wrong than anyone. A year later, all but one of the forecasters were too pessimistic again. Yet at the end of 2001, three-quarters of the forecasters had been too optimistic about 2002.

The year 2003 is an interesting anomaly: the one year for which the average U.K. forecast turned out to be close to reality, but also the year in which the spread between the highest and lowest forecasts was widest. The rare occasion on which the forecasters couldn't agree happened to be the occasion on which they were (on average) right.

Recent U.S. forecasters have done a little better: The spread of forecasts is tighter, and the outcome sometimes falls within that spread. Still, five out of six were too pessimistic about 2003, almost everyone was too pessimistic about 2002, three-quarters were too optimistic about 2005, and nearly nine-tenths were too optimistic about 2006. Perversely, the best quantitative end-of-year forecasts were made in December 2006, even though the credit crunch materialized eight months later to the surprise of almost everybody.

In the Eurozone, forecasting over the past few years has been so wayward that it is kindest to say no more.

The new data seem to confirm Kay's original finding that economic forecasters all tend to be wrong in the same way. Their incentives to flock together are obvious enough: a forecaster who strays from the pack and is wrong looks a fool, while one who is wrong in good company risks little.

What is less clear is why the flight of the flock is so often thought to augur much—but then, some astrologers are also profitably employed.

The curious thing is that forecasters often have something useful to say, but it is rarely conveyed in the numerical forecast itself, on which so much attention is lavished. For instance, in December 2006, forecasters were warning of the risks of an oil price spike, a sharp rise in the cost of credit, and a dollar crash. The quantitative forecasts are usually wrong and not terribly helpful when right, but forecasters do say things worth hearing, if only you can work out when to listen.