Moneybox

Economists Use Shoddy Data

All too often, economists’ and policymakers’ theories are based on terrible statistics.

How can we get better data?

Economists are a smart bunch, and at the annual meeting of the American Economic Association in San Diego over the weekend, lots of impressive analytical chops were on display. Sometimes, though, the analysis badly overreaches the data. A well-attended panel discussion about whether economics is hopelessly riven by ideological disagreement, for example, proceeded even though the IGM Chicago poll it was based on isn’t even close to being a statistically valid survey. And the issue recurs even outside the realm of such “fun” discussions and infects much more sober research. No grand theory is going to be much better than the data it’s built on, and the underlying data quality is often distressingly poor.

A worse-attended, dramatically lower-charisma discussion took place the following day around a dully titled paper, “Cyclical Variation in Productivity Using the ATUS,” by Michael Burda, Daniel Hamermesh, and Jay Stewart.

The point of the paper is that theories of the macroeconomy often depend crucially on ideas about the direction of productivity in the labor force. And to know how productive workers are on a per-hour basis, you need to know how many hours people actually work. That, in turn, is a messy business. The data traditionally come from two sources. One, the Current Employment Statistics (CES) survey, asks employers and thus misses the self-employed and can’t tell how many people are working multiple jobs. For the vast majority of its history, the CES hours data also excluded supervisory workers. The other, the Current Population Survey (CPS), asks individuals and is thus more comprehensive. But asking people what they did last week often generates inaccurate answers.

A better solution is to use the American Time Use Survey (ATUS), which asks people to keep a real-time diary, ensuring that numbers add up to 24 hours in a day and reducing memory error.

ATUS data have been around for a while, but unlike the CES and CPS hours series, the ATUS hours-worked numbers aren’t really used for anything, despite being generally considered more accurate. What Burda, Hamermesh, and Stewart did was recalculate hourly productivity with ATUS hours worked as the denominator. The scale of the change is not enormous (the data series differ, but not all that much), but one interesting thing did happen. According to the standard data, productivity used to fall during recessions but has risen during recent ones. According to the ATUS data, productivity still falls during recessions. The authors disclaim any ambition to become macroeconomic theorists but did draw attention to a key point. The traditional view had been that during recessions employers engage in “labor hoarding,” hanging on to more workers than necessary in order to avoid wasteful firing and rehiring. More recently, hoarding seemed to have gone away; firms instead took recessions as opportunities to trim the fat, firing low-quality workers and raising average productivity. This apparent switch in firm behavior has been thought by many to relate to the rise of “jobless recoveries.” Except now it looks like the change may never have happened.
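The mechanics behind that recalculation are worth spelling out, because the whole dispute hinges on a denominator: measured labor productivity is just output divided by hours worked, so a different hours series can change not only the level but the apparent direction of productivity over the cycle. Here is a minimal sketch in Python with purely hypothetical numbers (nothing below comes from the paper’s actual data) showing how a few percentage points of disagreement between two hours series can flip the sign of measured productivity growth in a downturn:

```python
# Illustrative only: made-up output and hours index values, not real
# CPS or ATUS figures, to show how the choice of hours denominator matters.

def productivity_change(output, hours):
    """Percent change in output per hour from period 0 to period 1."""
    before = output[0] / hours[0]
    after = output[1] / hours[1]
    return (after / before - 1) * 100

# Hypothetical expansion-to-recession comparison (index values).
output = [100.0, 96.0]            # output falls 4 percent in the recession

hours_series_a = [100.0, 98.0]    # series A: reported hours fall only 2 percent
hours_series_b = [100.0, 95.0]    # series B: reported hours fall 5 percent

print(productivity_change(output, hours_series_a))  # about -2.0: productivity falls
print(productivity_change(output, hours_series_b))  # about +1.1: productivity appears to rise
```

Same output, two hours series a few points apart, and measured productivity falls in one case and rises in the other. That is the sense in which a dull measurement question ends up carrying a macroeconomic story.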

So a small, fussy point ends up undermining a big-picture argument. And it happens not just in academia but in policy circles as well. The Obama administration, for example, took office in January 2009 and began planning a recovery strategy based on the estimate that the economy had shrunk at an alarming 3.8 percent annualized rate. After several rounds of revisions, the correct figure seems to have been an 8.9 percent annual rate. That’s on a par with confusing a mild recession with a gigantic one.
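For a sense of what those annualized figures mean quarter to quarter, the usual convention is to compound a single quarter’s change over four quarters. The back-of-the-envelope conversion below (ordinary compounding arithmetic, not anything specific to the official methodology) suggests the gap amounts to roughly a 1 percent quarterly drop in the early estimate versus well over 2 percent after revision:

```python
# Convert an annualized quarterly growth rate back into the implied
# single-quarter change, using ordinary compounding arithmetic.

def quarterly_change(annualized_rate_pct):
    """Implied one-quarter percent change for a given annualized rate."""
    return ((1 + annualized_rate_pct / 100) ** 0.25 - 1) * 100

print(round(quarterly_change(-3.8), 2))  # roughly -0.96: about a 1 percent quarterly drop
print(round(quarterly_change(-8.9), 2))  # roughly -2.30: more than twice as deep
```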

Even the smartest theoretical or policy work is basically valueless if it’s based on flawed or misleading data. And yet in the hierarchy of the profession, actually wrangling data is a relatively low-status undertaking compared with building elaborate edifices on even the shakiest of foundations. Harry Truman famously wished out loud for a “one-handed economist” who would offer concrete advice with less hedging and trimming, but overconfidence built on bad data is actually the bigger problem. Absent laboratory conditions, it’s very difficult to obtain precise measurements of key figures, leaving us all too often with big theories based on poor numbers. And when theories move out of the ivory tower and into the policy realm, the problem gets worse as pressure for timeliness and relevance escalates.

To an extent, these are problems we’ll never overcome. But they underscore the importance of the unglamorous work of America’s government statistical agencies. Finding ways to obtain and use better data is ultimately key to developing better theories and better policies. The ATUS didn’t exist at all until 2003, which is why it hasn’t traditionally been used for productivity or other secondary statistics. But it appears to give not just slightly more precise estimates of how much people work but also a totally different picture of large-scale economic trends and the nature of the business cycle. Neither economists nor policymakers talk nearly enough about ways to further improve the data available. Dull as government statistics sound to most people, ultimately they’re the basis for choices that make huge differences to the lives of millions.