What Economists Get Wrong About Science and Technology

May 17, 2012


Trying to quantify research's effects on the economy always fails.


To take another example of just how rudimentary economists’ understanding of the importance of technology to the economy is, look at a best-selling book that came out last year, The Great Stagnation, by prominent economist Tyler Cowen. The central argument of The Great Stagnation is that technological progress is slowing down, which will hurt economic growth.

But what is Cowen’s proxy for “technological progress”? He relies almost exclusively on a paper published in Technological Forecasting and Social Change by Jonathan Huebner, a physicist at the Naval Air Warfare Center. Huebner’s paper, “A Possible Declining Trend for Worldwide Innovation,” relied on a simple methodology. He scanned a book called The History of Science and Technology: A Browser's Guide to the Great Discoveries, Inventions, and the People Who Made Them From the Dawn of Time to Today, which lists more than 7,000 “discoveries” by the year in which they were made. Huebner divided the number of discoveries made in each decade since 1455 by the population of the world in that decade and made a graph. He then concluded that “worldwide innovation” might be declining.
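To see just how crude that arithmetic is, here is a small Python sketch of it. The decade tallies and population figures below are invented placeholders, not Huebner’s actual numbers; the point is only that the entire “innovation rate” boils down to one division per decade.

# A sketch of Huebner-style "innovation rate" arithmetic.
# All numbers are illustrative placeholders, NOT actual counts from
# The History of Science and Technology or real population estimates.

# discoveries tallied per decade (hypothetical values)
discoveries_per_decade = {
    1890: 120,
    1900: 135,
    1950: 300,
    2000: 280,
}

# world population at the start of each decade, in billions (hypothetical)
population_per_decade = {
    1890: 1.5,
    1900: 1.6,
    1950: 2.5,
    2000: 6.1,
}

# The whole "worldwide innovation" metric: discoveries per billion people.
innovation_rate = {
    decade: discoveries_per_decade[decade] / population_per_decade[decade]
    for decade in discoveries_per_decade
}

for decade, rate in sorted(innovation_rate.items()):
    print(f"{decade}s: {rate:.1f} discoveries per billion people")

A plot of ratios like these is the entire empirical basis for the “declining trend.”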

So Cowen’s book, “the most debated nonfiction book so far this year,” as David Brooks wrote a month after it came out, ends up being based on a cursory tallying of discoveries from a high-school-level reference book. But one “discovery” is not interchangeable with another. How do you compare the introduction of bifocal eyeglasses to the “invention of the Earth Simulator, a Japanese supercomputer,” or Gutenberg’s moveable type to the polymerase chain reaction (all real examples from the book)? If you’re Huebner, or Cowen citing him, you say each counts as “one discovery.”


In his Nobel acceptance speech, Solow warned against such oversimplification, lamenting that economists are “trying too hard, pushing too far, asking ever more refined questions of limited data, over-fitting our models and over-interpreting the results.” But then he conceded that such overinterpretation is “probably inevitable and not especially to be regretted.”

He is right about the likely inevitability of this push. But his complacency (“not especially to be regretted”) comes from a comfortable lectern in Sweden. I can’t quantify for you the damage that such overzealousness causes without falling into the very trap I’m pointing out. It’s reasonable to ask the questions economists ask. But when they claim to have measured things that they haven’t—the importance of technology to the economy, the “rate of technological change,” or the bang per federal research buck—and policymakers believe them, it leads to bad policy.

Take the case of STAR METRICS, an effort by the Obama administration to “document the outcomes of science investments to the public.” STAR METRICS uses measures like patent counts and how often a scientific paper has been cited to gauge “the impact of federal science investment on scientific knowledge.” It’s easy to count how many times a scientific paper has been cited. But make that count a matter of major bureaucratic importance, and you’ll get researchers citing things for the sake of citing them.
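For a sense of how reductive this kind of metric is, here is a hypothetical sketch of the underlying calculation. The grant amounts and citation counts are invented, and STAR METRICS’ actual methodology is more elaborate, but the basic move, dividing an output count by a dollar figure, is the same.

# A hypothetical citations-per-dollar metric, of the kind a program
# like STAR METRICS gestures toward. All figures are invented.

grants = [
    # (grant id, federal dollars awarded, citations to resulting papers)
    ("grant-001", 2_000_000, 450),
    ("grant-002", 500_000, 90),
    ("grant-003", 1_200_000, 15),
]

for grant_id, dollars, citations in grants:
    # "Impact" reduced to a single ratio: citations per million dollars.
    impact = citations / (dollars / 1_000_000)
    print(f"{grant_id}: {impact:.1f} citations per $1 million")

The number is easy to compute; whether it measures “the impact of federal science investment on scientific knowledge” is exactly what’s in question.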

The notion that something is unquantifiable is alien to the mindset of the modern economist. Tell them it’s not quantifiable, and they will hear that it has not been quantified yet. This mistake matters, because economists and their business-school colleagues are very influential in the formulation of public policy. If economists rather than biologists decide what is good biology through supposedly quantitative, objective evaluations, you get worse biology. If economists decide what makes good physics, or good chemistry, you get worse physics, and worse chemistry. If you believe, as I do, that scientific progress, broadly speaking, is good for society, then making economists the arbiters of what is and isn’t useful ends up hurting everybody.

The greatest irony is that economists are ultimately no different from any other academics. Their argument for saying that we should give more money to study the economics of science is pretty much: “Trust us, what we’re doing is useful.” This is precisely the same argument with which they ding the physicists and biologists.

So really, what we need is to study the economics of the economics of science. Of course to know just how useful this is, we’ll need an economics of the economics of the economics of science. Where does it end?

Konstantin Kakaes is a Schwartz fellow at the New America Foundation and the author of the e-book The Pioneer Detectives: Did a Distant Spacecraft Prove Einstein and Newton Wrong?
