Future Tense

MBA buzzwords are hurting scientific progress.

How MBA-speak is hurting the scientific academy.

So we can easily agree that seeking a better understanding of the dynamics of the innovation ecosystem is desirable. The point I want to make is that misunderstanding those dynamics through fatuous quantification mechanisms is in fact damaging to the system itself. Talking about the “pace of technological change” is only the tip of the spear of MBA-speak that is stabbing the academy.

Think for a second about the atomic bomb. There’s a big, just gigantic, technological change. But when did it happen? We can argue that the “speed of change” was really slow until July 16, 1945, when the first atomic bomb exploded in New Mexico—the speed of change was super-fast on that particular day. This would be silly. So we have to average out the “speed of change” somehow. But over what timescale? So many industrial inputs (precision machining, computing and the like) and basic scientific insights (being able to calculate the likelihood that a neutron hitting an atomic nucleus will cause it to split in two) went into building the bomb that it’s unclear where to start.

The claim that some forms of knowledge are fundamentally resistant to quantification (memorably described as a “bitch-goddess” by Carl Bridenbaugh in this essay) is anathema to policymakers today, who’ve emerged from business schools and management consultancies convinced that Excel macros will let them give reality to the shadows on the walls of Plato’s cave.

Who wouldn’t be for, as you describe, a better understanding of the “complex context of science in its social settings”? What worries me is that when policymakers set out to do this, they don’t do it, as you suggest would be desirable, in a thoughtful way, but rather try to quantify things as much as possible, because numbers carry with them an aura of objectivity.

The Trinity Test: an atomic bomb exploding in New Mexico on July 16, 1945.

Photograph courtesy Los Alamos National Laboratory.

A better understanding of the social context of science is worthwhile from the point of view of simple inquiry—it’d be a good thing to know more about. But if we’re worried about public policy, I’m not sure it’s as central as it’s often made out to be.

The problem with setting out to assess science’s contributions to societal problems is that it’s exceedingly rare for science to be the limiting factor; usually, it’s the political system.

Francis Collins asserts that the “translational process itself [is] a scientific problem amenable to innovation.” This is wrong, I think. It’s a problem amenable to innovation, but it’s not a scientific one; for instance, see the challenges of translating existing climate-change knowledge into action. Hence your point on earthquake mitigation—if what we’re worried about is reducing people’s vulnerability to earthquakes, it’s not better seismology we need; it’s better zoning and urban planning. But unlike Brian Tucker, the seismologist you profiled in Nature a few years ago, who is “really interested in doing something to improve earthquake safety,” most seismologists just want to understand earthquakes better because of their own selfish curiosity. It’s a category error to conflate what NGOs should be doing (what Tucker, in fact, does) with what seismologists should be doing.

I don’t think that how much money we spend on the seismology-type stuff is the only variable, or even the most important one. Maybe we should be spending less money on scientific research, but also with less oversight. The pressure to demonstrate utility is what drives us down a path of short-term optimization, as in the case of mice and biomedical research. It’s exactly the attempt to link scientific advances with societal needs that drives us down this sort of garden path. And of course once you’re spending as much money on scientific research as the federal government does, there’s enormous pressure to demonstrate such a broader benefit. And so you get a lot of contortions, with every geneticist trying to claim their research will somehow help the “war on cancer.” (If I could start a war on “wars-on,” I would; it’s such a silly and counterproductive trope.)

Speaking of war, let’s turn to the Department of Defense. I think your idea that the DOD has been “incredibly effective” while NIH has failed to deliver on its promise is problematic. There’s plentiful waste on the DOD side. Have the hundreds of billions spent on missile defense made us any safer? They haven’t. Has the DOD’s research establishment even been well-positioned to make soldiers (or, to be proper, if disturbingly anodyne, warfighters) safer and more effective, let alone society at large? In many ways, no. The military’s procurement apparatus, in the famous formulation (I’m not sure whose), tends to fight the last war.

Again, we come to the issue that solving societal problems is often a nonscientific task. We have to consider both the vested interests who lobby our government to protect the status quo and legitimate concerns about allowing people the liberty to do what they want, even if we know it’s bad. We have a much better understanding today than 30 years ago of how cigarettes cause cancer. But we knew it then. Getting people to quit smoking, though, is a different story.

So I guess what I’m arguing is that just because science sometimes delivers societal benefits (as it indisputably does) doesn’t mean we should constantly be pressing for more of it. Does this mean that policy can’t, or shouldn’t, shape science? It doesn’t. The artemisinin example you give is interesting, but I’m not sure what the take-home message is. I’ll agree that drug development in the United States is a deeply flawed process. But this has less to do with how basic science is funded and more to do with runaway consolidation in the pharmaceutical industry and a flawed intellectual-property regime.

I couldn’t agree more that the dynamics of the ecosystem of innovation matter. I think that ecosystem is in many ways dysfunctional at present. But I worry that attempts to increase the benefits society gets will end up making things worse rather than better. At a certain level, the prescription is simple: Find the smartest people you can and get them to do good work that is, in a meaningful sense, relevant. But I don’t know that there’s a policy prescription that gets you there. Efforts to do so, like Collins’ translation institute, trip over themselves ideologically, to “avoid taking on any projects of immediate commercial interest.” Commercial interest and public interest, basic and applied science, are all mixed together. In today’s political climate, the contortions that the government must help private industry, but never compete with it, end up verging on the nonsensical. Will good work come out of Collins’ translational institute? Surely it will. But does the basic premise make much sense? Not to me.