The third project was wildly speculative. Capecchi was trying to show that it was possible to make a specific, targeted change to a gene in a mouse's DNA. It is hard to overstate how ambitious this was, especially back in 1980: A mouse's DNA contains as much information as 70 or 80 large encyclopedia volumes. Capecchi wanted to perform the equivalent of finding and changing a single sentence in one of those volumes—but using a procedure performed on a molecular scale. His idea was to produce a sort of doppelganger gene, one similar to the one he wanted to change. He would inject the doppelganger into a mouse's cell and somehow get the gene to find its partner, kick it out of the DNA strand and replace it. Success was not only uncertain but highly improbable.
The NIH decided that Capecchi's plans sounded like science fiction. They downgraded his application and strongly advised him to drop the speculative third project. However, they did agree to fund his application on the basis of the other two solid, results-oriented projects. (Things could have been worse: At about the same time, over in the U.K., the Medical Research Council flatly rejected an application from Martin Evans to attempt a similar trick. Two research agencies are better than one, however messy that might seem, precisely because they will fund a greater variety of projects.)
What did Capecchi do? He took the NIH's money, and, ignoring their admonitions, he poured almost all of it into his risky gene-targeting project. It was, he recalls, a big gamble. If he hadn't been able to show strong enough initial results in the three-to-five-year time scale demanded by the NIH, they would have cut off his funding. Without their seal of approval, he might have found it hard to get funding from elsewhere. His career would have been severely set back, his research assistants looking for other work. His laboratory might not have survived.
In 2007, Mario Capecchi was awarded the Nobel Prize in Physiology or Medicine for this work on mouse genes. As the NIH's expert panel had earlier admitted, when agreeing to renew his funding: "We are glad you didn't follow our advice." (Capecchi's autobiographical essay is on the Nobel Prize website.)
The moral of Capecchi's story is not that we should admire stubborn geniuses, although we should. It is that we shouldn't require stubbornness as a quality in our geniuses. How many vital scientific or technological advances have foundered, not because their developers lacked insight, but because they simply didn't have Mario Capecchi's extraordinarily defiant character?
But before lambasting the NIH for their lack of imagination, suppose for a moment that you and I sat down with a blank sheet of paper and tried to design a system for doling out huge amounts of public money—taxpayers' money—to scientific researchers. That's quite a responsibility. We would want to see a clear project description, of course. We'd want some expert opinion to check that each project was scientifically sound, that it wasn't a wild goose chase. We'd want to know that either the applicant or another respected researcher had taken the first steps along this particular investigative journey and obtained some preliminary results. And we would want to check in on progress every few years.
We would have just designed the sensible, rational system that tried to stop Mario Capecchi from working on mouse genes.
The NIH's expert-led, results-based, rational evaluation of projects is a sensible way to produce a steady stream of high-quality, can't-go-wrong scientific research. But it is exactly the wrong way to fund lottery-ticket projects that offer a small probability of a revolutionary breakthrough. It is a funding system designed to avoid risks—one that puts more emphasis on forestalling failure than achieving success. Such an attitude to funding is understandable in any organization, especially one funded by taxpayers. But it takes too few risks. It isn't right to expect a Mario Capecchi to risk his career on a life-saving idea because the rest of us don't want to take a chance.
Fortunately, the NIH model isn't the only approach to funding medical research. The Howard Hughes Medical Institute, a large charitable medical research organization set up by the eccentric billionaire Howard Hughes, has an "investigator" program which explicitly urges "researchers to take risks, to explore unproven avenues, to embrace the unknown—even if it means uncertainty or the chance of failure." Indeed, one of the main difficulties in attracting HHMI funding is convincing the institute that the research is sufficiently uncertain.
The HHMI also backs people rather than specific projects, figuring that this allows scientists the flexibility to adapt as new information becomes available and pursue whatever avenues of research open up, without having to justify themselves to a panel of experts. It does not demand a detailed research project—it prefers to see the sketch of the idea, alongside an example of the applicant's best recent research. Investigators are sometimes astonished that the funding appears to be handed out with so few strings attached.
The HHMI does ask for results, eventually, but allows much more flexibility about what "results" actually are—after all, there was no specific project in the first place. If the HHMI sees convincing signs of effort, funding is automatically renewed for another five years; it is only after 10 years without results that HHMI funding is withdrawn—and even then, gradually rather than abruptly, allowing researchers to seek out alternatives rather than sacking their staff or closing down their laboratories.
This sounds like a great approach when Mario Capecchi is at the forefront of our minds. But is the HHMI system really superior? Maybe it leads to too many costly failures. Maybe it allows researchers to relax too much, safe in the knowledge that funding is all but assured.
Maybe. But three economists, Pierre Azoulay, Gustavo Manso, and Joshua Graff Zivin, have picked apart the data from the NIH and HHMI programs to provide a rigorous evaluation of how much important science emerges from the two contrasting approaches. They carefully matched HHMI investigators with the very best NIH-funded scientists: those who had received rare scholarships and those who had received NIH "MERIT" awards, which, like other NIH grants, fund specific projects, but which are more generous and are aimed only at the most outstanding researchers. They also used a statistical technique to select high-caliber NIH researchers with track records nearly identical to those of the HHMI investigators.