Between the early 1970s and the late 1990s, the long-term survival rate of children with leukemia skyrocketed from less than 20 percent to around 80 percent. Over this relatively short period, many children presumed to be dying instead ended up living. As remarkable as the surge is the reason for it. Dr. Steve Sallan, the chief of staff at the Dana-Farber Cancer Institute in Boston, recently told me that not a single newly discovered drug was involved. Nobody invented some magical genetic therapy either. So what changed?
Too often, medical advances get advertised as the work of swashbuckling doctors and patients who take big risks against big odds and seize miraculous results with new treatments taken straight from the lab. That narrative is misleading. As with pediatric leukemia, the reality often is far less dramatic but no less impressive, and therein lie critical lessons for patients with many chronic, tough-to-treat diseases like asthma, attention-deficit disorder, and obesity.
The leukemia doctors saved lives simply by refining the use of old-school drugs like doxorubicin and asparaginase. Over the course of almost a dozen clinical trials, they painstakingly varied the doses of these older drugs, evaluated the benefit of continuing chemotherapy in some kids who appeared to be in remission, and tested the benefit of injecting drugs directly into the spinal column. The doctors gradually learned what drug combinations, doses, and sites of injection worked best. And they kept at it. With each small innovation, survival rates crept forward a bit—a few percent here and there every couple of years—and over decades those persistent baby steps added up to a giant leap.
Today, we're far more likely to hear exaggerated tales of breakthrough new drugs, aggressively marketed and hyped. But it's the leukemia story that's the historical norm. Back in the early 20th century, for example—decades before the discovery of antibiotics—tuberculosis mortality fell almost 70 percent, due largely to careful studies of nutrition and hygiene. From 1980 to 2000, death from heart disease plummeted an astonishing 50 percent, almost entirely from the use of existing medicines and surgical treatments. These were gradually tweaked, like leukemia therapy, in response to scores of incremental studies. During the past 30 years, mortality from diabetes in men has also decreased by half, largely due to improved use of flu vaccines, smoking reduction, and possibly aspirin use—but not a new blockbuster drug.
Of course, new drugs can sometimes change everything. Example: Genentech's novel angiogenesis inhibitor Lucentis, which restored vision in patients with macular degeneration. But such successes are incredibly rare and, even in cases like Lucentis, often unforeseen. (James Watson, the co-discoverer of DNA's structure, imprudently predicted in 1998 that angiogenesis inhibitors might "cure cancer in two years" and said nothing about their use for treating eye disease.) And in truth, we don't have that many new drugs to call on anyway. Last year, for example, the U.S. Food and Drug Administration approved only 19 entirely new drugs, many of which treat pretty rare diseases or offer little benefit over older medications.
If the greatest medical advances depend mostly on small but consistent improvements in the use of old drugs, why do certain specialties (such as psychiatry) fall behind others (such as cardiology) in producing major results, like a 50 percent population-wide improvement? The difference isn't related to a lack of drug choices. A psychiatrist now has a bewildering array of medications to treat, say, attention-deficit disorder or depression, just as a cardiologist can choose from dozens of anti-hypertensive pills. And the influence of pharma companies is roughly equivalent in both specialties.
The real problem for lagging specialties is that they possess numerous poorly studied, often recently approved drugs instead of a small core arsenal of older drugs that are well understood and so can be dosed systematically. As the experience with leukemia shows, that's exactly the wrong way to cure disease. Successful specialties are anchored by centralized, rigorous professional organizations that have served, over decades, as clearinghouses for study after study aimed at calibrating therapy. Thus, cardiologists have the Framingham Heart Study and the scientific committees of the American Heart Association, pediatric oncologists have the Children's Oncology Group, and children's lung specialists have the Cystic Fibrosis Foundation. These specialties don't pin all their hopes on new miracle cures; instead, they do the grunt work of incremental clinical trials with the pills they have. And as a result, they save many lives.
That doesn't mean duplicating these successes is easy. Just as no automaker has successfully copied Toyota's ingrained kaizen culture (which The New Yorker's James Surowiecki likens to a hard-to-follow "regular, sustained diet"), the incremental doggedness of certain medical subspecialties resists imitation. But the lagging subfields should try.
Doctors in these fields first should take a long, hard look at their priorities. Lagging fields are often the scene of paralyzing turf battles among various institutions over clinical trials. By contrast, the more successful specialties have overcome such pettiness and forged nationwide partnerships to churn out study after study. The successful specialties also encourage studies that choose incremental goals over the big score.
In addition, many poorly studied diseases, psychiatric and otherwise, must be better defined. Doctors easily agree on whether a child has leukemia or a middle-aged man has a blocked coronary artery, and this makes it possible to contrast treatment differences—a key aspect of calibrating therapy. But surprisingly, despite all the editions and revisions of the Diagnostic and Statistical Manual of Mental Disorders, no standardized, reproducible diagnostic criteria exist for many psychiatric diseases, with the upshot that doctors often cannot agree on what's wrong with a patient. (In 2000, the British Journal of Psychiatry reported that researchers often make up their own definitions of schizophrenia to suit their agendas; for example, to make a certain drug look better.) The lack of objective diagnosis also plagues pediatric asthma, ear infections, attention-deficit disorder, migraine headaches, and food allergies, to name a few. And yet good definitions are a prerequisite to any study that compares treatments.
To help increase our incremental understanding of diseases and treatments, the federal government also must invest more in clinical studies of old drugs. For example, the world's first large study of antidepressants for bipolar disorder was published only last year, with funding from the National Institute of Mental Health. It turned out that relatively newer drugs, like Paxil or Wellbutrin, didn't offer much advantage over old-school treatment with lithium or valproic acid. That's important for patients and doctors to know, and many more such studies are needed.
Without discounting the importance of new research, paying more attention to incremental improvement refocuses how we think about medical progress. And it's an upbeat shift in viewpoint, indicating that we already have the tools to cure many diseases and improve many lives. We just need to figure out how to use them better.