Between the early 1970s and the late 1990s, the long-term survival rate of children with leukemia skyrocketed from less than 20 percent to around 80 percent. Over this relatively short period, many children presumed to be dying instead ended up living. As remarkable as the surge is the reason for it. Dr. Steve Sallan, the chief of staff at the Dana-Farber Cancer Institute in Boston, recently told me that not a single newly discovered drug was involved. Nobody invented some magical genetic therapy either. So what changed?
Too often, medical advances are advertised as the work of swashbuckling doctors and patients who take big risks against long odds and seize miraculous results with new treatments taken straight from the lab. That narrative is misleading. As with pediatric leukemia, the reality is often far less dramatic but no less impressive, and therein lie critical lessons for patients with many chronic, tough-to-treat diseases like asthma, attention-deficit disorder, and obesity.
The leukemia doctors saved lives simply by refining the use of old-school drugs like doxorubicin and asparaginase. Over the course of almost a dozen clinical trials, they painstakingly varied the doses of these older drugs, evaluated the benefit of continuing chemotherapy in some kids who appeared to be in remission, and tested the benefit of injecting drugs directly into the spinal column. The doctors gradually learned what drug combinations, doses, and sites of injection worked best. And they kept at it. With each small innovation, survival rates crept forward a bit—a few percent here and there every couple of years—and over decades those persistent baby steps added up to a giant leap.
Today, we're far more likely to hear exaggerated tales of breakthrough new drugs, aggressively marketed and hyped. But it's the leukemia story that's the historical norm. Back in the early 20th century, for example—decades before the discovery of antibiotics—tuberculosis mortality fell almost 70 percent, due largely to careful studies of nutrition and hygiene. From 1980 to 2000, death from heart disease plummeted an astonishing 50 percent, almost entirely from the use of existing medicines and surgical treatments. These were gradually tweaked, like leukemia therapy, in response to scores of incremental studies. During the past 30 years, mortality from diabetes in men has also decreased by half, largely due to improved use of flu vaccines, smoking reduction, and possibly aspirin use—but not a new blockbuster drug.
Of course, new drugs can sometimes change everything. Example: Genentech's novel angiogenesis inhibitor Lucentis, which restored vision in patients with macular degeneration. But such successes are incredibly rare and, even in cases like Lucentis, often unforeseen. (James Watson, the co-discoverer of DNA, imprudently predicted in 1998 that angiogenesis inhibitors might "cure cancer in two years" and said nothing about their use for treating eye disease.) And in truth, we don't have that many new drugs to call on anyway. Last year, for example, the U.S. Food and Drug Administration approved only 19 entirely new drugs, many of which treat pretty rare diseases or offer little benefit over older medications.
If the greatest medical advances depend mostly on small but consistent improvements in the use of old drugs, why do certain specialties (such as psychiatry) fall behind others (such as cardiology) in producing major results, like a 50 percent population-wide improvement? The difference isn't related to a lack of drug choices. A psychiatrist now has a bewildering array of medications to treat, say, attention-deficit disorder or depression, just as a cardiologist can choose from dozens of anti-hypertensive pills. And the influence of pharma companies is roughly equivalent in both specialties.