Comparing antidepressants.

Health and medicine explained.
April 18 2006 4:07 PM

Drug Haze

What do you learn when you compare one antidepressant to another?

Illustration by Robert Neubecker.

If you seek medical treatment for depression, your doctor will likely prescribe an antidepressant like Prozac or Celexa. Then comes the Big Wait. It takes six to eight weeks to tell whether the pills are working. If you're one of the two-thirds of unlucky patients who don't respond to the first round of treatment, your doctor will have to increase the dosage or try something else. Then you wait some more.

Thanks to a flurry of research since the approval of Prozac in 1987, there are now 20 antidepressants (many of them "selective serotonin reuptake inhibitors") to choose from. No one really knows why some of the drugs work on some people and not others—or how they affect brain chemistry in general. And there is little comparative data that rates or ranks antidepressants, which are the third most-prescribed drugs in the United States. So, to choose among them, doctors have relied on their clinical experience, balanced against what an insurance company will cover and the inevitable spin from pharmaceutical reps. "You make your best guess," explains Andrew Nierenberg, associate professor of psychiatry at Harvard Medical School.


Now the National Institute of Mental Health is publishing results that compare different treatments for depression, including types of drugs, dosages, and even psychotherapy. Called the Sequenced Treatment Alternatives to Relieve Depression, or STAR*D, the NIMH study is welcome news for the 19 million Americans who suffer from depression. But it also underscores the need for more comparative data—not just for mental-health drugs, but in all categories of health care.

Comparative data isn't coming from the pharmaceutical industry. To get FDA approval, a drug firm has to show only that its product is safe and works better than a placebo, not that it works better than another drug already on the market. Some companies do comparative research in hopes of making promotional claims. But they're not required to make such studies public. So, if the results are unfavorable, they often get buried. And independent studies that compare drugs are generally small in scope, focused on specific populations, or conducted abroad.

STAR*D, on the other hand, cost $35 million, took six years to complete, and followed 3,000 real-world patients, as opposed to the cherry-picked volunteers used in many company trials. Researchers evaluated treatments for patients who didn't at first respond to the antidepressant citalopram (the generic for Celexa). The latest STAR*D report found that one in three of these patients got better when another drug was added to their treatments, and one in four improved by switching regimens completely. Subsequent findings on third and fourth treatments are still to come.

Now that Medicare is covering prescription drugs, the federal government has new reason to care which ones work best. Why pay for a pricey new drug if there's a cheaper generic that delivers similar value? The Medicare Modernization Act, which was passed in 2003, set aside $15 million a year for the Agency for Healthcare Research and Quality to review comparative data, and some studies have begun with that funding.

States have reason to be cost-conscious, too, as drugs continue to eat up a bigger chunk of taxpayer-funded Medicaid budgets. In 2001, Oregon projected a 60 percent increase in drug spending for the following two years. After a battle in the legislature, the state created a preferred drug list, which ranks medications by cost and effectiveness. Fifteen states have since joined what's known as the Drug Effectiveness Review Project, which compares drugs in 26 categories, including statins, proton-pump inhibitors, nonsteroidal anti-inflammatory agents, and antidepressants. At $110,000 per study, the reviews are relatively cheap. The downside is that there's often little comparative data to evaluate. For example, there are virtually no such studies on opioid analgesics, such as OxyContin or long-acting morphine.

And most of the comparative information that does exist may be useless, since it comes from drug firms. "It's not unusual for less than 10 percent of studies to meet our standards of quality," says Mark Gibson, deputy director of the Center for Evidence-Based Policy at Oregon Health and Science University.

To get better data, the FDA could require drug companies to list all comparative trials in a national registry so that researchers can follow up on the results and check methodology. Or the companies could be required to fund these studies at independent research labs. Or the federal government could pay for more new trials. The National Institutes of Health, for example, has tried to fill in the gap by conducting massive comparative trials for drugs that treat cancer and heart disease, among other illnesses.

The pharma lobby argues that comparative studies will shut out their latest and greatest inventions by minimizing differences between drugs. A medication that may cause nausea less frequently, for example, won't rate higher for effectiveness. The Drug Effectiveness Review Project's review of 10 newer antidepressants, which was released early last year and then updated twice, found that the drugs did not significantly differ in actually treating depression, though they varied in terms of cost, severity of side effects, and how frequently they had to be taken. It's up to the states to weigh these other variables. And policy-makers may conclude that, nausea or no, it makes sense to start with a cheaper antidepressant, and to consider more expensive ones only if the first drug doesn't work.
