Medical Examiner

Fertility Clinic Data Means Bad Medicine

Success rates are misleading and often dishonest.

Judge a fertility clinic by its methods, not its numbers.

For two decades every fertility clinic in the United States has reported its success rate to the Centers for Disease Control and Prevention. Collecting data is great. It helps researchers focus on areas that can be improved. Publishing nationwide success rates can also calibrate women’s expectations.

The problem is that the CDC publishes the success rate of each individual clinic, and many women seeking fertility treatment choose a doctor based solely on these numbers. That’s a terrible idea. Clinic-specific success rates are misleading in the best of circumstances. They encourage honest clinics to turn away challenging patients and encourage dishonest clinics to commit fraud. They grant an undeserved aura of omnipotence to fertility practitioners with high numbers, leading both patients and the providers themselves to believe their judgment is worth more than the field’s collective evidence. I never thought I’d say this, but we need less transparency in fertility data.

Calculating a fertility clinic’s success rate is simple: Divide the number of singleton live births a clinic produces by the number of treatment cycles it starts, then multiply by 100. (The metric omits twins and other multiple births because a single healthy baby is the aim of treatment.) In medicine, however, simple numbers are often problematic. The first step some clinics take to inflate their success rates is to turn away the most desperate patients. It’s not always official policy, but it happens. In 2006, Mother Jones had the following exchange with a fertility medicine specialist about doctors refusing to treat older women:

“How much selecting is going on?” I asked.
“A lot.”
“How much is a lot?”
“A lot.”

Small adjustments to the average age of a clinic’s patient base can dramatically influence the doctors’ apparent performance. In 2011, women younger than 35 who underwent an assisted reproduction cycle gave birth to a live, single baby 27 percent of the time. In women older than 44, the success rate was just 1.1 percent. Refusing to treat patients who have experienced repeated implantation failure—the dreaded RIF in fertility circles—can also artificially boost success rates.
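To make the two points above concrete (the simple arithmetic of the metric, and how much the patient mix can move it), here is a rough back-of-the-envelope sketch. The clinics, cycle counts, and function names are hypothetical; only the two age-group rates come from the 2011 figures just cited.

```python
# Back-of-the-envelope sketch of the arithmetic above. The clinics and
# cycle counts are invented for illustration; only the two age-group
# rates come from the 2011 figures cited in the text.

RATE_UNDER_35 = 0.27   # live singleton births per cycle, women under 35
RATE_OVER_44 = 0.011   # live singleton births per cycle, women over 44

def success_rate(singleton_births: float, cycles_started: int) -> float:
    """Singleton live births per 100 treatment cycles started."""
    return singleton_births / cycles_started * 100

def expected_rate(cycles_under_35: int, cycles_over_44: int) -> float:
    """Blended success rate for a clinic with the given patient mix."""
    births = cycles_under_35 * RATE_UNDER_35 + cycles_over_44 * RATE_OVER_44
    return success_rate(births, cycles_under_35 + cycles_over_44)

# Clinic A takes all comers; clinic B quietly turns most older women away.
print(expected_rate(300, 100))  # ~20.5 percent
print(expected_rate(300, 10))   # ~26.2 percent
```

Neither hypothetical clinic’s doctors are any more skilled than the other’s; only the mix of patients in the denominator changes, which is exactly why the headline numbers aren’t comparable.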

Similarly perverse incentives infect other fields of medicine. Inpatient hospital physicians complain that surgeons sometimes hesitate to operate on last-chance patients because they don’t want to be associated with a death. In those situations, however, peer monitoring prevents a surgeon from systematically turning away difficult cases. There are no such safeguards in fertility medicine. As a result, the clinics that behave most admirably look the worst on paper. The CDC should not stand behind data with so little verification.

“I would like to see a more rigorous monitoring,” says Jeffrey Goldberg, section head of reproductive endocrinology and infertility at the Cleveland Clinic. Goldberg supports the reporting system, but he admits there are credibility issues. “Someone should do surprise audits, going through all the records to make sure the reporting is accurate,” he says. “It would only help the clinics who are reporting honestly.”

There is also a widespread suspicion that less scrupulous clinics commit outright fraud. A clinic is supposed to report to the CDC every time a patient begins a treatment cycle, but it’s easy enough to hold off on the report. If a woman drops out before finishing the cycle, a clinic can simply neglect to report that the patient ever began treatment.

The Society for Assisted Reproductive Technology, the leading professional society for fertility medicine specialists, places limits on how clinics can use their success rates in advertising. Clinics may not, for example, directly compare their numbers with those of competitors. When success rates are mentioned at all, the clinic must include a disclaimer that comparing clinic success rates “may not be meaningful” and lay out some of the reasons discussed above. That’s a good start, but it doesn’t go nearly far enough. Many clinics stake their reputations on those numbers. The home page of Reproductive Medicine Associates of Connecticut, for example, invites prospective clients to “view our impressive pregnancy rates.” They don’t need to say that the numbers are better than those of some competitors—the implication is pretty clear.

The intense focus on success rates also affects the way fertility medicine is practiced. Since each failed cycle damages a doctor’s credibility a little bit, some doctors take a kitchen-sink approach to fertility medicine. A classic example is the use of the anti-clotting agent Lovenox. Certain women have a clotting disorder that increases the chance that they will miscarry. “For patients with recurrent miscarriage, we do blood testing,” explains Goldberg. “If they are at risk for blood clots, there is evidence to support putting them on Lovenox.”

That’s good, evidence-based medicine. The problem is that some fertility specialists prescribe Lovenox to patients who lack a history of recurrent miscarriage, who lack evidence of a clotting disorder, or who lack both. “I do not have the clotting disorder, but I had three miscarriages and the doc thought it would be worth a try,” wrote a commenter on the IVF-Infertility message board about her Lovenox injections. “With all of my IUIs and IVF I used Lovenox,” another woman posted on What to Expect.

Doctors who obsess over their success rates might not want to have a patient experience two or three miscarriages and thus damage their statistics, so they reach immediately for the needles. There are serious risks to Lovenox injections, though: Depressing your body’s clotting potential can turn a minor cut into a dangerous bleed.

There is also evidence that fertility clinics are overprescribing antibiotics. The controversial infertility doctor Attila Toth puts men and women through 10-day regimens of intravenous antibiotics with little more than anecdotal evidence to support the practice, and other clinics have picked up the habit. The appeal is easy to see: Blanket antibiotics mean a clinic never has to lose a round of treatment to an undetected infection. The practice is costly to the patient, though, and to public health. Heavy antibiotic use disrupts normal gut flora, often leading to gastrointestinal problems, and it promotes drug-resistant bacteria. There is no evidence to support antibiotic use when no infection is present.

There’s nothing wrong with collecting clinic-specific data on fertility success rates. That information—combined with a mountain of contextual information and on-the-ground laboratory inspections—could provide useful data points when private accreditation groups review the status of a clinic. Nor is there anything wrong with publishing nationwide data, broken down by age group and type of procedure. Broad statistics like that help women understand their chances before undertaking the long, painful, and expensive process of assisted reproductive treatment. But pushing clinic-by-clinic numbers out every year without enough information to make them meaningful is not a good practice.

In the meantime, if you can’t rely on success rates, how are you supposed to pick a fertility clinic? Cardiologists and oncologists don’t publish their patients’ mortality rates—those numbers would be meaningless—yet patients still manage to find doctors they’re happy with. Look for the most honest clinic you can find. Seek out providers who state explicitly that they take all comers. The fertility clinics associated with Johns Hopkins and Columbia University, for example, both advertise that they have no age limits. Call clinics and ask whether they accept patients with repeated implantation failures, as academic and research centers like the Cleveland Clinic do. If there is a moment’s hesitation—no matter what the clinic’s statistics say—take your business elsewhere.