How Medicine’s Obsession With Certainty Can Hurt Patients’ Health

The citizen’s guide to the future.
Oct. 12, 2015, 7:39 AM
FROM SLATE, NEW AMERICA, AND ASU

Doctors Hate Ambiguity

How an obsession with certainty can hurt patients’ health.

Illustration by Natalie Matthews-Ramo

This essay is adapted from Nonsense: The Power of Not Knowing, by Jamie Holmes, published by Crown.

On the evening of Tuesday, Oct. 13, Holmes will discuss Nonsense with New Yorker contributing writer Maria Konnikova at a Future Tense event in New York. For more information and to RSVP, visit the New America website.

In 2005, University of Rochester researchers published a study on medical ambiguity. Led by psychologist David Seaburn, the investigators hired actors to pose as patients and visit local physicians. During some visits, the “patients” described the classic symptoms of gastroesophageal reflux. During other visits, ambiguous symptoms were described: some emotional stress, dizziness, fatigue, and mysterious chest pains. Each visit was secretly audiotaped, and then the researchers transcribed and coded the physician-patient interactions.

Their results were troubling. Twenty-two percent of the time, the physicians simply ignored the ambiguity. For instance, after a patient described “vague symptoms of general chest pain,” the doctor would respond with a statement of “fact”: “Your pain is caused by gastroesophageal reflux.” Seventy-seven percent of the time, the physicians acknowledged that the symptoms were unclear, only to follow up with a directive such as ordering a test. The doctors simply were not inclined to discuss the ambiguous symptoms.

Studies like Seaburn’s have led to increased attention to medical ambiguity in recent years. Psychologists have shown that there is a fundamental tension between the ubiquity of ambiguity and our natural preference for definite answers. Misunderstanding that tension and putting too much faith in tests to resolve ambiguity, it turns out, is one cause of medical overtesting.

Ordering a test is often appropriate, but it can also be an all-too-easy response to ambiguity. Furthermore, test results aren’t always clear, which can lead to a cascade of unnecessary tests, a phenomenon identified in a 2013 experiment led by Cornell’s Sunita Sah. Sah, Pierre Elias, and Dan Ariely recruited more than 700 men between the ages of 40 and 75 and randomly assigned them to one of four conditions. One group received information about the risks and benefits of a prostate biopsy. The other groups also received one of three hypothetical results from the prostate-specific antigen screening test, which informs the decision to have a biopsy: normal, elevated, or inconclusive. An inconclusive test result, subjects were informed, “provides no information about whether or not you have cancer.” Would the men proceed with the hypothetical biopsy?


Of subjects who weren’t given the PSA screening results, only 25 percent chose to proceed with the prostate biopsy. But 40 percent of subjects who received inconclusive PSA test results opted for the procedure. That’s a meaningful increase among men who received a result clearly explaining that it “provides no information.” Somehow, the very idea of not knowing something led to a panicky commitment to more invasive testing. Since prostate biopsies are both risky and costly, that jump in demand is no small matter. Sah and her colleagues described the problem as one of “investigation momentum.”

Investigation momentum, Sah told me, results in “additional, potentially excessive diagnostic testing when you get a result that’s ambiguous.” She doesn’t deny that there are many other causes of overtesting in the United States. The financial incentives involved are a mammoth issue, as is defensive medicine, whereby doctors treat patients to avoid potential lawsuits. But one overlooked cause, Sah said, is the self-propelling cascade of tests encouraged because of inconclusive results, ambiguity aversion, and a disproportionate faith in testing.

Diagnostic imaging technologies, in particular, seem prone to producing ambiguous results. Medical scans can be extraordinarily good at detecting abnormalities, but they are not always very good at revealing whether those abnormalities actually pose a problem.

In one quasi-experiment, orthopedist James Andrews ran MRIs on pitchers and found abnormal rotator cuff damage and abnormal shoulder cartilage in almost all of them. But there was a catch: They were all healthy pitchers. A 2014 study found that 1 in 5 breast cancers discovered by mammography and treated wasn’t actually a health threat. In another study, the use of CT scans or ultrasound to diagnose appendicitis increased from less than 10 percent at the start of the 1980s to more than 30 percent at the end of the 1990s. But the rate of misdiagnosed cases held steady at about 15 percent. Similarly, studies comparing autopsies before and after the introduction of ultrasound, CT, and radionuclide scanning found no apparent improvement in diagnostic accuracy. The increase in imaging tests isn’t always justified, which is why the ABIM Foundation’s Choosing Wisely initiative has singled out the overuse of diagnostic imaging as a problem.

One contributing factor is that we put too much faith in technology to physically pinpoint the causes of poor health. In neurolaw, which applies brain imaging to criminal law, this dilemma is especially apparent. Neuroimaging evidence showing brain abnormalities has helped spare murderers from the death penalty. According to a database created by Duke University’s Nita Farahany, such evidence was considered in at least 1,600 cases between 2004 and 2012. Often that evidence’s effectiveness depends on locating abnormalities in the brain. One San Diego defense attorney boasted of introducing a PET scan as evidence of his client’s moral innocence: “This nice color image we could enlarge. ... It documented that this guy had a rotten spot in his brain. The jury glommed onto that.”

James Fallon, a neuroscientist at the University of California–Irvine who has studied the brain scans of psychopathic murderers, is skeptical of applying brain scans to criminal cases. “Neuroimaging isn’t ready for prime time,” he told me. “There are simply too many nuances in interpreting the scans.” In an odd twist of fate, Fallon once subjected himself to a PET scan because his lab needed images of normal brains to contrast with abnormal ones. To his surprise, his prefrontal lobe looked the same as those of the psychopathic killers he’d long studied. The irony wasn’t lost on him. That Fallon never hurt anyone isn’t the point; it’s that one nonviolent person’s scan looked no different from a violent person’s.

Without question, doctors have used imaging technologies to great benefit, just as scientists are learning a great deal using brain scans. But images of the brain, like those of the rest of the body, do not always imply one-to-one causal relationships. Like abnormal shoulder cartilage, brain abnormalities don’t mean that anything is necessarily wrong. Amanda Pustilnik, who teaches law at the University of Maryland, has compared neurolaw to phrenology, Cesare Lombroso’s biological criminology, and psychosurgery. Each theory or practice, Pustilnik wrote, “started out with a pre-commitment to the idea of brain localization of violence.” But the causes of violence, like the causes of poor health, do not usually begin in the body. They pass through it, and the marks they leave are often subtle and vague.

No one can blame practitioners or policymakers for their enthusiasm over new technological tools. But just as criminologists would be wiser to focus on the social—rather than biological—conditions that spur violence, physicians are usually better off treating the patient, not the scan. Most diagnoses, we know, can be made simply by talking with the patient. New ways of seeing aren’t necessarily clearer ways of seeing, and sometimes, the illusion of knowing is more dangerous than not knowing at all.

Adapted from Nonsense: The Power of Not Knowing. Copyright © 2015 by Jamie Holmes. Published by Crown Publishers, an imprint of Penguin Random House LLC.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.