Medical Examiner

Take the Shrink Challenge

Can a psychiatrist really tell what’s wrong with you?

If a dozen shrinks each interview the same patient, will they arrive at the same diagnosis?

The question has dogged mental-health clinicians for more than 30 years, ever since a famous experiment—a ruse, really—appeared to show that healthy people will be labeled sick if they merely go to a psychiatric emergency room and act sick. Recently, a new study tried to repeat the experiment and failed to reproduce its results, supposedly proving that shrinks aren’t as clueless today as they were a generation ago.

But the study’s methods were questionable, and the results aren’t as definitive as the authors make them out to be. Psychiatry still has a problem reliably diagnosing patients, and it will until researchers better understand mental illness at the level of brain cells and molecules.

In 1973, academic psychologist D.L. Rosenhan sent himself and seven friends and colleagues to the psychiatric emergency rooms of 12 different hospitals. Each told ER workers that for several weeks he or she had been distressed by voices saying “empty,” “hollow,” and “thud.” The testers gave false names and occupations but otherwise accurately reported their histories, which did not include mental illness. In all 12 instances they were admitted to a psychiatric ward. At that point, they stopped pretending to have symptoms. Nonetheless, they were held for an average of 19 days (their stays ranged from seven to 52 days) and were all released with a diagnosis of “schizophrenia, in remission,” or something like it. Rosenhan titled his study “On Being Sane in Insane Places” and argued that psychiatric diagnosis has more to do with the presumptions of clinicians, and their tendency to treat ordinary behavior as pathological when it occurs on a psych ward, than with a rational assessment of symptoms.

The sweeping conclusions that Rosenhan drew from his elegant hoax are debatable. But in her 2004 book, Opening Skinner’s Box: Great Psychological Experiments of the 20th Century, journalist Lauren Slater claimed to have replicated Rosenhan’s results to some degree. She said she visited nine psychiatric ERs incognito and reported having the same auditory hallucinations described in Rosenhan’s study. Although she was never admitted as an inpatient, she said she received multiple prescriptions and was diagnosed with “depression with psychotic features” every time.

This was not supposed to happen. In 1980, the field had overhauled the manual used to classify mental disorders, the Diagnostic and Statistical Manual (DSM). Speculative Freudian theories of disease etiology were discarded in favor of straightforward descriptions of pathological behavior and checklists of behavioral symptoms for each diagnosis. The goal was to increase the reliability of psychiatric diagnosis.

In response to Slater, psychiatrists struck back with their own study. A team led by Columbia University’s Robert Spitzer, who spearheaded the revision of the DSM in 1980, sent a survey to 431 ER psychiatrists. The survey presented a Rosenhan-style vignette—a person without a history of mental illness says she is bothered by a voice saying “thud.” Of the 74 psychiatrists who responded, 80 percent said they would not give a firm diagnosis without more information, 82 percent said they would send the patient to an outpatient clinic rather than recommend hospitalization, and 66 percent said they would not prescribe medication. The study was published last November in the Journal of Nervous and Mental Disease. The editors gave Slater space to respond, and she belittled Spitzer’s reliance on surveys rather than real testers.

Spitzer relied on a survey for practical reasons—these days, sending pseudo-patients to ERs would be expensive and ethically dubious. But the survey method conveniently sidesteps many of the variables that continue to plague psychiatric diagnosis. I was a social-work clinician in a community mental health center in Seattle for nearly two years. Most patients coming through my office had received more-or-less consistent diagnoses, from many different clinics, over the course of their illness. But a significant minority had not.

Perhaps the most important reason for a wrong diagnosis is the lack of time most clinicians have to do the job. The initial interview with a patient usually lasts less than an hour. Many patients are defensive or present with ambiguous symptoms. Yet the rules of insurance reimbursement are relentless—you have to come up with an immediate diagnosis and treatment plan, which usually means a medication trial. Follow-up “med checks” often last just 15 to 20 minutes and happen only every few weeks or months. Even if patients are admitted to a hospital, they rarely stay longer than a few days. In these circumstances, a hasty initial diagnosis may never get revisited.

In addition, clinicians tend to overdiagnose diseases that they see a lot of. A doctor surrounded by schizophrenia at a clinic that serves the poor (the illness is so disabling that victims tend to lack private insurance) may begin to see the disease even more often than it actually appears. Then there’s the mundane problem of missing medical records. Clinical history helps put current symptoms into context. But because of the nature of mental illness, many patients cannot, or will not, reveal information about their past psychiatric treatment. And even when a patient agrees to have her old chart sent over, it often arrives too late to make much difference.

Of course, doctors in other specialties face time constraints and other threats to accurate diagnosis. But unlike psychiatrists, they usually have a molecular definition of disease to go on and biological tests to administer. The current lack of molecular knowledge in psychiatry is no fault of psychiatrists; the human brain is complex and difficult to experiment on. But it cannot be denied that the DSM is a collection not so much of diseases as of syndromes—groupings of symptoms that may have many different molecular causes. Because those causes are largely unknown, biological tests don’t exist, and a psychiatrist making a diagnosis is left without the lab results that in other areas of medicine help correct doctors’ subjective impressions.

This may change. Last November, several researchers reported the creation of a computer algorithm that can distinguish, with 81 percent accuracy, MRI images of schizophrenic brains from those of healthy ones. And some clinical trials have already begun to track how the presence of certain genes influences a patient’s response to medication. The cost of sequencing a patient’s genome has dropped a thousandfold in the last five years, so genetically based psychiatric studies should soon become commonplace.

None of this means that psychiatrists will develop a magic diagnostic test, though. After all, genes only tell you so much. There are, for example, many genes implicated in schizophrenia, and a genetic predisposition does not guarantee illness. (If one identical twin gets schizophrenia, there is a 50 percent chance the other one will.) Even a brain scan isn’t clear-cut. Many healthy family members of schizophrenics, for example, have been found to show subtle, schizophrenia-like traits—and presumably subtly abnormal scans to match. If an entire family were to show different degrees of the illness on an MRI, establishing who gets an official diagnosis and who does not would still be a matter of judgment.

The Rosenhan study, which is still mentioned in undergraduate textbooks, continues to be an albatross for psychiatry. Working with the tools available to his generation of psychiatrists, Spitzer has done his best to put the profession on a scientific footing. But the psychiatrists who will fully integrate their field into the rest of medicine—by finally linking the study of the mind to the study of the brain—have only just gotten to work.