Lexicon Valley

That Study on Literary Fiction and Empathy Proves Precisely Nothing

Photo: A statue of Russian author and playwright Anton Chekhov in central Moscow. (Natalia Kolesnikova/AFP/Getty Images)

Earlier this month, social psychologists Emanuele Castano and David Comer Kidd—a professor at the New School for Social Research and a Ph.D. candidate there, respectively—published a paper titled “Reading Literary Fiction Improves Theory of Mind” in the journal Science. From the abstract:

Understanding others’ mental states is a crucial skill that enables the complex social relationships that characterize human societies. Yet little research has investigated what fosters this skill, which is known as Theory of Mind (ToM), in adults. We present five experiments showing that reading literary fiction led to better performance on tests of affective ToM and cognitive ToM compared with reading nonfiction, popular fiction, or nothing at all.

In an Editor’s Summary that accompanies the paper, Theory of Mind is further defined as “the human capacity to comprehend that other people hold beliefs and desires and that these may differ from one’s own beliefs and desires.” In other words, empathy.

Needless to say, the study has received considerable media attention. “For Better Social Skills, Scientists Recommend a Little Chekhov,” reported the New York Times on its Well blog. “Reading Literary Fiction Improves ‘Mind-Reading Skills,’” said the online magazine ScienceDaily. “Now We Have Proof Reading Literary Fiction Makes You a Better Person,” asserted the Atlantic.

A better person. Wow. So what’s the basis for all of these incredible claims? How is it that the researchers were able to conclude, as they put it, that “literary fiction, which we consider to be both writerly and polyphonic, uniquely engages the psychological processes needed to gain access to characters’ subjective experiences”?

Let’s unpack this just a bit. In one of the experiments, which is representative of the other four, some of the participants were assigned one of three short stories deemed, by Castano and Kidd of course, to be literary fiction—“The Runner” by Don DeLillo, “Blind Date” by Lydia Davis, or “Chameleon” by Anton Chekhov. Others were given one of three Smithsonian magazine articles—“How the Potato Changed the World,” “Bamboo Steps Up,” and “The Story of the Most Common Bird in the World” (Spoiler Alert: It’s the house sparrow). According to the researchers’ own cherry-picked criteria, the works of fiction had to depict “at least two characters” and the nonfiction works had to be about “a nonhuman subject.” All of the participants were then administered a series of tests to determine their relative empathy and social adeptness—including, for example, one called “Reading the Mind in the Eyes,” in which one must choose from a series of adjectives to describe the emotions of people based on photographs of just their eyes. Those who read the literary fiction performed ever so slightly better.

Okay, time for a thought experiment. Imagine a study that purports to “show” that Ivy League students are more socially sensitive than students at public universities or students at private colleges not among the Ancient Eight. For the first experiment, researchers choose three Harvard students who exemplify, in their opinion, the best characteristics of that fine institution, and three students from the University of Michigan, again selected to represent the authors’ idea of what such students should be like. They then subject these six students to a battery of tests of empathy and social intelligence, and find that the three Harvard students score a bit better than the three Michigan students. What grand conclusions could you draw?

The short answer is none. It would be wildly inappropriate to conclude anything at all about Harvard students vs. Michigan students based on tests of three representatives of each set, hand-picked by researchers who admittedly wanted to find a way to support their pre-existing belief that Harvard students are emotionally superior. It is, of course, just as inappropriate to conclude anything about literary fiction vs. popular fiction or literary fiction vs. nonfiction, based on a comparison of a very small number of short excerpts selected by the researchers to be somehow typical or characteristic of the genre—especially given that the researchers chose the samples in an attempt to get exactly the results that they got, even requiring the nonfiction samples to be about a nonhuman subject.
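To see just how little a three-versus-three comparison can show, here is a minimal simulation sketch. The numbers are invented for illustration and have nothing to do with the study's data: two pools of texts with identical underlying effects, compared three at a time. Even with purely random selection and no cherry-picking at all, samples this small routinely produce sizable gaps in one direction or the other.

```python
import random

# A toy illustration with invented numbers: both "genres" draw their texts
# from the same distribution, so the true difference between them is zero.
random.seed(0)

def item_pool(n=100):
    # Each text's hypothetical effect on a subsequent empathy-test score.
    return [random.gauss(0, 1) for _ in range(n)]

literary = item_pool()
nonfiction = item_pool()

def mean(xs):
    return sum(xs) / len(xs)

# Repeatedly compare three randomly chosen texts from each pool.
gaps = []
for _ in range(10_000):
    gap = mean(random.sample(literary, 3)) - mean(random.sample(nonfiction, 3))
    gaps.append(gap)

# How often does such a tiny comparison show a gap of half a standard
# deviation or more, despite the pools being identical by construction?
big = sum(abs(g) > 0.5 for g in gaps) / len(gaps)
print(f"3-vs-3 comparisons showing a gap > 0.5 SD: {big:.0%}")
```

And that is the best case: hand-picking the three items to fit a hypothesis, as the researchers did, can only push the gap further in the preferred direction.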

Writing in Slate yesterday, Mark O’Connell offered an admirably skeptical take on the study and expressed ambivalence:

… about the question of whether reading literary fiction really does make you a better person—not just about what the answer might be, but whether the question itself is really a meaningful one to be asking at all. It implies a fairly narrow and reductive legitimation of reading. There’s a risk of thinking about literature in a sort of morally instrumentalist way, whereby its value can be measured in terms of its capacity to improve us.

The real question here, in my opinion, is why Science chose to publish a study with such obvious methodological flaws. The answer, alas, is that Science is very good at guessing which papers will get lots of press, a consideration that seems to drive its editorial decisions.

So go ahead and read all the Chekhov you want. Heck, read everything he’s ever written. But don’t expect to be a better person for having done so.

A version of this post originally appeared on Language Log.