When Elizabeth Loftus began to plant false memories to test their therapeutic benefits, the memories seemed innocuous. They weren't about families or politics. They were just about food. You went into the experiment thinking you'd always loved strawberry ice cream, and you came out thinking it had made you sick. Big deal.
But successful ideas have a way of spreading. Soon, Loftus was trying the same technique on alcohol. And she was starting to think about tampering with the kinds of memories whose authenticity had once anchored her career: memories of crime and abuse.
A decade earlier, when she first wrote and testified against recovered memories of sexual abuse, her concern had been that such memories might be false. They could send innocent people to jail. But the more she looked into these cases, the more she noticed a second problem: Regardless of their truth or falsehood, memories of incestuous abuse were hurting the women who had recovered them. Their families were torn apart. Many were losing their jobs, ending their marriages, contemplating suicide, or mutilating themselves.
Her objections to recovered-memory therapy grew. The likely falsity of the memories was just the first problem. The second problem was that even if some of the memories were true, they could still be harmful. And even if they were true and helpful, the therapy might still be harmful on balance. Recovered-memory therapy, like any drug or medical procedure, should be judged by its total costs and benefits across the population, Loftus argued. If it hurt too many patients, its use might be unjustified "even if the benefits for actual victims can be shown." Perhaps, despite the truth, it was better not to remember.
By the end of the 1990s, science was developing new ways to deaden such memories. A study published in 2000 showed that in rats, fearful recollection of an electric shock could be blocked by injection of a drug called anisomycin. Loftus was intrigued. "We're on the brink of being able to figure out how you might accomplish something like memory engineering," she suggested. Patients could be prompted to recall traumatic incidents, she speculated, and drugs could be administered to prevent the memories' reconsolidation.
In 2002 and 2003, studies indicated that another drug, propranolol, could prevent or reduce post-traumatic stress disorder in humans. Adam Kolber, a law professor at the University of San Diego, monitored the research and talked to Loftus about its legal and moral implications. She was fascinated. She went back to her lab determined to get involved. She started with attitudinal research, asking people whether they would take a memory-dampening drug after being mugged and beaten. Nearly half wanted the right to take the drug, but only 14 percent said they would do it. She was surprised. If she had endured such an assault, she decided, she would take the drug.
She understood propranolol's legal implications. Its main effect was to dampen the emotional content of traumatic memories. But to a lesser extent, it also dampened their factual content. A victim who took the drug might lose her ability to testify convincingly against her assailant. Even in this circumstance, Loftus concluded, memory dampening should be permitted. The overriding principle, she argued, was freedom of choice.
But would freedom of choice survive therapeutic deception? Would it survive social pressure to fix unhealthy memories and habits? Loftus and her colleagues were already presenting memory therapy as an alternative to coercion. In their article on fattening food, they warned that unless behavioral scientists stemmed the obesity crisis, laws might be imposed to induce healthier eating, just as "seatbelt laws were imposed upon us when people were not using them on their own." In their article on alcohol, they offered memory doctoring as an alternative to electric shock and other "invasive" aversion therapies.
Society had an obvious interest in purging traumatic memories. These memories, Loftus and her coauthors noted, caused "significant costs to sufferers, their families, and society," such as traffic deaths and reduced productivity. A similar case could be made for erasing bigotry, an idea that had interested Loftus for years. Memory doctors might be "useful for curing societal ills such as social prejudice," she suggested in 2001. Prejudice might be based "on a few incidents involving a unique group of people, so the memory doctor could wipe out or alter memory of these incidents."
Loftus never endorsed such treatments without the patient's consent. But she was stretching the definition of consent to fit memory therapy. To plant false food memories without violating informed consent, she and a colleague proposed in 2009 that
a therapist might ask for blanket permission in an early session to use various techniques to bring about a positive outcome—a list that might include the planting of false beliefs. If permission is granted, then, much later, when the permission session is long forgotten, the suggestive technique might be attempted.
The proposal sounded like one of Loftus' deception experiments. But it wasn't an experiment. It was her proposed procedure for obtaining consent to the deception. Using her expertise in memory's vulnerabilities, she was figuring out how to manipulate people into authorizing their manipulation. In this way, a memory doctor could justify herself.
[Photo caption: Loftus answering a question at the Center for Inquiry's World Congress, April 2009]
Maybe the doctor would stop at fixing the patient's eating behavior. Maybe she would move on to trauma or prejudice. The patient, having forgotten the initial mention of memory manipulation, would be none the wiser. And once memories were altered, there was no going back. In this respect, Loftus theorized, the brain worked like a computer: "You call up a file, edit it and then put the revised file back. The original is lost."
Worse, there was no way to distinguish altered memories from originals. For three decades, Loftus had searched for telltale signatures that might help judges and juries. She tried everything: confidence, vividness, emotionality, brain scans. She found some differences on average, but nothing that reliably identified a false memory. In 2009, she and her colleagues concluded that "it might be virtually impossible to tell reliably if a particular memory is true or false without independent corroboration."
And where would the corroboration come from? Documents? Photographs? Video? With digital technology, such evidence could easily be altered or fabricated. Two weeks ago, Slate did just that, editing several images to plant false political memories. (The experiment ran here; you can read the results here and view a slide show of the altered images here.) Even DNA could be faked: In 2009, scientists reported that they had manufactured a blood sample sufficiently incriminating to fool a forensics lab. Altered evidence, in turn, could alter memories, as Loftus herself had proved. False evidence and false memories would corroborate each other.
This wasn't the future Loftus had envisioned when she first fantasized about doctoring memories. But seeing the future was never her forte. She didn't foresee the memory doctors of the 1990s, either. In 2002, she wrote,
While musing about the hypothetical memory doctor in 1980, I could not have known that a version of the memory specialist was in the making. These "repressed memory therapists" would go out and prospect for early childhood memories of trauma, and in the process they inadvertently created false memories of the most unimaginable kind. The memory doctors I had speculated about in 1980 were supposed to use their talents to help people. The memory doctors of the 1990s went in the wrong direction.