And as it turns out, this is what we’re doing with Google and Evernote and our other digital tools. We’re treating them like crazily memorious friends who are usually ready at hand. Our “intimate dyad” now includes a silicon brain.
Recently, a student of Wegner’s—the Columbia University scientist Betsy Sparrow—ran some of the first experiments that document this trend. She gave subjects sentences of random trivia (like “An ostrich’s eye is bigger than its brain” and “The space shuttle Columbia disintegrated during reentry over Texas in February 2003”) and had them type the sentences into a computer. With some facts, the students were explicitly told the information wouldn’t be saved. With others, the screen would tell them that the fact had been saved in one of five blandly named folders, such as FACTS, ITEMS, or POINTS. When Sparrow tested the students, the people who knew the computer had saved the information were less likely to recall it themselves than the ones who were told the trivia wouldn’t be saved. In other words, if we know a digital tool is going to remember a fact, we’re slightly less likely to remember it ourselves.
We are, however, confident of where in the machine we can find it again. When Sparrow asked the students simply to recall whether a fact had been saved or erased, they were better at recalling the instances where a fact had been stored in a folder. As she wrote in a Science paper, “believing that one won’t have access to the information in the future enhances memory for the information itself, whereas believing the information was saved externally enhances memory for the fact that the information could be accessed.” Each situation strengthens a different type of memory. Another experiment found that subjects were remarkably good at remembering which folder contained a given factoid, even though the folder names were deliberately bland.
"Just as we learn through transactive memory who knows what in our families and offices, we are learning what the computer 'knows' and when we should attend to where we have stored information in our computer-based memories," Sparrow wrote.
You could say this is precisely what we most fear: Our mental capacity is shrinking! But as Sparrow pointed out to me when we spoke about her work, that panic is misplaced. We’ve stored a huge chunk of what we “know” in people around us for eons. But we rarely recognize this because, well, we prefer our false self-image as isolated, Cartesian brains. Novelists in particular love to rhapsodize about the glory of the solitary mind; this is natural, because their job requires them to sit in a room by themselves for years on end. But most of the rest of us think and remember socially. We’re dumber and less cognitively nimble if we’re not around other people—and, now, other machines.
In fact, as transactive partners, machines have several advantages over humans. For example, if you ask them a question you can wind up getting way more than you’d expected. If I’m trying to recall which part of Pakistan has experienced tons of U.S. drone strikes and I ask a colleague who follows foreign affairs, he’ll tell me “Waziristan.” But when I queried this online, I got the Wikipedia page on “Drone attacks in Pakistan.” I wound up reading about the astonishing increase in drone attacks (from one a year to 122 a year) and some interesting reports about the surprisingly divided views of Waziristan residents. Obviously, I was procrastinating—I spent about 15 minutes idly poking around related Wikipedia articles—but I was also learning more, reinforcing my generalized, “schematic” understanding of Pakistan.
Now imagine if my colleague behaved like a search engine—if, upon being queried, he delivered a five-minute lecture on Waziristan. Odds are I'd have brusquely cut him off. "Dude. Seriously! I have to get back to work." When humans spew information at us unbidden, it's boorish. When machines do it, it’s enticing. And there are a lot of opportunities for these encounters. Though you might assume search engines are mostly used to answer questions, some research has found that up to 40 percent of all queries are acts of remembering. We're trying to refresh the details of something we've previously encountered.
If there’s a big danger in using machines for transactive memory, it’s not about making us stupider or less memorious. It’s in the inscrutability of their mechanics. Transactive memory works best when you have a sense of how your partners' minds work—where they're strong, where they're weak, where their biases lie. I can judge that for people close to me. But it's harder with digital tools, particularly search engines. They’re for-profit firms that guard their algorithms like crown jewels. And this makes them different from previous forms of transactive machine memory. A public library—or your notebook or sheaf of papers—keeps no intentional secrets about its mechanisms. A search engine keeps many. We need to develop literacy in these tools the way we teach kids how to spell and write; we need to be skeptical about search firms’ claims of being “impartial” referees of information.
What’s more, transactive memory isn’t some sort of cognitive Get Out of Jail Free card. High school students, I’m sorry to tell you: You still need to memorize tons of knowledge. That’s for reasons that are civic and cultural and practical; a society requires shared bodies of knowledge. And on an individual level, it’s still important to slowly study and deeply retain things, not least because creative thought—those breakthrough ahas—comes from deep and often unconscious rumination, your brain mulling over the stuff it has onboard.
But you can stop worrying about your iPhone moving your memory outside your head. It moved out a long time ago—yet it’s still all around you.