Until recently, many Internet critics have feared that such personalization of the Internet may usher in a world in which we see only articles that reflect our existing interests and never venture outside of our comfort zones. Social media, with its never-ending flurry of links and mini-debates, has made some of these concerns obsolete. But the rise of “automated journalism” may eventually present a new and different challenge, one that the excellent discovery mechanisms of social media cannot yet solve: What if we click on the same link that, in theory, leads to the same article but end up reading very different texts?
How will it work? Imagine that my online history suggests that I hold an advanced degree and that I spend a lot of time on the websites of the Economist or the New York Review of Books; as a result, I get to see a more sophisticated, challenging, and informative version of the same story than my USA Today-reading neighbor. If one can infer that I'm also interested in international news and global justice, a computer-generated news article about Angelina Jolie might end by mentioning her new film about the war in Bosnia. My celebrity-obsessed neighbor, on the other hand, would see the same story end with some useless gossipy tidbit about Brad Pitt.
Producing and tweaking stories on the spot, customized to suit the interests and intellectual habits of just one particular reader, is exactly what automated journalism allows—and why it's worth worrying about. Advertisers and publishers love such individuation, which could push users to spend more time on their sites. But the social implications are quite dubious. At the very least, there's a danger that some people might get stuck in a vicious news cycle, consuming nothing but information junk food and having little clue that there is a different, more intelligent world out there. And the communal nature of social media would reassure them that they aren't really missing anything. Naturally, automated journalism could also be the next step in the evolution of much-hated content farms like Demand Media.
Consider what might happen if, as seems likely, big technology companies enter this business and displace small players like Narrative Science. Take Amazon. Its Kindle e-reader allows users to look up unknown words in the electronic dictionary and underline their favorite sentences; Amazon records and stores such information on its servers. This would come in handy if (when?) Amazon decides to build a personalized and fully automated news digest: After all, it knows what newspapers I read, what kinds of articles attract my attention, what sentences I tend to like, and what words I find puzzling. And I already own their device, where I can read such news digests—for free!
Given all this, the idea that greater automation could save journalism seems short-sighted. However, innovators like Narrative Science are not to blame; used narrowly, their technologies may actually save costs and perhaps even allow some journalists—provided they can keep their jobs!—to pursue more interesting analytical projects rather than rewrite the same story every week.
The real threat comes from our refusal to investigate the social and political consequences of living in a world where reading anonymously becomes a near impossibility. It's a world that advertisers—along with Google, Facebook, and Amazon—can't wait to inhabit, but it's also a world where critical, erudite, and unconventional thinking may become harder to nurture and preserve.
This article arises from Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.