Future Tense

An A.I. Competed for a Literary Prize, but Humans Still Did the Real Work

A.I. authors won’t be winning any prizes. Not for now, at least.

RedlineVector/thinkstock.com

The tech and science press was abuzz this week with reports that artificial intelligence had crossed a new threshold, crafting a story that managed to compete for a prize. As usual, this supposed advancement inspired a combination of amazement and handwringing. “A Japanese A.I. Wrote a Novel, Almost Wins Literary Award,” one typical headline read. Meanwhile, Digital Trends worried that “no occupation is safe” if an algorithm could compete in such a contest.

Look a little closer, however, and this story isn’t about the rise of the machines—it’s a lesson on the limitations of contemporary A.I. technology. Far from replacing humans, A.I. is actually working with them, potentially complementing the creative process but not yet changing it outright.

The first English-language accounts of this fictionally inclined A.I. came via the Japan News, which reported that several teams submitted novels to the Nikkei Hoshi Shinichi Literary Award (a prize in its third year) that had been “coauthored” by computers. That “coauthored” caveat is important, especially when you learn that one of the texts in the competition was titled The Day a Computer Writes a Novel. The idea that a computer “wrote” a novel about a computer evinces just how much humans involved themselves. While a monkey at a typewriter might eventually write Hamlet, it probably wouldn’t end up writing a play about monkeys writing Hamlet first, which is what seems to have happened here.

Much has been made of the text’s final sentences—the only bit of this computer-“written” prose that has appeared in English thus far: “The day a computer wrote a novel. The computer, placing priority on the pursuit of its own joy, stopped working for humans.”

“Did you just get goosebumps?” Bustle asks, before going on to admit that “it’s not as scary as it sounds.” And part of the reason that it’s not “scary,” of course, is that these lines were clearly written, or at least directed, by human hands. Citing “one of the professors who worked on the project,” the Los Angeles Times reports that “The level of human involvement in the novels was about 80%.”

As the Japanese publication Asahi Shimbun explains, the research team first wrote a novel of their own and then broke it down into its component parts. Only then did the A.I. involve itself, arranging the parts it had been given to create “another story similar to the sample novel,” building it from words, phrases, characters, and plot outlines that had been fed to it. The Los Angeles Times claims this means the computers “did the hard work,” which is true only if you consider plagiarism “hard.”
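To make that division of labor concrete, here is a minimal, purely illustrative sketch in Python of the general fill-in-the-blanks approach the coverage describes. It is not the team’s actual code; the fragment lists and the assemble_story function are invented for the example. Every sentence comes from a human, and the program only chooses among them and strings them together.

```python
import random

# Hypothetical fragments standing in for the human-written sample novel.
# In the real project the researchers supplied the words, phrases,
# characters, and plot outlines themselves; everything below is invented.
OPENINGS = [
    "It was a rainy morning when the machine woke up.",
    "The lab was quiet except for the hum of the cooling fans.",
]
MIDDLES = [
    "The computer tried, and failed, to understand why humans write at all.",
    "Line by line, it rearranged the sentences it had been given.",
]
ENDINGS = [
    "The computer, placing priority on the pursuit of its own joy, stopped working for humans.",
    "And so the story ended exactly where its human authors had decided it would.",
]

def assemble_story(seed=None):
    """Pick one human-written fragment per slot and join them in order."""
    rng = random.Random(seed)
    return " ".join(rng.choice(slot) for slot in (OPENINGS, MIDDLES, ENDINGS))

if __name__ == "__main__":
    print(assemble_story(seed=42))
```

Everything creative in a toy like this lives in the lists; the “author” is just a random-choice loop.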

None of the English-language articles on the topic provide many details about the Nikkei Hoshi Shinichi Literary Award screening process, but it’s worth noting that an A.I. text only made it past the first of four rounds. Given that 1,450 novels were submitted for initial screening (11 of which were computer creations), it seems unlikely that the first round judges spent much time with their content—I’d wager that they were simply evaluating whether applicants had filed their submissions properly. (This actually is a task that, ironically, is better suited to machines than people.) In this sense, it seems more accurate to say that an A.I. assembled something that formally resembles a novel than to suggest that it has mastered novelistic form.

You can get a small sense of how such assemblages work from Magic Realism Bot. A delightful Twitter creation, it spits out short descriptions of possible imaginary tales by recombining pre-existing narrative forms and stories. “An admiral saves up his money in order to buy three wishes,” one exemplary recent tweet reads. Magic Realism Bot works so well partly because a certain expectation of surreality is baked into its narratives, which helps counteract the inevitable bafflement of recombinatory computer creations. Indeed, literary algorithms almost always seem to work best when they’re producing the kinds of texts, such as contemporary poems, in which we expect to find confusing elements. Lacking a theory of mind—a set of beliefs about what others are thinking—these programs can’t really predict what it will be like to read their output. Accordingly, they can only work from what they already know, which means that they’re bound to be slightly incoherent without human intervention.
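A bot in this spirit is easy to approximate. The sketch below is not Magic Realism Bot’s real code, just an assumed single-template generator; the word lists and the magic_premise function are made up for illustration.

```python
import random

# Invented word lists; the real bot's vocabulary and templates belong to
# its creator and are not reproduced here.
CHARACTERS = ["An admiral", "A cartographer", "A retired magician"]
ACTIONS = ["saves up his money", "digs a tunnel under the sea", "learns to read clouds"]
GOALS = ["in order to buy three wishes", "so that winter will end", "to forget a single afternoon"]

def magic_premise(rng):
    """Fill one fixed sentence template with randomly chosen parts."""
    return f"{rng.choice(CHARACTERS)} {rng.choice(ACTIONS)} {rng.choice(GOALS)}."

if __name__ == "__main__":
    print(magic_premise(random.Random()))
```

The surreal register does a lot of work here: because readers expect the combinations to be strange, the seams between randomly chosen parts read as style rather than error.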

Despite all this, news of The Day a Computer’s “success” may have resonated in part because it came on the heels of a considerably more impressive accomplishment: the triumph of Google’s AlphaGo in its five-game match with Go grandmaster Lee Sedol. Far more complex than chess, Go was long considered a uniquely human game, making it a final frontier for A.I. AlphaGo’s very real victories are thus a significant accomplishment. Its machine-learning algorithms were, however, trained on the play of human experts, incorporating their best moves into its own repertoire—showing that however much humans may be coming to rely on A.I., A.I. still relies on humans even more.

If A.I. really does start mucking with the creative process, then, it seems most likely that it will function in a primarily collaborative capacity. If you want an example of what this could entail, consider the increasingly sophisticated predictive typing functions of our phones, which have gone beyond merely correcting our past mistakes to actively anticipating our next choices.
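For a rough sense of how “anticipating our next choices” can work, here is a toy sketch, assuming nothing about any phone’s actual software: it simply suggests whichever words most often followed the previous word in the text it has already seen, which is the familiar-patterns idea in its crudest form.

```python
from collections import Counter, defaultdict

def build_model(text):
    """Count, for each word, which words followed it in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def suggest(model, prev_word, k=3):
    """Suggest the k words that most often followed prev_word."""
    return [word for word, _ in model[prev_word.lower()].most_common(k)]

if __name__ == "__main__":
    sample = "the computer wrote a novel and the computer stopped working for humans"
    model = build_model(sample)
    print(suggest(model, "the"))  # -> ['computer']
```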

There are, of course, dangers to such artificially intelligent systems: By encouraging us to embrace familiar patterns, they may strip away some of the generative weirdness that makes the best literature truly novel. Run the fractal dream-discourse of Finnegans Wake through a spell-checker, for example, and you’ll probably just end up more confused. For now, at least, that’s fine: As in other endeavors, A.I. may function as a co-worker, but it’s unlikely to really equal humans any time soon.