Future Tense

The Abracadabra A.I.

Can artificial intelligence make magic more magical? And what does it mean if it can?

Illustration by Alex Eben Meyer

The capacity to be deceived and delighted by a magic trick seems like a really human thing, the kind that requires a beating heart and a brain wired in a certain way to enjoy. Logically, we know that the coin can’t simply disappear into thin air or the woman levitate, but magicians seem to be able to suspend the laws of physics in ways that, when done right, are just wondrous to behold. (Despite how adorable it is when dogs are dumbfounded by the disappearing-treat trick, they don’t derive the same enjoyment from being deceived. Neither do sheep.) And creating a magic trick that effectively deceives and delights—that feels pretty human, too.

Magicians typically craft and refine their tricks through trial and error, designing an experience optimized for maximum wonder. But could a machine do a better, faster job? And if so, what does that mean for other magic-making professions? (After all, journalists are already being replaced by robots.) In a paper that appeared in the November 2014 issue of Frontiers in Psychology, researchers from the computer science department of Queen Mary University of London examined precisely that—what happens when “human intelligence is replaced or assisted by machine intelligence” in “the creation and optimization of magic tricks.” So, can machines make magic?

“We kind of realized that some magic tricks—not all—are based on mathematical principles that can be easily modeled on a computer,” Howard Williams, a Ph.D. student at Queen Mary’s School of Electronic Engineering and Computer Science and co-creator of the project, told me. “And once we realized that, we started to think about optimization, and computers are really good at that.”

But anyone who’s seen a magic trick knows that its success is not just a matter of getting the math right; magicians also rely on an intuitive understanding of human psychology and deft manipulation of that psychology in creating illusions (as well as, of course, showmanship). “Doing a magic trick for a spectator is not a mathematical experience. … So we started to think about how people enjoyed that deception,” Williams continued. The team decided to take two existing magic tricks—a “vanishing” jigsaw puzzle in a style that was popular in the late 1800s, like these, and a mind-reading card trick—break them down into the mathematical and psychological elements that make them work, and, using artificial intelligence, rebuild them in hopefully more magical ways.

The researchers used a genetic algorithm, a problem-solving search heuristic that mimics natural selection while plowing through masses of data. Basically, this A.I. learns from its mistakes and does better next time. Genetic algorithms have been used before to optimize entertainment experiences; in one first-person shooter game, for instance, they were used to design ever-more-difficult enemies that adapt to the player’s weaknesses in real time. But this appears to be the first time a genetic algorithm has been used to improve magic tricks.
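To get a rough feel for how a genetic algorithm works, here is a minimal, self-contained Python sketch. Everything in it (the toy “design,” the fitness function, the population size) is invented for illustration; it is not the researchers’ code, which worked with real trick parameters and spectator data.

import random

TARGET = [0.2, 0.8, 0.5, 0.9, 0.1]   # a made-up "ideal" trick design

def fitness(candidate):
    # Higher is better: negative squared distance from the made-up target.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Randomly nudge some genes.
    return [c + random.gauss(0, 0.1) if random.random() < rate else c
            for c in candidate]

def crossover(a, b):
    # Splice two parents together at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, generations=200):
    # Start from random candidates, then repeatedly select, breed, and mutate.
    population = [[random.random() for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())   # drifts toward TARGET after enough generations

The important idea is the loop: score each candidate, keep the better half, and breed slightly altered copies, so that the population “learns from its mistakes” generation by generation.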

In the case of the jigsaw puzzle, the algorithm exploited the geometric principles behind the puzzle’s structure and the psychological quirks that make it work, such as the fact that people perceive vertical lines as longer than horizontal ones, and that a line can change in length significantly before a viewer will notice. The result was a clever little puzzle called “The 12 Magicians of Osiris,” which, when first assembled, shows a series of 12 lines, but when reassembled in what seems to be the same way, shows only 10. Where the two lines went is down to a bit of geometric magic. The puzzle was also accompanied by a bit of window dressing, devised by the researchers, in the shape of a narrative about an Egyptian king and protective spells formed by the lines.
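To see the geometric side of that in miniature, here is a short Python sketch of the conservation principle behind vanishing-line puzzles: the total length of ink never changes, it just gets redistributed across fewer, slightly longer lines. The numbers and the “noticeable change” threshold below are invented for illustration and are not taken from the puzzle or the paper.

LINES_BEFORE = 12          # lines visible in the first assembly
LINES_AFTER = 10           # lines visible after reassembly
LINE_LENGTH = 100.0        # arbitrary units
NOTICEABLE_FRACTION = 0.25 # hypothetical: viewers miss growth below 25%

total_ink = LINES_BEFORE * LINE_LENGTH
new_length = total_ink / LINES_AFTER
growth = new_length / LINE_LENGTH - 1.0

print(f"Each remaining line grows by {growth:.0%}")
print("Probably unnoticed" if growth < NOTICEABLE_FRACTION else "Probably noticed")

Under these made-up numbers, each surviving line grows by 20 percent; the actual puzzle’s cuts and rearrangements have to keep every individual change subtle enough to slip past the viewer, which is exactly the kind of constraint the algorithm was asked to optimize within.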

Here’s the A.I. version:

And here’s an old-fashioned version:

The mind-reading card trick is a standard “Is this your card?” act, except that it’s done using an Android app. The magic here is that the card—a real, physical card—is identified using the fewest questions possible and, in a bit of neat tech, shows up on the phone’s screen.
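Identifying one card out of 52 with as few yes/no answers as possible is, at bottom, a binary-search problem: each answer should halve the remaining candidates, so six questions always suffice. The Python sketch below shows only that bare idea; it is not how Phoney actually phrases or disguises its questions.

import math

RANKS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
SUITS = ["hearts", "diamonds", "clubs", "spades"]
DECK = [f"{r} of {s}" for s in SUITS for r in RANKS]

def guess_card(is_in_group):
    # Repeatedly ask whether the secret card is in the first half of the
    # remaining candidates, keeping whichever half the answer points to.
    candidates = list(DECK)
    questions = 0
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        questions += 1
        candidates = half if is_in_group(half) else candidates[len(half):]
    return candidates[0], questions

# Simulate a spectator who truthfully answers about a secret card.
secret = "7 of spades"
card, asked = guess_card(lambda group: secret in group)
print(card, "found in", asked, "questions; ceil(log2(52)) =", math.ceil(math.log2(52)))

A real trick, of course, also has to hide the fact that questions are being asked at all, which is where the psychological half of the design comes in.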

Here’s a man-made card trick:

And here’s what the A.I. came up with:

Whether the tricks are actually “better” is debatable—Williams acknowledged that it was difficult to design an accurate metric to measure a spectator’s enjoyment of a trick. Truthfully, that’s probably the biggest problem with the paper. In rating the magical jigsaw, spectators were not actually shown other versions of the trick (and there are some beautiful, if a bit racist, vanishing puzzles from the late 19th century that are more interesting). Rather, the researchers recorded spectators’ responses to different versions of the same magic-puzzle trick, including ones without a narrative explanation of what was happening, to rate elements of the magical experience. Those ratings were then compared with the same spectators’ ratings of several classic magic tricks, unrelated to the jigsaw puzzle, to get a sense, however vague, of how the A.I. version stood up; “clever,” “cool,” and “how?” were among the more popular responses. Crucially, however, the question of whether this particular magic jigsaw is better than others was not answered.

But by another metric, the jigsaw puzzle was fairly well-received: The puzzle, which was created on a laser cutter by Williams, debuted at Davenports Magic in London, the oldest family-run magic shop in the world; there was enough interest that Davenports put it on sale for £19.99, or about $30, and has since sold out of the roughly 40 puzzles. I saw the last one at the shop. It was neat, but I think the explanation of what I was supposed to see required a little more sparkle—the description, which is an important part of selling a magic trick, felt a little flat. Patter matters. This certainly tallies with the researchers’ findings that magical experience was bolstered by narrative. A version of the puzzle was also archived at the library of the Magic Circle, that august international society of magicians.

The card trick app, Phoney, which is available for Android on the Google Play store, was shown off for spectators at a science festival and scored better than the magical jigsaw among viewers. However magical it was to the viewer—and, like a lot of card tricks, it was—it was disappointingly complex for the extreme-beginner would-be magician. (That would be me: I found the 11 pages of instructions that came with it to be too much.)

But really, whether or not the tricks are better is kind of beside the point. Showing a machine how to make magic tricks requires a deeper understanding not only of why magic tricks work but also of what makes them magical—and that’s the really interesting bit. Technologists, psychologists, scientists, and others are only recently waking up to the idea that magical thinking—the ways that magicians see and understand the world—could have important and unexpected applications outside of freeing rabbits from hats and lengths of knotted silk handkerchiefs from sleeves. Which means that despite the frisson of A.I. in this story, we’re actually circling back to figuring out what it means to be human.

Gustav Kuhn, a professor of psychology at Goldsmiths, University of London, who edited the A.I.-assisted magic trick paper and is a magician himself, believes magicians know a lot more about the brain than modern psychology gives them credit for. “Magic only really works if your intuition about how the brain works is correct,” said Kuhn, noting that magicians famously rely on misdirection for their illusions. “Misdirection is so powerful, it’s pretty much equivalent to you just closing your eyes. You think that you see a lot of the stuff around you, but you really just don’t.” And that, he says, has implications in areas as diverse as road safety and the reliability of eyewitness testimony in legal proceedings.

So for Williams, the outcome of his experiments isn’t just a better magic trick or even two. “What we’ve really done is optimize deception,” he said, and he thinks deception has an interesting, if somewhat counterintuitive, real-world application: informing the design of A.I. computational models that adapt to “remov[e] accidental deception.” “Accidental deception” would be places in a user interface that are unintentionally vague, or that may confuse the user; Williams gave the example of a user interface on a device used by medical staff in a hospital, a place where clarity of design could save crucial time. That’s just one possibility; the applications, he believes, are many, and the framework of A.I. algorithms married to data about human cognition should be flexible.

But in order for magical thinking to have more real-world applications, magicians themselves may need to be a bit more open—which could be tough for a field that relies on mystery. Bill Davenport, owner of Davenports and a magician himself, said, “There’s a flow of information from outside magic in, but quite often, there isn’t a flow of information out of magic.”

Still, it’s certainly happening: Open-source magicians like Marco Tempest, a fellow at MIT’s Media Lab, have been bringing magical thinking into other disciplines for the better part of a decade. Davenport recently gave a talk for local sports coaches showing how magical thinking—misdirection, primarily—could help them on the field; part of the reason he agreed to work with Williams was that he finds the idea of applying magical thinking to problems outside of magic fascinating.

And anecdotally, at least, magicians aren’t feeling particularly threatened by A.I.-assisted magic. Davenport told me that most of the people who saw the jigsaw in the shop were primarily interested in its unique A.I. provenance, not the magic. “I wouldn’t say it’s groundbreaking from a magical perspective, but it was certainly interesting,” he said. And though Davenport didn’t discount the idea of robot magicians in the future, he noted, “I think people prefer to be fooled by a person. When they’re fooled by a computer, it takes away some of the magic.”

So while robots may be edging other professions out of their jobs—though they’re not performing universally well, as the Guardian’s recent efforts to automate a news story showed—it doesn’t seem too likely that magicians are next. But that’s sort of not the point: As a creative effort in scientifically analyzing the fundamentals of magic, this is a pretty good one.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.