The citizen’s guide to the future.
July 11, 2017, 7:30 AM
FROM SLATE, NEW AMERICA, AND ASU

Humans and Machines Making Beautiful Music Together

Why we should encourage computer-generated creativity.


Photo illustration by Slate. Images by HACK_CG and ajma_pl/iStock.

The human-machine dynamic is often framed in terms of conflict. At least that’s how it feels these days. We’re fighting the machines everywhere—certainly in the movies, but also on the Go board, on the Jeopardy! stage, and in the workplace. You can’t even make a travel reservation or read your email without wondering, Was that sent by a bot or a real person?

Enter the Turing test, the most famous human-machine gotcha game. In it, a human at a keyboard engages in what is effectively a chat with someone or something. The “test” is whether the human can figure out what’s on the other end—person or program? If a program scams its interlocutor into believing it is a human, then we are supposed to agree that the machine running it has intelligence. Consciousness, here we come!


Well, not quite. For all that machines can accomplish, creativity has often been held out as the unconquerable boundary of the human-machine divide. We wanted to find out whether that is actually true. That’s why in 2016 we initiated the Turing Tests in the Creative Arts, a collection of challenges probing the creative potential of machines, sponsored by the Neukom Institute for Computational Science at our home institution of Dartmouth College, which is also where the term artificial intelligence was coined in the 1950s. Specifically, we were interested in exploring to what extent machines could create, in specific contexts, artwork indistinguishable from human-generated work. PoetiX asked for systems able to produce “humanlike” sonnets in response to a noun-phrase prompt. DigiLit asked for humanlike short stories. Our Algorhythms (sorry!) challenge was to mix a dance set indistinguishable from one delivered by a human DJ. The results were interesting and mixed: Judges picked out the machine sonnets and stories as technically correct but “not being about anything.” Some of the dance sets, however, did indeed pass as human!

But perhaps “Can machines produce art that passes as human?” is the wrong kind of question. The testing context makes Turing’s challenge feel adversarial—as though the programmer (let’s not forget that deep down an actual person or group of people wrote the program) has prepared for a battle of wits with the human. A more useful and interesting framing might be one of collaboration, which is arguably closer to the conversational dynamic. In a conversation we “dance around topics,” we “meet someone where he is,” we “draw someone out.” We engage in a good “back and forth.” A skilled conversationalist responds as well as initiates and even knows when to be quiet. (In 1935’s Bride of Frankenstein, the monster effectively passed a Turing test with a blind man by grunting his way through a conversation.) In short, conversation is not (generally) rhetorical target practice but rather a collaborative creative act. Passing the Turing test means that a machine can be a partner with a human in that collaboration—and that’s what we take as a sign of intelligence.

The collaborative framing is increasingly important in our human-machine future, especially in the workplace. The fact is that both species (!) bring something to the table, and one of the most important questions we (humans) face is how we can work productively and rewardingly with, as opposed to instead of, machines.

So, in this year’s Turing Tests in the Creative Arts, we asked contestants to take on the challenge of human-level collaboration.

Our AccompaniX contest solicited programs that generate an expressive musical accompaniment to a human performance of a given melody. DanceX challenged participants to create an animated dance figure to accompany a motion-captured human dance performance. Entrants were given only 72 hours to respond to a test piece released for competition purposes, and the algorithms needed to accompany the human performer effectively in real time (while accounting for the latency of a computation-intensive algorithm). We received four entries for these very challenging tasks—three for AccompaniX and one for DanceX. We also called once again for machine sonnet and short-story generation. (For the latter, we modified the original form of the challenge and asked for systems that could complete a short story rather than generate one entirely from scratch.) Further details can be found on the competition web page.


To determine the winners, 60 judges—recruited through a call for participation on computer-music and computer-graphics mailing lists—rated the duet performances in a blind online survey on a scale from 1 (bad) to 5 (excellent), according to three criteria: musicality, interactivity, and naturalness. In each case, a performance created by a human was hidden among the computer-generated entries. The human accompaniments scored about 3.5 (out of 5) on each criterion, meaning a machine entry would have to beat that threshold to pass the Turing test.
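The arithmetic behind this judging scheme is simple enough to sketch. Here is a minimal illustration, assuming the rule described above: average each entry’s ratings per criterion and compare against the hidden human baseline. (The entry names and ratings below are hypothetical, not the competition’s actual data.)

```python
# Sketch of the blind-survey scoring described above.
# All ratings and entry names are hypothetical examples.
from statistics import mean

CRITERIA = ["musicality", "interactivity", "naturalness"]

# Each entry maps a criterion to the list of 1-5 ratings it received.
ratings = {
    "human":   {"musicality": [4, 3, 4], "interactivity": [3, 4, 3], "naturalness": [4, 3, 4]},
    "entry_a": {"musicality": [4, 4, 3], "interactivity": [4, 3, 4], "naturalness": [3, 4, 4]},
}

def average_scores(entry):
    """Mean rating per criterion for one entry."""
    return {c: mean(ratings[entry][c]) for c in CRITERIA}

def passes_turing_test(entry, baseline="human"):
    """An entry 'passes' if it meets or beats the hidden human's mean on every criterion."""
    human = average_scores(baseline)
    machine = average_scores(entry)
    return all(machine[c] >= human[c] for c in CRITERIA)

print(passes_turing_test("entry_a"))  # True for this hypothetical data
```

The same idea scales to any number of judges or entries; the human performance is just one more row in the table, anonymized until the averages are computed.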

We announced the results on June 20 in Atlanta at the Music Metacreation workshop, part of the International Conference on Computational Creativity. The single DanceX entry was interesting but not convincing: The machine-generated animation averaged 1.8 (out of 5), compared with 3.3 for the human motion-captured animation. You can compare the two animations here.

AccompaniX, however, did have a winner: Christopher Raphael’s “Music Plus One,” which scored 3.6, slightly above the human accompaniment’s 3.5. Chris is a professor of informatics and adjunct professor of cognitive science at Indiana University. He’s a recognized leader in music information processing and artificial intelligence, as well as an accomplished oboist. Listen to his winning expressive accompaniment to the Irish folk song “Wild Geese” in the player below. (If you don’t see an audio player, click here.)

The two entries for PoetiX this year were again very interesting but not yet at human level. Two separate panels of 18 and 12 judges (drawn from our mailing lists and the local Neukom Institute community) were each presented with a distinct corpus mixing human-written sonnets with sonnets generated by the two machine entries. This time, judges were asked to rate their degree of confidence in ascribing human or machine origins to each sonnet. Here is the output of the first-prize-winning submission from Charese Smiley and Hiroko Bretz of Thomson Reuters, generated from the prompt “gate”:


And be very careful crossing the streets.
How fair an entrance breaks the way to love!
Left, doors leading into the apartments.
Just then a light flashed from the cliff above.

The fields near the house were invisible.
Objects of alarm were near and around.
The window had only stuck a little.
From the big apple tree down near the pond.

The large cabin was in total darkness.
Come marching up the eastern hill afar.
When is the clock on the stairs dangerous?
Everything seemed so near and yet so far.

Behind the wall silence alone replied.
Was, then, even the staircase occupied?


The DigiLit challenge was perhaps too challenging—no one entered. A more detailed description of the submissions can be found on the contest results webpage.

Of course, the goal of these kinds of challenges is not to replace the human producers of these art forms. The collaborative aspect of these creative Turing tests orients the challenges in a more optimistic direction. In fact, one of the guiding principles in the computational-creativity community is to enhance, not diminish, human creativity. We also believe that machine-generated art should be judged on its own terms. As this year’s results show, some artist-programmers create machine accompanists that are humanlike, and others don’t yet.

Make no bones about it: Our future will have machines in it. That isn’t a bad thing. After all, they were in our past, too, and one way to view human evolution is as a co-evolution with machines in all contexts, from arrowheads to automata, from the Rhind Papyrus to the Perceptron. The genie was let out of the bottle long ago, and arguably the stability of society depends on our finding ways to work creatively with machines while remaining both vigilant about and open to their potential. Christopher Raphael’s work shows that it’s possible for us to make beautiful music together. Let’s hope that’s true in other contexts, too.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.


Dan Rockmore is professor of mathematics and computer science at Dartmouth, where he directs the Neukom Institute for Computational Science. He is also a member of the external faculty of the Santa Fe Institute.

Michael Casey is professor of music and computer science at Dartmouth. He writes music, creates audio software, and conducts research on music and the brain.