Machine learning hobbyist Janelle Shane talks about her wacky neural nets.

A Scientist Tried to Teach Her Computer to Tell Knock-Knock Jokes. Things Got Weird.


The citizen’s guide to the future.
Aug. 9 2017 9:00 AM
FROM SLATE, NEW AMERICA, AND ASU

Out of the Loop

Machine learning hobbyist Janelle Shane talks about artificial intelligence, algorithmic biases, and why it’s funny when computers mess up.

Photo illustration by Natalie Matthews-Ramo. Photos by esvetleishaya/Thinkstock and Thinkstock.
When Janelle Shane made a neural network to come up with knock-knock jokes, it became fixated on telling jokes about cows with no lips.


Like many of the other terms that crop up in conversations about artificial intelligence, neural network, which refers to code designed to work like a brain, can be conceptually intimidating. Janelle Shane, however, makes the kind of neural networks that go viral. Her quirky creations autonomously stumble and grumble as they attempt to come up with names of Star Wars characters, pick-up lines, and even recipes. Shane rightly warns that you should try the output of that last algorithm “at your own risk,” though there’s little danger that any human would attempt to: The network’s recipe for Beothurtreed Tuna Pie, for example, includes such bafflingly unappetizing ingredients as “1 hard cooked apple mayonnaise” and “5 cup lumps; thinly sliced.”

Shane—an industrial research scientist with a background in laser science, electrical engineering, and physics—describes herself as a hobbyist when it comes to machine learning. She thinks of her work in the field as a form of “art and writing.” Nevertheless, the output of her networks is typically silly and charming in equal measure, partly because it often fails spectacularly. Over at Ars Technica in May, Annalee Newitz discussed Shane’s attempt to make a neural network that could invent paint colors. Here at Slate, I’ve discussed Shane’s attempt to make a computer come up with Dungeons and Dragons spells.


Recently, Shane came back to my attention when she tweeted about a network that she was trying to train to tell knock-knock jokes, only to have it go hilariously wrong. She had appended the hashtag #shutdowntheAI, a mostly jokey label that programmers often use to tell stories of software that comes out stupider than its creators had hoped.

I called Shane to talk about her efforts. She discussed what’s going on under the hood, what her creations might teach us, and why it’s so funny when neural networks go bad.

Can you tell me a little about what’s going on under the hood with these systems?

In traditional programming, you’ve got a human programmer that’s telling the computer rules about data. So, for example, if you were teaching a computer to write knock-knock jokes, you might tell it, You must always start with “knock-knock.” This must then be followed by “who’s there.” These are the words you can change. Here’s a list of words you can choose from.
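To make the contrast concrete, here is a minimal sketch of the rule-based approach Shane describes. Everything in it—the fixed template and the little word list—is invented for illustration; the programmer writes the rules, and the computer only fills in blanks.

```python
import random

# A hand-written rule set: the programmer fixes the joke's structure
# and supplies the only words the computer is allowed to vary.
SETUPS = {
    "Lettuce": "Lettuce in, it's cold out here!",
    "Boo": "Don't cry, it's only a joke!",
    "Cow says": "No, a cow says moo!",
}

def knock_knock(name, punchline):
    # Every joke follows the same fixed, human-authored template.
    return (
        "Knock knock.\n"
        "Who's there?\n"
        f"{name}.\n"
        f"{name} who?\n"
        f"{punchline}"
    )

name, punchline = random.choice(list(SETUPS.items()))
print(knock_knock(name, punchline))
```

The computer here can never surprise you: it can only ever produce jokes the programmer already anticipated.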


When you get into machine learning, when you get into neural networks, it’s the computer that’s making the rules. The neural network gets a big data set—say, a couple thousand knock-knock jokes—and looks at this data set over and over again. It makes its own predictions, makes rules. It will figure out for itself that “knock-knock” comes first, followed by “who’s there?” And it will come up with its own rules for what valid things could be found on the other side of that door.
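A real character-level neural network is beyond a few lines of code, but the core idea Shane describes—the computer deriving its own rules for what character comes next by looking at examples, rather than being told—can be sketched with a simple letter-pair statistic. The two-joke "data set" below is invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Tiny "data set": the model sees only examples, never explicit rules.
corpus = [
    "knock knock. who's there? cow. cow who? cow says moo!",
    "knock knock. who's there? boo. boo who? don't cry!",
]

# Learn, by counting, which character tends to follow each character.
following = defaultdict(Counter)
for joke in corpus:
    for a, b in zip(joke, joke[1:]):
        following[a][b] += 1

def generate(seed="k", length=40):
    out = seed
    for _ in range(length):
        nxt = following.get(out[-1])
        if not nxt:
            break
        # Sample the next character in proportion to how often it
        # followed this one in the training data.
        chars, counts = zip(*nxt.items())
        out += random.choices(chars, weights=counts)[0]
    return out

print(generate())
```

Nobody told this model that jokes begin with "knock"—it picked up that k is usually followed by n from the data, which is the same flavor of self-taught rule, at toy scale, that Shane's networks discover.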

It’s kind of cool. The neural network is loosely modeled on a human brain, and its method of learning reminds you of how a human brain works. When you’re teaching it text, in particular, you’ll start with a phase that looks a lot like baby talk. And then you’ll get simple words. And then maybe more complex words, then you’ll get longer phrases.

In one recent example, your knock-knock system devolved into an obsession with the cow with no lips joke. What was going on there?

It’s hard to say, exactly. It’s tough to look under the hood of a neural network and figure out what its rules are. This is wild speculation, but it was fairly early on in its learning: It had learned a rule about how to make cows-with-no-lips jokes, and that was one of the best rules it had come up with yet.


Did you really shut it down?

I did not end up having to shut down the AI in that case. The phase did not last very long. It quickly learned better rules and started applying those. Although occasionally you would still get a few too many double and triple O’s showing up. The effects of the cow with no lips rule, whatever combination of neurons had produced that, didn’t entirely go away. The neural network used that rule more than I had expected.

When you encounter something like that, what do you do to train it out of that fixation?

That’s the thing about neural networks, most of them—especially the ones I do—there’s no human in the loop. It’s not human-supervised learning. I can shout at the computer all I want, but it’s not going to listen to me, because I’m not part of its input. Its only input is this list of knock-knock jokes. So I have to sit there and hope it figures out for itself that not every joke is the cow with no lips.


Sometimes it figures that out and moves on to something better. And sometimes it gets stuck and I have to start over again—maybe with a different random seed, maybe with a different size and shape of neural network—and hope it does better next time.

Is that just a question of giving it a better data set to work with?

That’s one thing you can do. For example, when I was training a neural network to write Harry Potter fan fiction, quite a few of the stories in the set were not in English. It ended up confusing the neural network. It would be doing really well: Snapes and Malfoys would be walking around. Then it would devolve into nonsense words.

If I were to do it again, I would filter it by language.


What are you trying to achieve with these algorithms? Are they just larks, or can we learn something from them?

The primary purpose, really, is pure entertainment. You can learn a few things about your data set by looking at what your neural network is coming up with. People have done some nice work with neural networks that function at the word level, choosing which word, rather than which letter, to put next. They can, for example, see that the words doctor and nurse are gendered in the internal representations that this neural network picked up.
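The gendered-association finding Shane mentions comes from models trained on large real-world text, but the mechanism can be sketched at toy scale. The four sentences below are deliberately invented to be biased: a model that learns purely from co-occurrence counts will absorb whatever associations its training text carries.

```python
from collections import Counter

# An invented, deliberately biased toy corpus: the training text pairs
# "doctor" with "he" and "nurse" with "she", so any model built on
# these counts inherits that association.
corpus = [
    "the doctor said he would review the chart",
    "the nurse said she would check the patient",
    "the doctor noted he was finished",
    "the nurse noted she was finished",
]

# Count which pronouns co-occur with each job word in a sentence.
cooccur = {"doctor": Counter(), "nurse": Counter()}
for sentence in corpus:
    words = sentence.split()
    for job in cooccur:
        if job in words:
            for w in words:
                if w in ("he", "she"):
                    cooccur[job][w] += 1

print(cooccur["doctor"])  # "he" dominates
print(cooccur["nurse"])   # "she" dominates
```

The model is not adding bias of its own; it is faithfully reproducing a pattern in the data it was given—which is exactly why examining a network's output can reveal something about its training set.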

There’s a practical use, too. I posted a neural network that can name craft beers. I’ve heard people say that we’re running out of craft beer names. Neural networks might be one solution to that.

Is there any pleasure in watching these neural networks go off the rails in goofy ways?

I would say that’s one of the greatest pleasures of training neural networks. It may be frustrating at times, if you’re trying to get something done. But I love it when things like that happen.

Why is that so delightful?

It is satisfying in the sense that you’re seeing that computers aren’t good at everything yet.

There’s also some delight in seeing what new things it comes up with that I wouldn’t have come up with myself. On the #shutdowntheAI hashtag I saw a story where someone had trained a stick figure robot to walk. They encoded how all the joints were connected and that it had to cross the finish line, but they hadn’t encoded that the limbs had to stay connected. The solution that the stick figure had come up with was to disassemble itself in a hurry and reassemble itself into a tall tower. Then it fell over so that the head would cross the finish line.

We live in a moment when there are a lot of fears, some well-founded, some less so, about the “dangers” of advanced artificial intelligence. Does your work have anything to contribute to those debates?

Seeing the kind of biases that can get trained into a neural network illuminates one problem that we’re already having with artificial intelligence and machine learning. Computers are not inherently any better than us. They pick up on our biases, and they can learn biases that we didn’t even know we had.

That, I think, is going to be one of the most immediate detrimental effects of machine learning on our daily lives. If there’s an algorithm that decides whether to give somebody a mortgage, and it’s incorporating biases that we didn’t intend, that affects lives. Being able to recognize that these biases are there, being able to see the way in which they appear—that’s going to be important.

Could the quirkiness of your work help us grapple with the demands of algorithmic transparency?

Hopefully it will raise awareness in the general public about what neural networks are. Why are they good at some things they do? And why do we have to watch the way they do other things? Maybe they will inspire the next generation of people to get into machine learning work.

This interview has been edited and condensed for clarity.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.
