Future Tense

Researchers Create Artificial Intelligence To Flag Cyberbullying

Russian teenage girls work on laptop computers while waiting for a public bus.

Photo by YURI KADOBNOV/AFP/Getty Images

Technology is often accused of facilitating bullying among kids. But now some researchers have created an AI system that can recognize abusive language in user-posted text—and perhaps nip it in the bud.

The Software Agents Group of the MIT Media Lab has developed an algorithm that identifies certain groups of words within a post and assigns them to 30 themes pertaining to sensitive topics and possible harassment, according to New Scientist.

The AI flags clusters of words in user-submitted text and scores how likely the post is to relate to a given theme. For example, the terms “cheat,” “trust,” and “upset” might lead the system to identify a comment as referring to a breakup.
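To make the idea concrete, here is a minimal sketch of that kind of cluster-based theme scoring. The themes and word lists are invented for illustration; this is not the Media Lab group’s actual code or data.

```python
# Minimal illustration of scoring a post against word-cluster themes.
# The themes and word lists below are invented examples, not the
# MIT Media Lab's actual clusters.

THEMES = {
    "breakup": {"cheat", "trust", "upset", "dumped"},
    "appearance": {"fat", "ugly", "weight", "looks"},
}

def theme_scores(post: str) -> dict:
    """Return, per theme, the fraction of that theme's words found in the post."""
    words = set(post.lower().split())
    return {
        theme: len(words & cluster) / len(cluster)
        for theme, cluster in THEMES.items()
    }

print(theme_scores("I can't trust him anymore and I'm so upset"))
# {'breakup': 0.5, 'appearance': 0.0}
```

A real system would use far richer statistical models than simple word overlap, but the output is the same in spirit: a per-theme score rather than a single banned-word hit.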

“It could tell you how much of that story is talking about weight and appearance issues or the duration of a relationship,” Karthik Dinakar, a member of the Software Agents Group and a research assistant at the MIT Media Lab, told me.

What happens from there depends on the service using the AI. The system could notify a moderator of the offending post, or go as far as warning the user about the consequences of cyberbullying.

Dinakar explains that most content-flagging systems rely on keyword matching, which tends to be far less accurate than thematic recognition. Humans monitoring comments, meanwhile, can be overwhelmed by the sheer number of flagged posts. Facebook, for instance, relies on user reporting to handle abusive content.
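For contrast, here is what a plain keyword filter of the kind Dinakar describes looks like; the banned-word list is invented for illustration. It fires on any listed word regardless of context, so it produces false positives on innocent posts and misses abuse that avoids the list entirely.

```python
# A naive keyword filter: flags any post containing a banned word,
# with no sense of context. Word list invented for illustration.

BANNED = {"loser", "ugly"}

def keyword_flag(post: str) -> bool:
    return any(word in BANNED for word in post.lower().split())

print(keyword_flag("You're such a loser"))                  # True: correct catch
print(keyword_flag("That ugly sweater contest was fun"))    # True: false positive
print(keyword_flag("Nobody at school will ever like you"))  # False: missed abuse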

The Software Agents Group partnered with MTV’s A Thin Line, a website dedicated to combating bullying by encouraging teenagers to share their experiences. The algorithm analyzed 5,500 of these posts during its initial development.

Birago Jones, also a member of the Software Agents Group, explains that the overall goal of the project is to give social networks better tools to combat youth-oriented cyberbullying. However, he clarifies that their efforts are “more inspiration than strict problem solving.” He believes the work will lead to better discussion among victims, bullies, parents, and social network moderators.

This AI is not without its limitations, however. The system requires initial human approval to determine whether a theme or word cluster is actually offensive before it can flag subsequent posts, which means there may be a lag before it recognizes new abusive terminology. And teenagers are certainly inventive when it comes to finding new ways to hurt one another’s feelings.