Hate speech on social media is not going away anytime soon. The trolls who hunt down unsuspecting users on Facebook and Twitter, the cruel and obnoxious comments, the anger and abuse: all of it still runs rampant, and Big Tech is standing idle.
Now we know why that is.
At Senate hearings yesterday and today, Mark Zuckerberg noted that the artificial intelligence Facebook uses has worked reasonably well at hunting down terrorist content, and that the company has a team of around 200 people working to combat that problem. The machine learning relies mostly on image recognition, which is quite advanced. Yet he says AI can't really do much about hate speech, because the language is, in his word, too "nuanced."
He claimed the AI routines required to understand human language, to know the difference between the friendly jabs we trade with coworkers and the derogatory comments meant to abuse others and make them feel terrible, are at least four to five years away from actually working. That's a big problem for anyone who has been on the receiving end of that abuse. Whenever a tech CEO says "nuanced," take note.
Apart from the usual death threats over a comment in an article and the typical troll-like jabs on Twitter, I've never had too much of a problem dealing with online abuse, and I've never been the victim of actual hate speech. But I've talked to many people, including some family members, who have deleted their Twitter accounts and spend less and less time on Facebook, mostly because of the comments.
Abuse runs rampant. There's name-calling, harsh commentary, threats of violence, you name it. Checking my own Facebook feed just now, one discussion erupted quickly and spiraled out of control after someone posted a link about the refugee crisis. In another, it was gun control. In a third, Donald Trump. Sometimes the hate speech is so severe I wonder why it's even allowed.
My concern is that Big Tech has not taken on this problem. I understand the challenges with AI. With self-driving cars, a few major accidents of late have pushed that technology back, perhaps all the way to the starting gate, especially when it comes to consumer trust. Alexa and other bots keep improving rapidly. But we've seen how hate speech and online abuse have taken over on social media. It is not a new problem, and there is a treasure trove of richly detailed insults to train an AI engine on.
My guess is that it is simply not a priority. The tech industry seems focused on automation, attracting new customers, and making cool widgets, while hate speech is a side project of a side project. If you had several hundred people feeding labeled data into an AI engine to learn what constitutes online abuse, if you took the issue seriously enough to ban the users who engage in it, and if you labored over the problem enough to tell what actually causes harm on Facebook (real emotional harm, the kind that exists in the real world), then we'd be closer. Much closer.
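To make that concrete, here is a minimal sketch, assuming nothing more than scikit-learn and a handful of hand-labeled comments, of the kind of supervised pipeline those several hundred people would be feeding. The toy examples and labels are purely illustrative; this is the textbook baseline for text classification, not anything Facebook has described.

```python
# A minimal sketch of a supervised abuse classifier, assuming a corpus of
# human-labeled comments (1 = abusive, 0 = benign). Toy data for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples; a real system would need millions of these,
# labeled by exactly the kind of human team described above.
comments = [
    "great piece, thanks for writing this",
    "you're an idiot and everyone hates you",
    "I disagree, but you make a fair point",
    "go away, nobody wants you here, you're worthless",
]
labels = [0, 1, 0, 1]

# TF-IDF over word unigrams and bigrams feeding a logistic regression:
# the simplest credible baseline for a text classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(comments, labels)

# Score a new comment: probability that it is abusive.
print(model.predict_proba(["you make me sick, delete your account"])[:, 1])
```

The limitation is visible even in four lines of toy data: a bag-of-words model scores surface vocabulary, not intent, so a friendly "you idiot" between coworkers and an abusive one from a stranger look identical to it. That gap is exactly the "nuance" Zuckerberg says is four to five years from being solved.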
The data leaks are a major issue, but they're also a distraction from the real problem on social media. Will any of the Big Tech companies start to figure out what to do?
