Come on, Twitter executives. Why do you just keep sitting around sipping Silicon Valley lattes and doing nothing about hate speech on your service? Enough is enough.

This week, after Roseanne Barr posted a hateful tweet that I won't link to directly or describe in detail, and the alt-right seized on the moment and started agreeing with her, I could only see the episode as a perfect example of what's wrong with social media.

ABC did the right thing and canceled her show. Barr later apologized profusely, but how do you even come up with a tweet like hers without having some underlying issues?

That said, I'm increasingly annoyed at Twitter specifically. I've experienced plenty of hate speech and outright threats, and the problem doesn't seem to be going away at all.

First off, the AI problem is not as difficult as you might think. Twitter and Facebook have claimed in the past that it's almost impossible at this point to weed out hate speech, and that they need to make more advancements to understand the difference between, say, a joke and a jab. As humans, we pick up on these things pretty quickly, associating terms and spotting sarcasm as though we were born with an emotional intelligence detection system. (By the way, we are.) Bots have a harder time, but this is not rocket science.

If Twitter had any AI at all, the company could easily have detected the word associations Barr made, and could easily have shown her a prompt with a warning about posting the tweet. (Supposedly, she posted at a late hour and wasn't thinking--maybe the AI would have helped her avoid the entire debacle.) But I'd want Twitter to go further than that. Thousands of people saw the tweet, obviously. I checked my own feed and it was there. Why? I understand all of the free speech issues here, but what I'm talking about is pretty simple: If I don't want to see hate speech on Twitter, why does Twitter still show me hate speech?
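To be clear about how modest the ask is, here's a toy sketch of a word-association check that warns a user before posting. Everything in it is invented for illustration--the word lists, the function names, the warning text--and a real system would use a trained classifier rather than keyword matching, but it shows the basic shape of a pre-post prompt:

```python
# Toy sketch of a pre-post warning check. The association list below is a
# hypothetical placeholder, not a real moderation lexicon, and this is not
# how Twitter's (or anyone's) actual system works.

# Word sets that, when present, suggest a dehumanizing association.
RISKY_ASSOCIATIONS = [
    ({"ape", "monkey"}, "comparing a person to an animal"),
    ({"vermin", "infestation"}, "dehumanizing language about a group"),
]

def check_draft(text: str) -> list[str]:
    """Return warning messages for risky word associations in a draft."""
    words = set(text.lower().split())
    warnings = []
    for trigger_words, reason in RISKY_ASSOCIATIONS:
        if words & trigger_words:  # any trigger word present in the draft
            warnings.append(f"This may read as {reason}. Post anyway?")
    return warnings
```

A client could call `check_draft` when the user hits "Tweet" and show the returned warnings in a confirmation dialog--crude, but enough to give a late-night poster a moment of pause.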

Here's the reason: They don't know how to solve the problem. Some have argued the AI is "too nuanced" a challenge, but in the end it's not a Herculean task and it's not impossible. The company lacks the programming prowess to pull it off, and it hasn't made the problem a big enough priority.

But what if this specific tweet finally prompted people to start objecting? We're annoyed that Barr posted a hateful note directed at one specific person. Are we annoyed enough to start calling on Twitter to deal with the problem and start showing more warnings, and to start removing the tweets from our feeds if we decide we don't want to see them? And to protect other people from seeing them as well? Trolls can start trolling each other.

My issue is that we don't seem to have any control, even though Twitter claims there are settings to reduce how much hate speech you see. They don't work. The AI isn't smart enough. For social media to progress further, for it to become less of a cesspool of trolls and abuse, the everyday users--the ones who actually want to use social media for legitimate purposes and as a form of connection--have to start objecting more.

Will we?