Imagine your father was a household name, a beloved comedian familiar on movie and TV screens who inexplicably chose to take his own life. Now imagine receiving, two days later, several messages accusing you of being the person who caused his suicide (that's a highly sanitized version of the hateful tweets). Put yourself in her place and you can easily see why grief-stricken Zelda Williams tweeted to her 280,000 Twitter followers that she was leaving the service, at least for a while.
Faced with the media coverage that followed, Twitter announced it was "reviewing its policies." It's a familiar tune, one that Twitter sang a year ago, after a British feminist faced rape and death threats for proposing that Jane Austen should be featured on her nation's currency. Earlier this year, journalist Amanda Hess described her frustration after a friend alerted her that a Twitter account had apparently been created "for the purpose of making death threats to you." And just hours before Zelda Williams quit the service, African-American attorney and journalist Imani Gandy published a cogent indictment of Twitter's lackadaisical response to the barrage of hate speech she and others have endured.
At the heart of the problem is Twitter's size--with more than 650 million active accounts and 130,000 new ones created daily, it almost rivals Facebook in heft--coupled with the fact that it is so damned easy to create an account on Twitter. You don't need any kind of real identity, as you do on Facebook and LinkedIn. You do need to enter an email address, but it can easily be a fake one.
That's the feature that's made life hell for Gandy: One particularly persistent Twitter hater creates some 10 accounts a day to lob racist invective at her; as soon as one is shut down, he moves on to the next. In fact, he seems to be using automation to create them--most of the account names are just random strings of letters. Officially, this is against Twitter's policy, but if the service is doing anything at all to enforce that policy, it's not saying. And whatever it is or isn't doing seems to be having zero effect. Reporting abusive accounts (as Gandy has done more than a thousand times) isn't much help under the circumstances. Yet it's the only remedy Twitter offers.
What more could Twitter do? Quite a lot. Developers have blogged about some very easy fixes that could radically reduce harassment, and a couple have actually stepped up and released an app called Block Together that has already made Gandy's life on Twitter dramatically better. Rather than continually "reviewing its policies," here are some simple steps Twitter could take right now that would protect users from harassment:
1. Allow users to share blocked accounts.
This is one thing that makes Block Together so helpful, and it's the exact same principle your antivirus software uses when it downloads updates. Blocking an account that's been blocked by another user (or some number of users) would shut down harassers much more quickly.
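To make the antivirus analogy concrete, here's a rough sketch of how subscribing to other users' blocklists might work. The account names and the `threshold` parameter are made up for illustration; nothing here reflects how Twitter or Block Together actually stores blocks.

```python
from collections import Counter

def merge_blocklists(my_blocks, subscribed_blocklists, threshold=1):
    """Block any account that appears on at least `threshold`
    of the blocklists a user subscribes to."""
    counts = Counter()
    for blocklist in subscribed_blocklists:
        counts.update(set(blocklist))
    shared = {account for account, n in counts.items() if n >= threshold}
    return set(my_blocks) | shared

# Hypothetical example: two friends have both blocked the same harasser.
mine = {"troll_account_1"}
friends = [{"troll_account_2", "troll_account_3"}, {"troll_account_2"}]
merged = merge_blocklists(mine, friends, threshold=2)
# merged == {"troll_account_1", "troll_account_2"}
```

Requiring agreement from more than one subscribed list (the `threshold`) is one way to keep a single trigger-happy blocker from silencing accounts for everyone else.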
2. Allow users to block new accounts.
That's Block Together's second weapon: It allows users to block messages from accounts less than seven days old. That would at least delay harassers like Gandy's who constantly create new accounts for abusive purposes.
3. Allow users to block accounts with few followers.
Yes, a harasser creating illicit accounts could have them all follow each other, but again this would add delay. And it would be easy enough to weed accounts if most of their followers have been blocked.
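Taken together, steps 2 and 3 amount to a simple heuristic filter. Here's a minimal sketch; the `Account` fields and the thresholds are assumptions for illustration, not Twitter's real data model.

```python
from datetime import datetime, timedelta

class Account:
    # Hypothetical account record; fields are assumptions, not Twitter's API.
    def __init__(self, created_at, followers, blocked_followers):
        self.created_at = created_at            # when the account was made
        self.followers = followers              # total follower count
        self.blocked_followers = blocked_followers  # followers this user has blocked

def should_hide(account, now, min_age_days=7, min_followers=5):
    """Hide tweets from accounts that are brand new or have
    almost no followers the user hasn't already blocked."""
    too_new = now - account.created_at < timedelta(days=min_age_days)
    # Don't count followers the user has already blocked -- that
    # discounts a harasser's sock-puppet accounts following each other.
    real_followers = account.followers - account.blocked_followers
    too_few = real_followers < min_followers
    return too_new or too_few

now = datetime(2014, 8, 20)
day_old = Account(created_at=now - timedelta(days=1),
                  followers=10, blocked_followers=2)
should_hide(day_old, now)  # True: the account is under a week old
```

Subtracting already-blocked followers is what makes the sock-puppet workaround described above only a delaying tactic.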
4. Allow users to block specific words.
This is such a no-brainer I can't understand why neither Twitter nor any third party offers it. Gandy could vastly reduce the abuse she has to read by blocking any tweet with the word "nigger" in it. Email spam filters do this kind of thing as a matter of routine. If Zelda Williams had a filter blocking the word "bitch" she might never have seen the offensive tweets directed at her.
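A minimal version of such a filter, borrowing the keyword matching that email spam filters use routinely, might look like the sketch below. The muted-word list is whatever each user supplies; this is an illustration, not any existing Twitter feature.

```python
import re

def build_filter(muted_words):
    """Return a function that flags any tweet containing a muted word."""
    # Match whole words, case-insensitively, so variations in casing
    # are caught but longer words that merely contain a muted word are not.
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, muted_words)) + r")\b",
        re.IGNORECASE,
    )
    def is_muted(tweet_text):
        return bool(pattern.search(tweet_text))
    return is_muted

# Hypothetical per-user word list.
is_muted = build_filter(["bitch"])
is_muted("you are a BITCH")          # True: hidden before the user sees it
is_muted("a perfectly ordinary tweet")  # False: delivered as usual
```

The filtering happens entirely on the reader's side, so harassers get no signal that their tweets are going unseen.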
5. Add a humanity check to the sign-up process.
CAPTCHA and other services like it figured out how to distinguish people from algorithms years ago. This one step--required on countless websites--would put the account-creating bots out of business instantly.
6. Require a verified email address.
At the very least, Twitter could require users to respond to an email or enter a verification code before they're allowed to tweet from new accounts. This would make the business of rapidly creating new accounts for the purpose of harassing people somewhat harder.
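A bare-bones sketch of that verification flow follows, with the email-sending and persistent storage stubbed out. Function names and the in-memory `pending` store are assumptions for illustration; nothing here reflects Twitter's actual systems.

```python
import secrets

pending = {}  # email -> verification code (a real service would persist this)

def start_signup(email):
    """Generate a code the new user must echo back before tweeting."""
    code = secrets.token_urlsafe(8)
    pending[email] = code
    # send_email(email, code)  # stub: a real service would email the code
    return code

def verify(email, code):
    """Only activate the account if the emailed code matches."""
    expected = pending.get(email)
    # compare_digest avoids leaking information through timing differences
    return expected is not None and secrets.compare_digest(expected, code)
```

Even this small speed bump defeats fully automated account creation against throwaway or fake addresses, since the bot never receives the code.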
7. Add some diversity to management.
Gandy and others have accused Twitter of being deaf to hate speech since it's rarely directed at the white males who make up almost all its management. Whether that charge is valid or not, Twitter got a diversity black eye when it announced its original all-white-male board of directors. It later added a single white woman to the board. Twitter is quick to point out that other tech giants are just as un-diverse, but that's beside the point. Ironically, Twitter has a larger proportion of black users than most social networks--a fact it's hoping to exploit for ad sales. And yet, hate speech and harassment seem to find a home more easily on Twitter than on other platforms. I'm not saying a more diverse executive team would automatically put a stop to this problem, but it would show that diversity is top of mind for the company.
Hey Twitter, I love you and use you daily, but any of these steps would make you a more welcoming place for everyone. What do you say?
Like this post? Sign up here for Minda's weekly email, and you'll never miss her columns.