Twitter has come in for a lot of criticism lately--including in this column--for its lackluster response to abuse, hate speech, and threats, especially to its female users. CEO Dick Costolo has acknowledged the validity of these criticisms, admitting in an internal memo, "We suck at dealing with abuse and trolls on the platform and we've sucked at it for years."
Now it looks like the service may be about to suck a lot less. Although Twitter has made no public announcement, several verified Twitter users reported early this week that they were prompted with an option to turn on a "quality filter" on their notifications timeline. What does a quality filter do? "Quality filtering aims to remove all Tweets from your notifications timeline that contain threats, offensive or abusive language, duplicate content, or are sent from suspicious accounts," the prompt explains.
In the absence of more detail from Twitter, it's impossible to know just how quality filtering will weed out abusive content and suspicious accounts--and the service may not want to tell the world exactly what its algorithms are looking for because that will make them easier to circumvent.
On top of that, quality filtering is only available to verified users using iOS devices. Android- or PC-using celebrities are out of luck. I'm going to guess that Twitter will eventually extend quality filtering to more devices, but will it also be offered to non-verified accounts? I'm hoping the answer is yes. Right now, verified users can "tailor" their notifications, and Twitter has said the quality filter is simply a refinement of that tool. But Twitter's response to abuse (or lack thereof) has been getting a lot of attention, so the company would be wise to extend these tools to everyone when it can.
Still, this is likely to be a positive development that will dramatically improve many people's Twitter experience. Here's why:
1. Twitter is doing something about the worst abuse.
The company announced about a month ago that it had tripled the staff devoted to handling abuse. That's a good sign. In an even better sign, it recently made reporting abuse easier, cutting down a lengthy questionnaire, and--significantly--lifted its ban on bystanders reporting abuse. That's important because, a Twitter exec told The Verge, Twitter's algorithms can use "behavioral signals," including large numbers of bystander abuse reports, to identify the most egregious abuse.
2. Twitter seems to be taking on its worst abusers.
The conundrum has always been that the ease of creating a Twitter account--all you need is an email address--has made it too easy for abusers who've been banned from the service to simply come back again (and again, and again) with different handles. On the other hand, requiring Twitter users to verify their identities would make it harder for dissidents in places with repressive governments to get their messages out. So Twitter is trying something else: The company recently announced it would begin asking for telephone numbers from some Twitter users. If someone has received a temporary ban for abusive tweeting, that user may be required to provide a phone number, and if the number matches a blacklist of abusive numbers, the user can be banned permanently.
This leaves open a lot of questions, such as: Will Twitter actually verify that a phone number provided is legit (to avoid having trolls simply make one up)? And what's to prevent a banned user from simply creating a new account? It's far from a perfect solution, but it is a sign that Twitter takes trolls seriously enough to do something about them.
3. It may block some accounts before they even get going.
The suggestion that quality filtering will block out "suspicious accounts" is especially heartening. One prominent woman reported being abused by a serial troll who seemed to have a bot set up to automatically generate new Twitter accounts--most of which had random characters as their handles. As quickly as she blocked one, a new one would appear. She praised third-party tools that allow Twitter users to automatically block newly created accounts and/or accounts with few or no followers, both of which dramatically cut down on the abusive tweets she had to read. I'm not sure how Twitter defines a "suspicious account," but I'd bet it uses similar criteria, and if so, that's a good thing.
4. Abuse victims are no longer responsible for stopping their abusers.
To me, this is the best piece of news about the new tool, and the most encouraging development. It's also a sea change in Twitter's attitude. The old Twitter put all the onus for solving abuse problems onto those who'd received the abuse. They (and only they) had to fill out a nine-point questionnaire to block each abuser. If someone else tried to do it for them, the third party was told to urge the victim to report the abuse themselves.
By allowing third-party reporting, along with quality filtering, Twitter acknowledges that when people get abused on its service, it's not only their problem, it's Twitter's problem. And it's in Twitter's interest to take proactive measures to stop it.
5. It allows people to avoid seeing abusive messages in the first place.
This is something many abuse-receiving Twitter users have been clamoring for. You can report someone and block their account after they've threatened you with decapitation or impersonated your dead father, but at that point, you've already seen those tweets and the damage is done. So blocking abusive tweets before the victim ever sees them is a great development.
One reason to worry
And that's the only thing that worries me, just a little, about the new tools. If someone is threatening to kill you on Twitter, you probably don't want to know about it--unless there's any chance at all they will actually carry out that threat. There are lots of instances of prominent women (such as Brianna Wu) moving out of their homes and going into hiding precisely because they do take such threats seriously. What would happen if they never saw them?
And wouldn't you want to know if you got "doxed" (i.e., had personal information such as your physical address published on Twitter)? The service recently added a tool for reporting doxing, something it should have offered all along. I want to know that Twitter is doing more--that it's actually working with law enforcement and alerting abuse victims when tweets portend a genuine threat to their safety.
I hope Twitter is doing this, or putting systems in place to start soon. It would be a smart move. All the criticism Twitter has received thus far for not dealing with abusers will look like nothing by comparison if one day one of these trolls actually carries through on a threat.