Twitter is public by default, and sometimes anonymous strangers say vile things to each other. The company would prefer to curb that behavior, since it repels more wholesome users, but every effort it has made in its 11 years has fallen flat. The latest attempt looks promising on the surface -- but Twitter's history of subpar moderation undercuts its credibility.

On Tuesday, Wired reported that Twitter is once again going to revise its terms of service with the goal of cutting down on harassment and abuse. Wired obtained an internal email written by the director of Twitter's trust and safety department, John Starr, which listed a handful of specific reforms. Starr wrote, "[W]e have to do a better job explaining our policies and setting expectations for acceptable behavior on our service." Twitter confirmed that the email accurately described its plans.

The company is responding to the outrage that erupted when it suspended Rose McGowan, an actress who spoke out against Harvey Weinstein and the men in Hollywood whom she sees as his enablers. One of McGowan's tweets was removed, and her account suspended, because the tweet included a private phone number. So-called "doxxing" along these lines violates Twitter's terms of service.

Some of the reforms that Twitter plans to enact are good ideas. Others miss the mark. The overall approach of simply banning more types of content is flawed, because Twitter's essential problem isn't that its terms of service permit abuse -- for the most part, user-on-user aggression isn't allowed. The problem is that Twitter enforces its existing policies unevenly or not at all.

In CEO Jack Dorsey's tweetstorm responding to the outcry over McGowan's suspension, and in Starr's letter, there's no indication that the company plans to hire and train more humans to provide oversight. The flow of tweets is gigantic and never-ending, which makes it expensive to review every user report at all, let alone in depth.

It's concerning that Twitter is adding more editorial judgment calls to its purview without addressing how well-considered those judgment calls will be, or who will be making them. Tech companies typically want to offload as much content moderation as possible onto algorithms and other programmatic tools, but that's just not possible given the level of nuance and cultural awareness involved.

At least one policy change does make sense. Twitter is expanding its crackdown on "non-consensual nudity," colloquially known as revenge porn, and broadening its definition to include "upskirt imagery," "creep shots," and "hidden camera content." Going forward, Starr's letter said, "We will immediately and permanently suspend any account we identify as the original poster/source of non-consensual nudity and/or if a user makes it clear they are intentionally posting said content to harass their target." Twitter also intends to more carefully examine sexual exchanges between users to ensure that everyone is a willing participant.

Starr acknowledged that some pornographic material, which Twitter has generally permitted in the past, may be affected by this change. Again, moderating according to these guidelines will require sensitive, astute interpretation -- a quality Twitter's enforcement has seemingly never displayed.

In a more controversial move, Twitter will begin flagging "[h]ate symbols and imagery," although the company admits that precisely what this means has yet to be determined. "We are still defining the exact scope of what will be covered by this policy," Starr wrote in his email. "At a high level, hateful imagery, hate symbols, etc will now be considered sensitive media (similar to how we handle and enforce adult content and graphic violence). More details to come." So much for being the free-speech wing of the free-speech party.

Related to that, Starr announced that "we will take enforcement action against organizations that use/have historically used violence as a means to advance their cause." Taken literally, this ridiculously broad definition covers Antifa, the KKK, and every branch of the United States Armed Forces. But don't worry: "More details to come here as well (including insight into the factors we will consider to identify such groups)."

Again, has Twitter shown that it's fit to decide which groups that "have historically used violence" are too harmful to keep on its platform? Of course, the company is well within its legal rights to make those judgments, but if the policy is enforced evenly, users of every political persuasion will be outraged. If it isn't enforced evenly ... well, that's business as usual.

Starr summed up by writing, "We realize that a more aggressive policy and enforcement approach will result in the removal of more content from our service. We are comfortable making this decision, assuming that we will only be removing abusive content that violates our rules. To help ensure this is the case, our product and operational teams will be investing heavily in improving our appeals process and turnaround times for their reviews."

Frankly, without specific numbers, it's impossible to take this claim seriously. Twitter needs to put its money where its mouth is and produce actual results -- fair, well-considered results -- to restore the credibility of its promises.