More bad news for Facebook this week. The tech giant, already under heavy scrutiny for selling $100,000 worth of ads to Russian groups intent on interfering with the 2016 election, now faces fresh criticism over its ad-placement algorithms.

Acting on a tip, the nonprofit investigative news organization ProPublica learned that Facebook automatically generated related targeting categories when people entered anti-Semitic terms into the advertising category box. Anyone who had listed phrases like "Jew hater" in their field of study or employer could be targeted by these ads.

The advertisements in question went through no human review on Facebook's side, as they were purchased through the platform's self-service tool. Until now, tech companies have declined to censor ads or comments containing legitimate political or religious expression, maintaining that doing so is not their role. From the 2016 election through the violent protests in Charlottesville, tech giants like Facebook have been forced to reconsider and redefine that role.

Facebook acted quickly to prevent such offensive entries in demographic traits from surfacing as addressable categories. It has removed the targeting fields in question from its advertising system until it can resolve the issue once and for all.

The audience identified by such queries is quite small, according to Facebook: approximately 2,300 people. That audience is not large enough, on its own, to meet the minimum size Facebook requires to run an ad, though it falls short by fewer than 1,000 people. The categories can, however, be combined with others to build an audience large enough to target. If the opportunity to reach this self-proclaimed anti-Semitic audience hadn't been eliminated, I would guess that, sadly, these numbers would only grow.

The company says it will continue to work on its ad-targeting features in the hope of preventing such ads from being approved by its system. I assume this will be no easy task, as it is difficult for an algorithm to determine what is, or is not, a suitable phrase. For instance, the terms "History of Jewish concentration camps" and "History of Jews polluting society" would be difficult to distinguish without human intervention. Given that this particular audience is not likely the last hateful crowd to be uncovered, it seems that human monitoring should also be put in place, but I have found nothing indicating that this additional measure of protection is in the works.
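To make the difficulty concrete, here is a minimal, purely hypothetical Python sketch of the kind of token-level blocklist a system might try first; the BLOCKLIST contents and the naive_filter function are my own illustrative assumptions, not anything Facebook has disclosed. Both of the phrases above trigger the exact same verdict, because the offensive intent lives in the surrounding context rather than in any single keyword.

```python
# Hypothetical illustration only: a token-level blocklist filter.
# Neither the blocklist nor the function reflects anything Facebook
# has described; they exist to show why keyword matching falls short.

BLOCKLIST = {"jews", "jewish"}

def naive_filter(phrase):
    """Flag a phrase if any of its tokens appears on the blocklist."""
    tokens = phrase.lower().split()
    return any(token.strip(".,") in BLOCKLIST for token in tokens)

print(naive_filter("History of Jewish concentration camps"))  # True
print(naive_filter("History of Jews polluting society"))      # True
# Both phrases get the same verdict: the tokens overlap, and the
# hateful intent is in context the filter never sees.
```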

Yesterday, Facebook added this update to its news page: "Keeping our community safe is critical to our mission. And to help ensure that targeting is not used for discriminatory purposes, we are removing these self-reported targeting fields until we have the right processes in place to help prevent this issue." The company has also asked advertisers to report any inappropriate targeting fields directly to its help desk.

What's particularly interesting to me is that Google AdWords also makes it easy for people to pay to play in this hateful arena. As of this writing, I have found no criticism of Google for offering an equally offensive opportunity to hateful would-be advertisers.

Google's Keyword Planner generates keyword ideas and search-volume data for long-tail queries like "how to burn Jews" and many others, such as "gas the Jews." In response, Sridhar Ramaswamy, Google's senior vice president of ads, said: "Our goal is to prevent our keyword suggestions tool from making offensive suggestions, and to stop any offensive ads appearing. We have language that informs advertisers when their ads are offensive and therefore rejected. In this instance, ads didn't run against the vast majority of these keywords, but we didn't catch all these offensive suggestions. That's not good enough and we're not making excuses. We've already turned off these suggestions, and any ads that made it through, and will work harder to stop this from happening again."

Updated on 9/16 to include statement from Google.

Published on: Sep 15, 2017