Task No. 1 for any new social media service is to get people to share content. Task No. 2 is to stop them from sharing the wrong content.

Ever since it solved the first challenge, Facebook has been grappling with the second, playing Whac-a-Mole with everything from child pornography and bullying to terrorist propaganda and illegal gun sales, all while attempting not to discourage legitimate sharing. In the months since the 2016 presidential election, however, a perpetual problem has escalated into an acute crisis, one that has put the $500 billion company squarely in the crosshairs of Congress.

On Tuesday, Facebook said accounts believed to be part of a Russian intelligence operation managed to get their ads in front of an estimated 10 million Americans. The ads, which Facebook has turned over to Congressional investigators, "focus on divisive social and political messages across the ideological spectrum, touching on topics from LGBT matters to race issues to immigration to gun rights," according to Elliott Schrage, Facebook's head of policy and communications.

The admission caps a lengthy run of damaging news for Facebook, which began shortly after the election with reports establishing the prevalence of false but highly viral "fake news" stories. The overwhelming majority of these appeared designed to boost Donald Trump and harm Hillary Clinton, and a large subset originated in Eastern Europe, where Russia has been carrying out "influence operations" with increasing frequency. More recently have come revelations that Facebook enabled ads to be targeted to anti-Semites; that it sold at least $100,000 worth of advertising to accounts traced to a Russian "troll farm"; and that at least some of those ads were geo-targeted to users in key swing states.

On the face of it, foreign-sponsored political messages that pass themselves off as the product of American activists represent just one more type of abuse Facebook can attempt to police, now that it's aware of the problem. The company says it will hire 1,000 people to vet paid political posts and will make it easier for users to trace messages to their sources by requiring all ads to be permanently visible to the public.

But this represents a greater challenge than anything Facebook has yet confronted, and one that may be impossible to meet without a radical change in tactics. From the beginning, Facebook has preferred to rely on two tools to flag content that violates its policies: community reporting and filtering software. That approach has let the company grow its user base to 2 billion while employing a relatively tiny pool of human content moderators.

Users can be relied on to report posts that threaten or offend them, but when it comes to fake news or propaganda memes that prey on their political biases, they've proven only too eager to act as vectors for malicious content. Software, meanwhile, has significant limitations. Artificial intelligence algorithms get better all the time at detecting certain types of content, whether it's nude body parts or copyright-infringing soundtracks. Where they fall short is in understanding the context that gives content its meaning.

This is where human review comes in. A human moderator can't crawl thousands of posts in a fraction of a second, but he can tell at a glance whether an image of a topless woman is a new parent's breastfeeding photo or a piece of raunchy pornography, or whether "I'm going to kill you" is a friend's playful exclamation or a stalker's threat. It's all about context. When Facebook earns unwanted attention for censoring legitimate content, it's often because its systems flagged something based on surface features without considering the context, mistaking, say, an iconic piece of anti-war photojournalism for child pornography.
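To make that limitation concrete, consider a deliberately naive sketch of a context-blind filter. Nothing here reflects Facebook's actual systems; the rules and function names are invented purely for illustration. The point is that a filter keyed to surface features treats a joke and a genuine threat identically, because everything that distinguishes them lives outside the text.

```python
# Illustrative only: a toy, context-blind keyword filter of the kind the
# paragraph above describes. The phrases and function are hypothetical,
# not anything Facebook actually runs.

FLAGGED_PHRASES = {"going to kill you"}

def flag_post(text: str) -> bool:
    """Flag a post if it contains a threatening phrase.

    The filter sees only the surface of the text. The relationship between
    poster and target, the tone, and the surrounding conversation are all
    invisible to it, so a playful jab and a real threat look the same.
    """
    normalized = text.lower()
    return any(phrase in normalized for phrase in FLAGGED_PHRASES)

# Both posts trip the same rule, though only one is a genuine threat.
print(flag_post("Haha, I'm going to kill you if you spoil the finale!"))  # True
print(flag_post("I know where you live. I'm going to kill you."))         # True
```

A human moderator resolves that ambiguity in a glance; the filter, by construction, cannot.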

By hiring another batch of moderators, Facebook is suggesting that stopping Russian info-ops is a matter of supplementing its algorithms with a slightly larger dose of human intelligence. But the messages pushed by Russian actors during and after the election aren't like the examples listed above. There's nothing obvious or cut-and-dried about them. When a page called Defend the 2nd promoted posts showing a young woman talking about her right to bear arms, those posts looked indistinguishable from messages issuing from any number of legitimate American sources. Indeed, it's not even clear how they might have violated Facebook's existing policies. (Foreign nationals are prohibited from running political ads in the U.S., but Facebook itself has argued those rules shouldn't apply to social networks, and the Russian ads were carefully crafted to focus on divisive issues rather than candidates or parties.)

The fact that it took Facebook's security researchers some 10 months of detective work to figure out which accounts on its platform were Russian cut-outs gives some sense of the difficulty involved. Until now, keeping abusive content off its network was a matter of simple cost-benefit analysis: Facebook could rely on humans and software each to do what they do best, and the result was good enough without being cost-prohibitive. Unless those thousand new hires are all trained CIA analysts, Facebook is going to need some new tricks.

Published on: Oct 5, 2017