Koch's Turkey Farms sits on 300 acres in a hilly part of Pennsylvania at the southern edge of the Pocono Mountains. For four generations, since 1939, the Koch family has been raising turkeys there. In the 1990s, the company became known as an industry pioneer for its humane practices and the clean diet that its turkeys eat. It sells around a million turkeys each year, 30 percent of them in the run-up to Thanksgiving. Like that of all farms, Koch's Turkey's business rises and falls largely according to forces outside of its control: the price of grain, trade conflicts, diseases that threaten its flock. But in 2015, something entirely different happened.

On Thanksgiving Day that year, a New Yorker named Alice Norton posted on an online cooking forum that her family had been severely poisoned after eating a turkey they'd purchased from Walmart. "My son Robert got in the hospital and he's still there!" she wrote, as chronicled by The Wall Street Journal. "I don't know what to do!" Throughout the holiday, thousands of tweets and posts on Twitter and other social networks shared similar accounts. Eventually, a news site called Proud to Be Black published an article that claimed 200 people were in "critical condition" in New York City--all from turkeys purchased at Walmart that came from Koch's Turkey. The article cited the NYPD as its source. A Wikipedia page about the outbreak popped up. The next day, the USDA received a complaint about the episode.

Brock Stein, the president and CEO of Koch's Turkey, was with his family, celebrating Thanksgiving, when he got a Twitter alert that the company's birds were sickening people in the Bronx. "Since we sell through distributors, sometimes our product can end up in places we don't know about," he says. The company began a massive internal food-safety review.

In time, the world caught up to what Stein learned: The whole thing was a hoax. Many tweets, the Journal later reported, originated from accounts controlled by the Internet Research Agency, the Russian troll farm linked to Vladimir Putin that's been indicted by Special Counsel Robert Mueller for interfering in the 2016 U.S. presidential election. Proud to Be Black, which was also eventually tracked back to the IRA, no longer exists. The USDA couldn't investigate, because the complainant's contact information was invalid, but officials in New York said there was no food-poisoning outbreak. Koch's Turkey doesn't even sell its turkeys at Walmart.

Koch's was like "a victim of a drive-by shooting," says John Kelly, the founder and CEO of a New York City data-science firm called Graphika, which mines social-media data for its clients. A co-author of a recent report on Russian propaganda tactics for the U.S. Senate Select Committee on Intelligence, Kelly has become one of the world's foremost experts on the subject. In 2015, he explains, the Russians were studying how effectively they could spread false information--"trying to freak people out, trying to figure out how much mileage they could get from a gallon of gas."

70 million: the number of fraudulent accounts Twitter removed in just two months last year.
Facebook reports that in the first three months of 2018, it removed 583 million fake accounts from its platform.
As if that weren't grim enough, according to a recent MIT study, lies are 70 percent more likely than facts to be retweeted. And there are a whole lot of lies.

The Koch's Turkey controversy, in other words, was a test for bigger things to come in 2016. And it wasn't the only one. In 2014, social-media accounts shared news of a toxic explosion at a Louisiana chemical plant run by an Atlanta company called Columbian Chemicals. That, too, was entirely fake--there had been no explosion, no reason for panic--and it was later traced back to the IRA.

Multiple investigations, including Graphika's work for the Senate, have established that a primary aim of Russian influence operations has been to divide Americans. Kelly calls it "weaponized polarization." The idea is to find flash points in American culture and magnify them. "What are the things that will get everyone's attention but get the left to point one way and the right another way?" says Kelly. "If you can get everyone's eyeballs on it, then you can use that attention to tear people apart even more." Political figures and issues are obvious subjects. But companies can become targets.

Texas Humor sells T-shirts, bumper stickers, and other novelties bearing clever Texas-themed designs. In 2016, founder Jay B. Sauceda saw a sudden rise in Facebook posts from a page called Heart of Texas that overlaid his original designs with fringe-right political statements. "They were all about Texas secession, or keeping Mexicans out," he says. "All this heavily xenophobic or racist stuff. We'd post an image, and almost immediately, like a day later, they'd put up theirs." Often, he says, Heart of Texas would keep Texas Humor's logo on the image, "so it looked like we were creating these messages with them." Sauceda sent several notes to Heart of Texas demanding it stop stealing his intellectual property and got a series of noncommittal responses that, he says, "were just kind of oddly off. The English was like someone knew what to say but didn't know quite how to say it." Eventually, after Sauceda reported the infractions to Facebook multiple times, the offending posts stopped.

Then, last year, he got an email from Facebook telling him that he had been interacting with a page run by--yup--the IRA, and that the Heart of Texas page had been removed. But before it had been scrubbed from the internet, that page had amassed about a quarter-million followers, and its content had been shared nearly five million times, more than all but two other pages created by the IRA, according to the Graphika report--meaning Texas Humor's content and logo had been compromised on a vast scale.

Stein believes it's possible Koch's was attacked mistakenly, because "we share a last name with another family that's very prominently involved in politics"--the powerful Republican donors Charles and David Koch, of Koch Industries. "If so, the point was to make it appear that the Kochs and the Waltons [of Walmart] were attacking African Americans in the Bronx." Such household names draw particular attention, of course. Social media erupted with calls to boycott Nike, for instance, when it made former NFL player Colin Kaepernick the face of a major ad campaign last year, after the quarterback had set off a nationwide debate about athletes taking a knee as a form of protest during the pregame national anthem. Graphika's analysis of that uproar found that the first sparks came from a few thousand Trump supporters, but an army of fake Russian social-media accounts then fanned them into a roaring blaze. That's a common way that businesses can become ensnared in propaganda, Kelly says. "We have seen long-standing artificial boosting of boycott efforts against businesses, and some of them we know Russians were involved in."

It's not just companies that can fall prey, but entire industries as well. The debate over fracking, Kelly says, is one where legitimate anti-fracking activists had their efforts unwittingly boosted by Russian trolls. "Russia is a petro state," explains Kelly. "If they spot an American movement that seeks to curtail an industry that could devalue Russian petro products, they will help it along." The debate over GMOs is another, he adds. "Turning people against big Western food production is in Russia's interest."

Research has shown that fakery spreads faster and wider online than truth does. Studies have found that about half of the traffic on the internet is controlled by bots, many of which are racking up page views to rake in more ad dollars or gaming algorithms to spread content. Which gets at perhaps the more fundamental problem: Online fakery wasn't invented by evil mustache twirlers in the Kremlin; it was created and nurtured by marketers everywhere, to build businesses in a fast-changing digital world. Before social media existed, bloggers shilled products for cash. (Today that's called "influencer marketing.") Online marketplaces are rife with fake reviews and counterfeit products. Buying social media followers, including bots, is all but standard practice. "Inauthentic manipulation is as much a commercial problem as a political problem," says Kelly.

Companies like his are adept at spotting disinformation (mostly by mapping relationships between the accounts pushing ideas online). Stopping it, much less preventing it, is another thing altogether. As with any negative PR situation, experts say, the best response is often no response at all. That's what Koch's Turkey did, beyond reporting the false posts to the social networks and closely monitoring online conversations to make sure those posts didn't spread to its customers. (To Stein's relief, they didn't.) Even major corporations with large "disinformation defense" budgets don't really have better options: If the propaganda starts being shared by a company's core audience, the best defense still tends to involve a good PR message that drowns out the negative narrative.

More important in the big picture, argues Kelly, is a major effort to change the broader culture of online fakery. "There needs to be some kind of structured, coordinated effort by the online platforms, where they agree on a strict differentiation between what is authentic and what is not, and rule out the latter." That means not just large-scale Russian bot operations, but all fakery--in recommendation engines, review forums, clicks on ads, and so on. Or, as Kelly puts it, "we can't say that only certain people can't be fake." Some basic regulatory moves, such as changing the federal law that shields platforms from responsibility for the content that appears on them, could begin to clear a way forward.

Until then, though, "the only option is to stand firm," says Stein, "and hope the storm passes you before it does too much damage." And do your part by taking a hard look at how your own company might be gaming the system. Cynical thinking, by everyone, got us here. The solution rests with every one of us too.


Defense Against the Dark Arts

The Disinformation Defenders: While Graphika does disinformation defense for some large clients, Austin-based New Knowledge specializes almost entirely in the practice--but was criticized for weaponizing social media during Alabama's 2017 special election. After spotting and mapping the spread of false information, these services can identify where such campaigns began and help affected companies provide the right information to law enforcement and the social-media platforms.

The Ambient Listeners: Most companies won't pay for those services on an ongoing basis, but a good alternative is to use one of the "social listening" applications that have arisen in recent years. Services like Mention and Keyhole allow you to track mentions of your company--or industry--across many social platforms. You won't get the same level of sleuthing, but you will spot bad behavior sooner, so you can consider your options before it's too late.