Here's a question: In a society built on certain freedoms, like freedom of expression, what do you do with information that is not only demonstrably false but also potentially harmful? It's not a rhetorical question. It's the very real challenge facing tech companies like Facebook, Twitter, and Google. That's because the platforms built by those companies are where the vast majority of us go to get our information. 

When that information is bad or unreliable, millions of people risk making bad decisions based on false reports. Take the current coronavirus outbreak, which the World Health Organization has labeled a global health emergency. There is no shortage of fake information being shared on social media. In fact, on Facebook, there are posts encouraging people to drink bleach to cure the virus. (Don't do it--it won't cure anything, and it will be very, very bad for you.) 

When free expression collides with harmful content, it presents a very real, and very difficult, challenge for these companies. And it leads us to another question: Do companies have a responsibility to take action to prevent the spread of bad or false information? That's a tricky question, since any action to limit bad information is still a limit on free expression. Still, the right to free speech isn't absolute.

The most famous example of a limit on free speech comes from Justice Oliver Wendell Holmes in Schenck v. United States, in which he wrote, "The most stringent protection of free speech would not protect a man in falsely shouting fire in a theater and causing a panic." The key here is that false speech intended to incite panic is not protected. The same is true for fraud. 

Which certainly seems like a valid reason to limit the spread of bad or false information about a potential global pandemic. The problem is that in a case like the coronavirus outbreak, it's virtually impossible for a tech company like Facebook or Google to screen every source of information and determine whether it's true.

That's why Google has taken action to highlight reliable and verified information when users search for details about the outbreak. While there may still be plenty of misinformation in search results, the idea is that the best information is highlighted in a prominent way. Twitter has taken a similar approach, promoting information from the Centers for Disease Control and Prevention when you search for the virus. 

Facebook, on the other hand, has announced it will remove posts that are considered false or harmful. The company points to its policy of limiting content that can cause real-world harm as a reason for limiting misleading information in this case.

The problem is, while I'm all for minimizing the amount of harmful information, who decides what's accurate? Facebook says it uses a "global network of fact-checkers," and will manually limit the appearance of information its team determines is false. In addition, it says it is removing information that has been flagged as false by outside health organizations.

But do we really want tech companies as arbiters of the truth? And, if so, which version of the truth?

While preventing people from thinking that drinking bleach is a cure for anything is a valid reason for stopping fake news, the company isn't exactly consistent in applying a standard meant to protect people from real-world harm. Facebook has already said it won't limit political ads, even when they are demonstrably false. 

All of this shows that big tech companies haven't come close to figuring out the best way to handle this. Both approaches have problems. In the case of Google and Twitter, fake information is still available, though the hope is that its impact is limited by the presence of verified search results. 

In Facebook's case, it's certainly reasonable to think removing fake information is the right course of action. Except every time you take action to limit people's ability to share information, you take a step down the slippery slope toward censorship. That's especially true when you intervene in some cases and not others.

As technology changes the way we communicate and get news, and as platforms like Facebook and Google become the primary sources of information for many of us, they can help find new ways to limit the spread of dangerous diseases. If only we can figure out how to keep misinformation from becoming its own viral outbreak. 

Published on: Feb 3, 2020