As election season ramps up, social media networks are under increased scrutiny over the prevalence of misleading, and often outright fake, information posted on their platforms. Twitter appears to be testing a plan that would involve using journalists and fact-checkers to mark posts by public figures that are deemed "harmfully misleading."

That's according to a report from NBC, which received a leaked demo of the new plan. The effort includes a possible points-based system that would reward users who "provide critical context to help people understand [the] information they see." The points system is designed to reduce the possibility that bad actors would manipulate the reporting process.


The plan also includes marking false tweets with a large orange box labeled "harmfully misleading." Users would also be able to report tweets and add context to clarify bad information. By highlighting reports from verified, and presumably neutral, sources, the company seems to be counting on that information to help people make better decisions.

Twitter confirmed to NBC that the leaked information represents a "possible iteration of a new policy" that the company plans to roll out on March 5. A spokesperson for Twitter said the company is "exploring a number of ways to address misinformation and provide more context for tweets on Twitter."

And it's not just candidates and their surrogates battling it out and spreading what can only charitably be called political spin. The recent coronavirus outbreak has led to an onslaught of bad information. In that regard, Twitter and Facebook don't have the luxury of doing nothing. Fake news is a real problem, and it requires a real solution.

The problem is that the solution should differ based on the type of bad information. For example, identifying misleading information about a global health emergency is relatively objective. It's also really important, since bad information can lead people to make bad decisions. (No, you should never drink bleach.)

I'm willing to give the company credit for recognizing its fake news problem; every company should be willing to look at its own flaws and attempt to fix them. Still, I'm not optimistic that this plan is going to help, especially when it comes to elections.

First, let's assume it's possible for unbiased, neutral observers to objectively evaluate information and identify the bad stuff. I'm not convinced it is, since many journalists are seen as having a viewpoint, but for the sake of argument, we'll grant the premise.

The problem, then, is this: in a highly polarized environment (like an election), people don't even agree on what's true. It's not just that people share false information; it's that their followers and supporters believe it's true. Marking a tweet as "harmfully misleading" isn't likely to be persuasive when people can't even agree on an objective standard of reality.

And for those people, there are no neutral sources of verified information. If you mark something my person says as false, you're not neutral and you're not being helpful; you're attacking what I believe. In that sense, Twitter doesn't have a fake news problem, it has an ignorance problem. If we don't solve that, nothing else is likely to make much of a difference.