Mark Zuckerberg live-streamed a speech on freedom of expression from Georgetown University today. Facebook has come under pressure from a variety of groups over the way it handles paid political ads and other types of content that many users consider divisive. For example, Facebook refused a request from former vice president Joe Biden's presidential campaign to remove an ad the campaign considered false and misleading.
At the same time, many groups have criticized Facebook for what they perceive as a bias against certain political perspectives. It's clear that Mark Zuckerberg is struggling not only with how to deal with content posted on his platform, but also with how to explain Facebook's philosophy and policy.
Today, he attempted to clarify those views, saying the goal is to "fight to uphold as wide of a definition of freedom of expression as possible and to not allow the definition of what is considered dangerous to expand beyond what is absolutely necessary."
I agree. As I wrote last week, Facebook is in an almost impossible position. Regardless of how it chooses to handle this type of content, once a company starts imposing limits it's no longer truly free speech.
"While I certainly worry about an erosion of truth, I don't think most people want to live in a world where you can only post things that tech companies judge to be 100 percent true," Zuckerberg said. He's right. I'm not sure it's a great idea to have tech companies deciding what's true, especially when they have a conflict of interest in that they make enormous amounts of money based on our engagement with that content, true, false, or something in between.
Facebook seems to think that the better option is to simply create transparency around who is sharing the content instead of creating stricter policies around the content itself.
"We've actually found that a different strategy works best, and that is focusing on the authenticity of the speaker, rather than trying to judge the content itself," Zuckerberg said. Put another way, Facebook's goal is to make sure you know exactly who is spouting off divisive or misleading information, rather than keeping those messages out of your feed.
In theory, that's somewhat helpful. If you're able to judge the credibility of the source, you can determine how to view their content. The problem is, that's not how it really works.
But before we get to that, let's acknowledge that Facebook isn't obligated to allow any particular type of content or views on its platform. It's a publicly traded company, not a government entity, so if it wants to, it can decide what it allows on its platform.
But once you open the platform up for anyone to say almost anything (within reason), you have a responsibility to your community of users to do the right thing. Which leads to the problem: Most people don't pay any attention to the details.
It doesn't matter that misinformation might be spread by Russian troll bots. It doesn't matter that a random celebrity is sharing misleading data about vaccines. Most people don't care that they don't actually know what they're talking about, or that the person they're listening to has an agenda. You can verify identity all you want, but in a celebrity-driven culture where we value the reach of influencers over the knowledge of experts, authenticity has become more or less meaningless. This is especially so on social media, where the loudest, flashiest, most optimized voices get much of the attention.
I respect that Mark Zuckerberg is clearly thinking hard about this problem. I also give him credit for at least trying to clarify his--and by extension Facebook's--view of a very complex issue. However, telling the world, "We'll just work really hard on verifying that these people actually are who they say they are," avoids the more challenging problem of what to do about the content that's creating so much of the division, or whether anything can (or should) be done at all.