The CEOs of Twitter, Facebook, and Google will appear before the Senate Commerce Committee today to talk about how they moderate content on their respective platforms. Ostensibly, the premise is that lawmakers are exploring whether they should change aspects of the law known as Section 230, which provides certain protections to platforms that host user-generated content.
There have been plenty of calls lately to change that law, most recently with the head of the FCC, Ajit Pai, saying that the Commission will reinterpret the law to mean the opposite of what it actually says. Previously, Senator Josh Hawley had introduced legislation to strip tech platforms of the protection provided by the law.
Section 230 of the Communications Decency Act basically gives platforms, like Facebook, the ability to moderate (or not moderate) content on their platforms without legal liability. There are a few exceptions, most notably for copyright infringement and criminal activity.
The Electronic Frontier Foundation (EFF) calls it "one of the most valuable tools for protecting freedom of expression and innovation on the Internet." Without Section 230, social media networks like Facebook, Twitter, and YouTube simply wouldn't exist. They could be held liable for the content their users create and post, and--considering the amount of content posted every day--it wouldn't be feasible, or profitable, to manage.
The irony is that calls for changing Section 230 come from both sides of the argument. One side essentially wants these platforms to moderate more content. The other wants them to moderate less. Both seem to think that changing the law will give them more control over the online content they either agree or disagree with. Spoiler alert: It won't.
And, so, the Senate Commerce Committee has asked the three CEOs to appear and answer questions. Whether we actually learn anything depends on a few complex variables.
For example, one of the biggest issues is that the social media giants built platforms that are designed to amplify content in a way that increases engagement. YouTube's algorithms are meant to keep you watching more and more videos. Facebook's are designed to show you content that you like, which keeps you on the platform longer, which means it can show you more ads.
Except, in many cases that means amplifying controversial, fake, or otherwise incendiary posts. That's a very real problem that the platforms have done a very poor job of dealing with. To counteract that amplification, the companies have instituted policies to moderate some of what they believe are the worst types of content.
Twitter adds labels to what it calls "misleading posts." Facebook is banning new political ads in the week leading up to the election, and even afterward. YouTube demonetizes channels that violate its policies.
Just last week, Twitter attempted to tamp down the sharing of a controversial New York Post article about Hunter Biden by preventing users from posting the link, before later changing course to allow it. We'll set aside, for a moment, how ineffective that actually is at stopping the problem. There are far too many easy ways around that type of move.
The real issue is, regardless of which side of the political spectrum you're on, do we really want social media companies deciding that an article from a mainstream publication shouldn't be shared? Again, try to forget whether you agree with this particular article, or how you feel about the New York Post. Next time it could easily be something you care about from a publication you support.
In every case, content that some people believe they should be allowed to see, or see amplified, ends up being moderated or even removed. The thing is, you can't make a law to force private companies to do what you want them to do with the content on their platforms. The First Amendment is still a thing.
Because, ultimately, the question is who decides what types of content should be moderated. Obviously, the platforms can decide as they'd like, but it's almost impossible--in the polarized world we live in--to do it in a way that doesn't make almost everyone angry.
That simple fact is why you're likely to see a lot of angry senators on both sides of the aisle telling Mark Zuckerberg, Jack Dorsey, and Sundar Pichai why their companies no longer deserve protection under Section 230. Never mind that removing that protection will have the opposite of the effect they intend. Without it, the platforms will surely restrict far more content than they already do.
In the end, the thing we're most likely to learn is just how bad lawmakers are at fixing very real problems when the cameras are pointed in their direction.