(This is a guest post from Applico Head of Platform Nick Johnson, my co-author on Modern Monopolies.)

Facebook and Twitter were both in the spotlight this week for all the wrong reasons.

Both companies have been dragged to Capitol Hill recently regarding the activities of Russian "troll farms" and their impact on US politics. Whatever you think of the ongoing Russia saga, the experience has exposed some difficult truths for Facebook, Twitter and other platform companies.

For a long time, Facebook and Twitter have tried to retain a relatively neutral posture when it came to what content users posted on their sites. Facebook had provisions against things like nudity and harassment, but when it came to the veracity of content posted on its platform, its view was that the quality of content was up to users. If users engaged with content, it was good, and if they didn't, it wasn't and would get deprioritized in the News Feed. Engagement was the ultimate arbiter of value.

Twitter, the more laissez-faire of the two, has long had problems with harassment and abuse on its platform. It has struggled to balance its desire to curb that abuse with its commitment to remaining neutral on free speech.

The struggles of these companies aren't unique. Every platform struggles with "bad actors," from counterfeits on Amazon and Alibaba to copyrighted content being posted on YouTube. The difference for Twitter and Facebook is that this time, the lines between "good" and "bad" aren't black and white.

These two platforms are caught in their role as quasi-private, quasi-public entities. Yes, they're both private companies, but each platform's network also acts as a kind of public good. This dual role means that in some ways, Facebook and Twitter have to make decisions that look a lot like those of the other entity that deals with governing public goods: governments.

Governments have to make difficult, subjective decisions that will please some citizens and anger others. There isn't always a Pareto optimal outcome where everyone wins.

Platforms have to do this too. They have to decide what kinds of behavior are "bad" and to be discouraged and which kinds are "good" and to be encouraged.

Facebook has come face to face with this challenge with the issue of fake news. Facebook had to decide that fake news was indeed "bad" (something which CEO Mark Zuckerberg was loath to admit). This is an explicit value judgement contrary to Facebook's previous stance that, roughly, whatever content users engaged with was "good."

Similarly, Twitter has had trouble squaring the idea that trolls and harassment are "bad" with its commitment to free speech as "good."

Whether or not these decisions are correct is not the issue here - I might agree they are because I share the same values. Someone else might disagree, and indeed, the proliferation of "alt-right" versions of many platforms suggests some do. But these decisions are, nonetheless, value judgements. And for tech companies that try to portray themselves as merely neutral technical facilitators for their users, this is a difficult concept to deal with. It's also in their interest not to acknowledge it.

As noted in my book Modern Monopolies, Facebook CEO Mark Zuckerberg often compared many of the decisions Facebook had to make about how to govern its network to those of a government. These decisions are, in effect, public policy decisions.

In my interview with Twitter's former Director of Platform Ryan Sarver, he similarly described his time as the head of Twitter's developer community as like being the "mayor of a town." Sarver said, "our job is to create incentives and disincentives to produce the best behavior, the best outcome, from a bunch of people you'll never meet. So our job was to create policy that lets people know where the guardrails are and what behaviors are expected out of them. I went into the job having never done any real policy work before and I never realized how important policy was going to be and it ended up being a huge part of our time."

Like any government, platform companies have to decide what they stand for. They have implicit or explicit values based on how they construct their platforms and networks. Any network, much like a society or country, has implicit values that determine who is included in or excluded from that community. There is no "objective" right answer. The right answer depends on what the platform's values are and what it is trying to achieve.

However, it's no accident that as time went on, Zuckerberg ceased to use this kind of language. The more the public perceives platforms like Facebook as quasi-governmental entities, the more likely Facebook is to be subject to government regulation.

The controversy over fake news and Russian trolls has, unfortunately for Facebook, brought this issue to the forefront. How Facebook and Twitter respond will have important implications for platform businesses in the future.

Published on: Sep 29, 2017