In Silicon Valley, unlike in the rest of the corporate world, the notion that companies have an ethical duty beyond maximizing shareholder value and obeying the law is not a controversial one. It's rare to find a big company started there in the last 20 years that doesn't see itself as "making the world a better place," directly or indirectly.
Uber is reducing drunk driving and carbon emissions. Airbnb is helping middle-class people pay their rent. Twitter is enabling peaceful democratic revolutions. Salesforce puts 1 percent of its equity and resources toward philanthropy. And so on.
Equally widespread, however, is the belief that doing right doesn't mean taking sides. The preferred stance is what you might call a "tools and rules" approach: Give people tools to empower them and create rules to prevent abuse of those tools. Beyond that, don't get involved. Because technology itself is a force for good, the best thing you can do is get your tools into as many hands as possible. Courting controversy and making enemies only gets in the way of that.
This essentially negative view of techno-morality was famously crystallized in Google's original motto, "Don't be evil." In a letter published on the eve of Google's IPO, the company explained the phrase to mean its search results were "unbiased and objective," not subject to influence by advertisers or the whims of Google's own engineers. In a modest way, the slogan captured the libertarian idea that a powerful company, like a nation-state, is most virtuous when it governs least -- even if that means tolerating hate speech.
"An important part of our values as a company is that we don't edit the search results," Google co-founder Sergey Brin said in 2008, when asked why a query for "Jew" returned an anti-Semitic website. "What our algorithms produce, whether we like it or not, are the search results."
More recently, we heard echoes of that from Mark Zuckerberg. Defending Facebook against charges that its algorithm promoted fake news stories that may have influenced the outcome of the U.S. presidential election, Zuckerberg implied that the company can only go so far without compromising on its commitment to neutrality.
"We believe in giving people a voice, which means erring on the side of letting people share what they want whenever possible," he wrote in a post on his Facebook page. "We need to be careful not to discourage sharing of opinions or to mistakenly restrict accurate content. We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties."
Twitter CEO Jack Dorsey has shown an ounce more willingness to take sides in his attempts to combat racist and sexist harassment on his service, recently conducting a purge of accounts associated with the so-called alt-right movement -- a move critics described as sorely overdue.
But in restoring the posting privileges of alt-right leader Richard Spencer over the weekend, the company explained that Spencer's original sin wasn't spreading white nationalist views or fomenting hatred but creating duplicate accounts in violation of the Twitter rules.
To be sure, there's an appealing humility about the tools-and-rules approach. Who are we, Zuckerberg and Dorsey ask, to make the final determination of which speech is good and which bad? That's a job for "the community."
But that humility masks a deeper arrogance. Even if you accept Zuckerberg's proposition that Facebook is making the world "more open and connected" -- rather than recognize the evidence that it's dividing us into ever more insulated and polarized digital tribes -- you still have to weigh that against the damage it has caused. By viewing everything through the distorting prism of "engagement," Facebook has undermined the economics of the news business and degraded the quality of information available to users in a way that directly fuels our "post-fact politics." If fake news or a micro-targeted voter suppression scheme played a meaningful part in Donald Trump's surprising electoral win, the butcher's bill could be considerably higher. But what's any of that when you believe your Facebook fortune will enable you to eradicate all disease?
Fused with that arrogance is a frightening naivety. The past few weeks and months have brought report after report that Russia's ruling regime has sought to introduce chaos and suspicion into America's democratic process, and possibly even influence the outcome of that process, by working through multiple channels, among them Facebook. According to BuzzFeed, U.S. intelligence officials agree with the assessment by an independent group called PropOrNot that Russia was behind some proportion of the false headlines that flourished on Facebook through the summer and fall. That's on top of the army of state-sponsored Russian trolls posing as Trump supporters on Facebook and other social-media platforms.
In Facebook, Zuckerberg has built what's arguably the first-ever technology allowing a foreign power to spread propaganda within the borders of the U.S. at mass scale without being detected. Yet, as my colleague Tess Townsend reported, Facebook takes no precautions to discourage that from happening or shed light on how often it happens. Asked what it does to monitor foreign propaganda or prevent the buying of illegal political ads, Facebook merely pointed out its terms of service, which require advertisers to comply with "all applicable laws and regulations."
And here we run into the limits of the tools-and-rules approach. "Letting people share whatever they want" sounds great, unless those "people" are a foreign government bent on weakening America. Or a murderous terrorism syndicate seeking to win new recruits. Or neo-Nazis trying to make people think the Holocaust never happened.
The problem with rules is that bad actors, almost by definition, don't care about them. In fact, bad actors like rules: as in a video game, the more clearly they're delineated, the easier they are to beat.
If you design your rules according to the proposition that, as Zuckerberg puts it, "people are good" and "believing in people leads to better results over the long term," they are almost sure to prove inadequate when those "people" are nation-states with bad intentions. If you leave the definition of "good" up to "the community," don't be surprised when the best-organized, best-funded, or angriest segments of "the community" decide what's good are beheading videos, concentration camp memes, and false-flag conspiracy theories. If Zuckerberg and Dorsey don't want to be "arbiters of truth," the Kremlin, Alex Jones, and ISIS are happy to take over that responsibility.
There's one argument for the neutral-platform approach that's hard to refute: From a business standpoint, the perception of neutrality is invaluable. After a thinly sourced report accused the editors who curated headlines for Facebook's "Trending News" module of persistent liberal bias, the backlash was immediate. The company decided it was better to take its chances with algorithms that can't tell fact from fiction than to risk an exodus of conservative users.
There are also relationships with lawmakers to consider. Zuckerberg is said by those who know him to fear regulation above all else. That fear, as much as the prospect of alienating users, is what undergirds his studied embrace of political neutrality.
Still, Zuckerberg has greater freedom of action than most CEOs thanks to a stock structure that lets him sell most of his shares of Facebook while maintaining voting control in near-perpetuity. Zuckerberg carefully negotiated a clause that would allow him to take a leave from Facebook to serve in public office or government, so presumably he has some sort of political beliefs -- if not outright political ambition -- as well as some core ideas about what's good for society and what's not. Facebook may reach one-quarter of the world's population and have the revenues of a small country, but it's still, after all, a private-sector enterprise, not constrained by the First Amendment in any decisions it makes about speech limits.
Standing for something more concrete than the emptiness of words like "open" and "connected" would be costly for a company like Facebook. For Zuckerberg to promulgate an affirmative notion of goodness in his role as CEO as avidly as he does in his philanthropy would, in all likelihood, result in a diminution of Facebook's size and power.
But until he and other Silicon Valley CEOs show more willingness to take moral responsibility for that power, it's hard to regard that as a bad thing.