In January, Mark Zuckerberg set a big task for himself. Rather than improve himself by learning Mandarin or slaughtering his own meat, as in past years, Zuckerberg undertook to improve his Frankenstein's monster of a product.

"Facebook has a lot of work to do -- whether it's protecting our community from abuse and hate, defending against interference by nation states, or making sure that time spent on Facebook is time well spent," he wrote on his page. "My personal challenge for 2018 is to focus on fixing these important issues."

We've still got eight weeks to go, but it feels safe to predict that Facebook will not enter 2019 with its issues fixed, or even much ameliorated. At best, you could say they've been addressed, with mixed results. While Facebook is no longer in denial about the need to police viral disinformation and hate speech, its actions on those fronts still feel ad hoc and reactive, lacking any coherent framework. Despite the qualified success of its midterms war room in preventing hoaxes from significantly affecting another election, hatred, fear, sensationalism, and extremism are still the salt/fat/acid/heat of Facebook, and the social internet as a whole.

"The world feels anxious and divided," Zuckerberg wrote in January. It feels no less anxious or divided now, on Facebook or off it.

That this remains the case is curious in light of one thing Zuckerberg's supporters often say about him: that his absolute voting control of Facebook, paired with the natural authority that comes with having built it from the ground up, enables him to make bold, unpopular moves no one else could get away with.

That's probably true. Which only makes it that much more conspicuous that Zuckerberg hasn't made any such moves.

Sure, he has committed to spending some fraction of Facebook's profits hiring security engineers and content moderators, and to tweaking the News Feed algorithm to discourage inflammatory clickbait--even at the expense of user engagement. But can anyone imagine a professional CEO-for-hire not making those same concessions after the two years Facebook's had?

If Zuckerberg wanted to use his unique position to pursue a step-function reduction in Facebook's harmfulness, as opposed to the timid incrementalism he's practiced so far, he could ban political advertising altogether, as the company has considered doing. He could offer users the option to opt out of data collection and targeted advertising, in exchange for a subscription fee. He could eliminate engagement as a target metric, along with the various feedback mechanisms that encourage viral sharing. (Twitter is contemplating a version of this, with possibilities including scrapping the "like" button and follower counts, both of which shape user behavior as much as they quantify it.)

One interesting suggestion comes from the VC (and former Technology Review editor) Jason Pontin, who argues the big internet platforms could embrace the notion of "discursive intolerance"--a sort of zero-tolerance policy for speech whose only function is to hijack or corrupt public discourse. Safeguarding the integrity of the marketplace of ideas, the argument runs, requires quarantining, rather than debating, malignant conspiracy theories and other fringe opinions that are impervious to rational argument.

Of course, adopting a standard of discursive intolerance would put Facebook directly in the crosshairs of legislators who are already keen to find any evidence of political bias in its decision making. It would require the company to wade neck-deep into a morass it's desperate to avoid: becoming a so-called "arbiter of truth," as COO Sheryl Sandberg put it. Like all the potential experiments suggested above, it would carry serious risks for the company--financial, technical, regulatory, and competitive.

But isn't the entire point of Zuckerberg's one-man rule that he can take on those risks if he wants to? 

One of Zuckerberg's more prominent critics over the past two years has been Salesforce co-CEO Marc Benioff, who has said, among other things, that Facebook and other social-media platforms should be regulated the way we regulate other addictive, bad-for-you products, like tobacco.

Not content with antagonizing Zuckerberg, Benioff has also taken on a slew of his fellow tech billionaires with his successful backing of Proposition C, which will tax San Francisco's biggest businesses and use the proceeds to fund affordable housing and other services for the city's homeless. To one local blogger, Benioff using his millions to cancel out the likes of Square's Jack Dorsey and Stripe's Patrick Collison conjured "Godzilla trading blows with other titans"--a battle waged high above the skyline, with the tiny, terrified citizens far below powerless to affect the outcome.

Like the Mothra-fighting Godzilla of later films, Benioff doesn't need to be doing what he's doing. While addressing homelessness would have benefits for businesses in San Francisco, Benioff has made it clear this is basically a moral crusade, one that will cost Salesforce itself an estimated $10 million a year in additional taxes. Doesn't Benioff have a fiduciary duty to avoid excess taxes? He doesn't think so, and--so far, at least--Salesforce's shareholders aren't pressing the issue.

"Unaccountable" is a word that gets thrown around a lot in Silicon Valley, where CEOs like Zuckerberg, Alphabet's Larry Page and Snap's Evan Spiegel craft their share structures and boards of directors to make themselves impossible to fire. But a lack of accountability isn't automatically a bad thing. As Zuckerberg's fans believe, and as Benioff has shown, extreme job security can free up CEOs to make the difficult but correct choice.

Failing to use that power isn't unaccountable. It's unconscionable.  

Published on: Nov 8, 2018