Mark Zuckerberg has gotten a lot of things right.

Over the years, he has shown a knack for identifying companies that represent nascent threats to Facebook and either buying them early (Instagram, WhatsApp) or competing them into the dirt (Snapchat). His hiring of Sheryl Sandberg is often cited as an example of how a single personnel move can transform a company's fortunes. His focused leadership made Facebook an exception to the rule that technology companies rarely fare well in platform shifts, ensuring it became even more dominant in mobile than it had been on the web.

When it comes to knowing what will be good for Facebook, in other words, Zuckerberg is probably as close to infallible as a CEO could hope for. It's only when he tries to think about what's good for the rest of the world that his judgment fails him.

It failed him in 2010 when he pledged $100 million to fix the public schools in Newark, New Jersey, without getting anything like the results he wanted. It failed him two years ago when he tried to sell India on a "Free Basics" plan under which mobile phone providers would have offered free access to a small number of websites and services, including Facebook. And it failed him, spectacularly, when he allowed Facebook to be used by the Russian government to interfere in the 2016 election.

A common thread runs through these failures: Zuckerberg, ever the optimist and idealist, expects others to recognize and share his good intentions, and underestimates the strength of entrenched dysfunction, factionalism and perverse motives.

He expected teachers to rally to his call to reform education rather than worrying about their job security. He thought Indians would welcome the arrival of Internet access rather than reject it as "digital colonialism." And he thought giving as many people as possible a "voice" via Facebook would make for a more informed, more tolerant public--rather than give the forces of disinformation and intolerance a powerful new weapon.

Last week brought the news that Facebook sold at least $150,000 worth of advertising to a Russian "troll farm" known for doing the Kremlin's dirty work on the Web. What it didn't bring was any sign that Zuckerberg is truly reckoning with his company's culpability for what has already happened--in particular, its role as a vehicle for fake news and bots--and its responsibilities going forward. It shouldn't take a Congressional subpoena for the public to find out what sorts of ads the thousands of sham Russian accounts bought, or how those ads were targeted. But it might.

After the election, Zuckerberg made a big show of soul-searching, publishing a 6,000-word manifesto in which he grappled with how Facebook might be contributing to increasing political polarization and what it should do now. His answer: supporting "meaningful communities" that foster empathy and understanding.

But the problem of foreign entities injecting clandestine propaganda into American elections doesn't require such a squishy remedy. It just requires that existing federal disclosure requirements that govern political advertising in television, radio and print be extended to social media. They haven't been yet, in part because Facebook successfully lobbied against it in 2011.

Only the most extreme cynic would suspect that Zuckerberg had any inkling the loophole his company fought to preserve would someday be used, illegally, by a hostile regime seeking to destabilize the U.S. A more charitable interpretation is that Zuckerberg, like almost everyone else, just didn't see it coming.

And that's important to keep in mind when contemplating all the other areas where Zuckerberg's foresight might not be as acute as his stellar business resume implies. For starters, there's his position on artificial intelligence. Zuckerberg believes it will be a wonderful thing for humanity, and he's annoyed as hell that Elon Musk is whipping up fears of AI doomsday scenarios.

"Technology can always be used for good and bad, and you need to be careful about how you build it, and what you build, and how it's going to be used," he said on a Facebook Live stream. "But people are arguing for slowing down the process of building A.I.--I just find that really questionable. I have a hard time wrapping my head around that."

Judging from his other notable failures, that last part, at least, is accurate: Zuckerberg has a hard time wrapping his head around how people will use AI to do harm. After all, he couldn't even wrap his head around how people would misuse the things he understands best: Facebook's pages and advertising tools.

Saying that new technologies will pose new threats and opportunities for abuse isn't scaremongering. It's realism. You don't contain those threats with happy talk and manifestoes about meaning and connection. You do it with commonsense measures like the campaign advertising regulations Facebook lobbied to weaken.

Until Zuckerberg shows that he understands all this, no one who's not a Facebook stockholder should accept his judgment--about AI, or anything else.