Facebook has just unveiled a new set of rules and actions intended to make it harder for outside governments or others to influence American elections. They may not prevent such foreign interference entirely--it may be impossible to shut it down completely. But these new rules are a big step in the right direction.

Here's what Facebook has just announced:

1. Shutting down four misinformation networks.

Facebook claims it's shut down more than 50 networks of inauthentic accounts, pages, and groups on Facebook and Instagram. These accounts and groups, Facebook says, were working in coordination to spread misinformation, often just ahead of elections around the world. On Monday, it announced that it had just shut down four more such networks, three of them originating in Iran, and one in Russia.

Coordinated spreading of false information--much of it designed simply to deepen divisions between Americans rather than promote any particular viewpoint or candidate--bedeviled the 2016 U.S. election and others as well. So anything Facebook can do to root out and stop these groups, even if it can't catch them all, is bound to be helpful.

2. Labeling fake news.

Ever since the 2016 election, Facebook has been using independent third-party fact-checkers, certified through the International Fact-Checking Network at the Poynter Institute, to review some news stories. The company says it has been showing items that these fact-checkers find partly or completely false lower down in its news feeds.

With this week's announcement, the company is making more use of these fact-checker findings, prominently labeling posts that the fact-checkers find partly or completely false. Instagram users who share news that's been found inaccurate will now get a pop-up warning them about it: "Independent fact-checkers say this post includes false information. Your post will include a notice saying it's false. Are you sure you want to share?" A similar warning is apparently already in place on Facebook.

"News" that is surprising, heartwarming, or likely to induce outrage tends to spread very quickly on Facebook whether it's true or not. For example, a supposed news story about Pope Francis endorsing Donald Trump was shared more than a million times, even though in fact Francis is not at all a fan of Trump. The sad fact is Facebook users very rarely check whether something is true or not before sharing it. So it's a good thing certified fact-checkers are doing it for them.

The only drawback I can see is that some Facebook and Instagram users may assume that any news story without a warning label is definitely true. That won't be the case, because the fact-checkers can't possibly check every story.

3. Making it clearer where news is coming from.

Next month, Facebook will begin labeling news outlets that are partly or completely controlled by national governments, such as Russia Today. The labels will appear on these outlets' pages and in Facebook's ad library.

And Facebook is working to make pages more transparent. It has begun adding transparency information to pages, showing which country a page is controlled from, whether it has merged with other pages, how long it has existed, and whether it has changed its name. The company now plans to add more information, such as the page's legal owner and verified city, phone number, or website. Initially, it will provide this information for pages with large numbers of U.S. fans that have gone through Facebook's verification process. Starting in January, pages that run ads about politics or social issues will be required to show their confirmed page owners. (There is already a separate authentication process required before running these types of ads.)

Of course, ads are only the tip of the iceberg when it comes to manipulating Facebook users. Organic content such as news posts is widely used for the same purpose. Still, anything that makes it a little harder for foreign nationals to manipulate U.S. users is a good thing.

4. Making it harder to discourage people from voting.

Facebook says it has cracked down on misinformation about where or how to vote, beginning with the 2018 midterm elections. It has also cracked down on voter intimidation and on content or ads suggesting that people should be barred from voting based on their ethnicity, religion, nation of origin, and so on.

Ahead of the 2020 election, it says it will also ban advertising that suggests voting is useless or that encourages users not to vote. This is a good idea: we now know that ahead of the 2016 election, Russian operatives posing as African-Americans posted ads and content on Facebook suggesting that African-Americans should not vote.

5. Protecting candidates' accounts and tracking their ad spending.

Facebook is launching something called Facebook Protect, which is designed to prevent hacking of the accounts of elected officials, candidates, and the people who work for them. These users can enroll their accounts in Facebook Protect and will be required to turn on two-factor authentication. Once they do, Facebook will monitor their accounts for hacking and for sign-ins from unusual locations.
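
To make the "sign-ins from unusual locations" idea concrete, here's a minimal sketch of how that kind of check can work in principle: compare each login's location against the locations the account has used before. Facebook hasn't published how Facebook Protect actually does this, so everything below--the AccountHistory class, the country-level comparison, the function names--is a hypothetical illustration, not Facebook's implementation.

```python
# Illustrative sketch only: a toy check for sign-ins from unfamiliar locations.
# The data structures and the country-level comparison are hypothetical.

from dataclasses import dataclass, field

@dataclass
class AccountHistory:
    # Countries this account has previously signed in from.
    known_countries: set = field(default_factory=set)

def is_suspicious_sign_in(history: AccountHistory, sign_in_country: str) -> bool:
    """Flag a sign-in coming from a country the account has never used before."""
    return sign_in_country not in history.known_countries

def record_sign_in(history: AccountHistory, sign_in_country: str) -> None:
    """Once a sign-in is verified (e.g., via two-factor), remember its location."""
    history.known_countries.add(sign_in_country)

# Example: a campaign staffer who normally signs in from the U.S.
history = AccountHistory(known_countries={"US"})
print(is_suspicious_sign_in(history, "US"))  # False: familiar location
print(is_suspicious_sign_in(history, "RU"))  # True: flag for review
```

A real monitoring system would presumably weigh many more signals--device, IP reputation, time of day--rather than a simple country check, but the basic idea of comparing new sign-ins against an account's history is the same.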

Facebook Protect is a great idea, but I'm wondering what Facebook plans to do about miscreants creating fake accounts that appear to belong to a real person or candidate. It's relatively easy to create a new Facebook account using a newly created email address and then name that account after an existing person. It's certainly much easier--and more likely to go undetected--than hacking into an existing account. This is already a widespread problem for ordinary Facebook users, and it's likely to become one for candidates as well.

Beginning with this election cycle, Facebook is also providing a spend tracker for presidential candidates that shows how much each has spent on advertising across Facebook, Instagram, and Facebook Messenger. Knowing how much a candidate spent on Facebook ads may not change your mind about supporting that candidate. Still, it's good to know.