Facebook will use artificial intelligence to halt terrorist propaganda online, the social network announced Thursday. The move comes after harsh criticism and pressure from politicians who say technology companies must take more responsibility for filtering online extremism.

Facebook said it is expanding its use of A.I. to recognize potential terrorist accounts and posts on its platform, which can then be deleted or blocked without human review. Previously, Facebook and other tech companies relied on human reviewers to remove offensive content, even when algorithms had already flagged it.

"Our A.I. can know when it can make a definitive choice, and when it can't make a definitive choice," said Brian Fishman, lead policy manager for counterterrorism at Facebook, according to The Wall Street Journal. "That's something new."

The announcement follows three terrorist attacks in the U.K. in four months, after which Prime Minister Theresa May demanded new international agreements to better regulate the internet and the companies that spread extremist content. In response to May, Facebook disclosed new software, already in use for several months, that combs the site for known terrorist imagery and stops it from being reposted, according to The Wall Street Journal. That covers material such as beheading videos, but not violent footage like the video of the Cleveland murder that was posted on the social network in April.
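
At a high level, blocking re-uploads of known material works by fingerprinting imagery that has already been flagged and checking new uploads against those fingerprints. The sketch below uses a simple exact SHA-256 digest to illustrate the idea; Facebook has not published its matching technology, and production systems generally rely on perceptual hashes that survive re-encoding and cropping.

```python
# Minimal sketch of blocking re-uploads of known imagery via fingerprint matching.
# Uses an exact SHA-256 digest for illustration only; the blocklist is hypothetical.

import hashlib

known_image_fingerprints = set()  # hypothetical blocklist of flagged imagery

def fingerprint(image_bytes: bytes) -> str:
    """Return a stable fingerprint for an image's raw bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def register_flagged_image(image_bytes: bytes) -> None:
    """Add confirmed terrorist imagery to the blocklist."""
    known_image_fingerprints.add(fingerprint(image_bytes))

def should_block_upload(image_bytes: bytes) -> bool:
    """Reject an upload whose fingerprint matches previously flagged imagery."""
    return fingerprint(image_bytes) in known_image_fingerprints
```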

Published on: Jun 16, 2017