Technology could stem the tide of non-consensual explicit images, also known as revenge porn.

These explicit photos and videos, taken or shared without the subject's consent, are typically distributed on social media sites like Facebook. Artificial intelligence, specifically a technique called pattern matching, can block them from spreading further. Here's how it works and why it helps with the problem.

Starting this month, if a user uploads a non-consensual image to Facebook, say a photo taken in a public shower at the beach, and another user reports it as revenge porn (or contacts the Facebook revenge porn hotline), any future sharing of that image is blocked.
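Facebook hasn't published its internal code, but the flow can be sketched in a few lines of Python. The function names here (fingerprint, flag_as_nonconsensual, allow_share) are invented for illustration, and the fingerprint is a simple exact-match file hash as a stand-in; a real system would use the kind of perceptual fingerprint described further down.

```python
import hashlib

# Stand-in fingerprint: an exact hash of the file's bytes. A production system
# would use a perceptual hash (sketched below) so that resized or re-encoded
# copies of the same picture still match.
def fingerprint(image_path):
    with open(image_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

blocked_fingerprints = set()  # fingerprints of images users have reported

def flag_as_nonconsensual(image_path):
    # Called when a user reports an image (or the hotline is contacted).
    blocked_fingerprints.add(fingerprint(image_path))

def allow_share(image_path):
    # Called on every later upload or share attempt; False means block it.
    return fingerprint(image_path) not in blocked_fingerprints
```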

"Not only does the user see a fairly bold prompt but also any sharing of the content is blocked," says Lav Varshney, a University of Illinois professor, an A.I. expert, and a member of the Signal Processing Society at the IEEE. "Unfortunately, it does not mitigate the technical issue over other communication channels, such as texting. The bold prompt, however, will draw the user's attention and perhaps change their behavior so other communication systems are not used."

The technology behind pattern matching, says Varshney, is similar to how your own brain stores images in short- and long-term memory. A kind of marker helps you recognize when you've seen an image or video before, and for humans it's fairly reliable. It's the source of that sense of having seen a movie or photo before, as if an imprint on your eyes and brain refuses to fade away.
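Facebook hasn't said exactly which algorithm it uses, but a common way to create that kind of "marker" in software is a perceptual hash. The sketch below uses the average-hash technique with the Pillow imaging library: shrink the image to a tiny grayscale thumbnail, compare each pixel to the average brightness, and pack the results into a 64-bit fingerprint.

```python
from PIL import Image  # Pillow imaging library

def average_hash(image_path, size=8):
    # Shrink to a tiny grayscale thumbnail so only the broad structure remains.
    img = Image.open(image_path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    # Each pixel contributes one bit: brighter than average or not.
    bits = "".join("1" if p > mean else "0" for p in pixels)
    return int(bits, 2)  # a 64-bit integer fingerprint
```

Because the thumbnail throws away fine detail, the same photo produces the same, or nearly the same, fingerprint even after it has been resized or recompressed.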

Fortunately, computers can be trained to do the same thing using an algorithm. Google already uses pattern-matching technology to recognize the same basic colors, angles, and shape of a person, such as President Trump, across photos. Apple uses the technique in the Photos app on the iPhone and iPad. In fact, you've probably used pattern-matching tech before: on Facebook, machine learning can detect a person's face in a photo to help you with tagging, which can also trigger an automatic share.
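Once every image has a fingerprint, matching is just a comparison of bits. Continuing the hypothetical sketch above, two fingerprints that differ in only a handful of bit positions (the Hamming distance) almost certainly come from the same picture; the threshold of 5 below is an arbitrary illustrative value, not anything Facebook has disclosed.

```python
def hamming_distance(hash_a, hash_b):
    # Number of bit positions where the two fingerprints disagree.
    return bin(hash_a ^ hash_b).count("1")

def matches_blocked(upload_hash, blocked_hashes, max_distance=5):
    # A small distance means "same picture", even if the copy was resized,
    # recompressed, or lightly edited before being re-uploaded.
    return any(hamming_distance(upload_hash, h) <= max_distance
               for h in blocked_hashes)
```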

Of course, the technology can't identify revenge porn the first time an image is uploaded, and that first posting is often where the embarrassment starts. That's where computers are not yet intelligent enough: recognizing that a photo is not just explicit but is meant to harm someone, or was taken without a person's consent. For that to happen, the pattern matching would have to understand the photo's context and location.

As an example, if the A.I. were smarter, it would first identify someone in a photo and then determine whether the person distributing it knows the person pictured or has an ax to grind. Or the A.I. would know the photo was originally sent between consenting adults and then uploaded elsewhere. There are privacy issues with "teaching" an A.I. that kind of context, so for now the system has to rely on users to flag images.

The good news? Facebook is leading the charge on this problem. Seeing a revenge porn warning on one social network might make a user think twice before distributing an image anywhere else. It's definitely a step in the right direction toward eliminating the problem.