Nov 29, 2019

This is how Facebook’s AI looks for bad stuff

Posted in categories: robotics/AI, terrorism

The context: The vast majority of Facebook’s moderation is now done automatically by the company’s machine-learning systems, reducing the amount of harrowing content its moderators have to review. In its latest community standards enforcement report, published earlier this month, the company claimed that 98% of terrorist videos and photos are removed before anyone has the chance to see them, let alone report them.
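
For a sense of how a figure like that 98% is typically derived, here is a minimal sketch of a proactive-rate calculation, i.e. the share of actioned content caught before any user reported it. The counts and the function name are hypothetical, chosen only so the example reproduces the headline number; they are not taken from Facebook’s report.

```python
def proactive_rate(found_by_ai: int, found_via_reports: int) -> float:
    """Share of actioned content that was caught before any user reported it."""
    total_actioned = found_by_ai + found_via_reports
    return found_by_ai / total_actioned if total_actioned else 0.0

# Hypothetical counts for one reporting period (not figures from the report).
removed_proactively = 9_800   # items removed before anyone saw or reported them
removed_after_reports = 200   # items removed only after a user report
print(f"Proactive rate: {proactive_rate(removed_proactively, removed_after_reports):.0%}")
# -> Proactive rate: 98%
```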

So, what are we seeing here? The company has been training its machine-learning systems to identify and label objects in videos, from the mundane (vases or people) to the dangerous (guns or knives). Facebook’s AI uses two main approaches to look for dangerous content. One is to employ neural networks that look for the features and behaviors of known objects and label them with varying percentages of confidence (as seen in the video above).
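
Facebook has not published this pipeline, so the following is only a rough sketch of the general idea: frame-level object detection that returns (label, confidence) pairs, plus a fixed threshold for deciding what gets flagged. The detect_objects stub, the label set, and the threshold are all placeholders for illustration.

```python
from typing import Iterable

# Labels treated as dangerous; a real system would use a much richer taxonomy.
DANGEROUS_LABELS = {"gun", "knife"}
CONFIDENCE_THRESHOLD = 0.80  # assumed cut-off for flagging a detection

def detect_objects(frame) -> list[tuple[str, float]]:
    """Stand-in for a real detector (e.g. a convolutional neural network).
    Returns (label, confidence) pairs for objects found in one video frame."""
    # Canned output so the sketch runs without a trained model.
    return [("person", 0.97), ("vase", 0.41), ("gun", 0.88)]

def flag_frame(frame) -> list[tuple[str, float]]:
    """Return the dangerous objects detected above the confidence threshold."""
    return [
        (label, conf)
        for label, conf in detect_objects(frame)
        if label in DANGEROUS_LABELS and conf >= CONFIDENCE_THRESHOLD
    ]

def review_video(frames: Iterable) -> bool:
    """Flag the video if any frame contains a high-confidence dangerous object."""
    return any(flag_frame(frame) for frame in frames)

if __name__ == "__main__":
    fake_frames = [object() for _ in range(3)]  # placeholders for decoded frames
    print("Flag for review:", review_video(fake_frames))  # True with the canned detector
```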

Training in progress: These neural networks are trained on a combination of pre-labelled videos from Facebook’s human reviewers, reports from users, and, soon, videos taken by London’s Metropolitan Police. The neural nets use this information to guess what the entire scene might be showing and whether it contains any behavior or images that should be flagged. Facebook gave more details on how its systems work at a press briefing this week.
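
As a hedged illustration of what supervised training on pre-labelled clips could look like, the sketch below runs one update step of a small PyTorch classifier over placeholder clip embeddings. The feature size, architecture, and random labels are assumptions for the example; none of this reflects Facebook’s actual training setup.

```python
import torch
from torch import nn

# Placeholder size for a clip embedding; the real features are not public.
CLIP_FEATURE_DIM = 512

class ClipClassifier(nn.Module):
    """Tiny binary classifier: does this clip contain content to flag?"""
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CLIP_FEATURE_DIM, 128),
            nn.ReLU(),
            nn.Linear(128, 1),  # single logit: flag / don't flag
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def train_step(model, optimizer, loss_fn, features, labels) -> float:
    """One supervised update on a batch of pre-labelled clip features."""
    optimizer.zero_grad()
    logits = model(features).squeeze(1)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = ClipClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    # Random tensors stand in for clip embeddings and reviewer-supplied labels.
    features = torch.randn(32, CLIP_FEATURE_DIM)
    labels = torch.randint(0, 2, (32,)).float()
    print("batch loss:", train_step(model, optimizer, loss_fn, features, labels))
```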
