Facebook patent hints at an automated solution for fake news
Facebook may have said in the past few weeks that it's stepping up its fight against fake news, but there are signs it had a way to tackle the problem sooner. A recently published USPTO filing from 2015 reveals that Facebook has applied for a patent on technology that would automate the removal of "objectionable content." It's ostensibly aimed at hate speech, porn and other material Facebook has objected to for years, but the system could easily be applied to bogus stories as well.
The approach would supplement user-based content flagging with machine learning. The automated system would generate a score for each piece of content based on the likelihood that it's objectionable, helping human moderators decide which material to cut. It'd look, for example, at the number of users objecting to a given item, as well as the age of the account filing a complaint (to discourage harassment and trolling). The machine learning code would study valid flags and learn to make more informed decisions about objectionable content over time.
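To make the idea concrete, here's a minimal sketch of how such flag scoring might look. The function name, weighting scheme and numbers are all illustrative assumptions for this article, not anything from Facebook's actual filing:

```python
# Hypothetical sketch of the flag-scoring idea described in the patent
# application. The weighting scheme and thresholds here are illustrative
# assumptions, not Facebook's actual implementation.

def objectionability_score(flagger_account_ages_days, min_trusted_age_days=30):
    """Score a piece of content from 0 to 1 based on user flags.

    Flags from established accounts count more than flags from brand-new
    ones, which discourages harassment campaigns run from fresh accounts.
    """
    if not flagger_account_ages_days:
        return 0.0
    # Weight each flag by how established the flagging account is,
    # capping the weight at 1.0 once an account passes the trust threshold.
    weighted = sum(
        min(age / min_trusted_age_days, 1.0)
        for age in flagger_account_ages_days
    )
    # Squash the weighted total into [0, 1): more credible flags, higher score.
    return weighted / (weighted + 1.0)

# One flag from a day-old account barely moves the score...
print(round(objectionability_score([1]), 3))       # prints 0.032
# ...while several flags from long-established accounts push it much higher.
print(round(objectionability_score([400, 900, 120]), 3))  # prints 0.75
```

A real system would presumably feed many more signals into a learned model rather than a hand-tuned formula, but the shape of the output — a single score handed to human moderators — matches what the filing describes.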
This is just a patent application, and there's no guarantee that Facebook will either secure the patent or use it on its social network. A spokeswoman tells The Verge that the company regularly applies for patents it doesn't use, and that this content removal plan shouldn't be interpreted as a clue to its strategy.
However, the patent's existence shows that Facebook has been thinking about better ways of pulling content for a while, and that the issue is only now coming to a head following the US election. Why hasn't it implemented this technology, then? There are a few reasons it might have hesitated: the system won't help if someone genuinely believes a fake story, if their Facebook habits make such stories unlikely to show up, or if they simply ignore the warnings. Facebook might also have been reluctant to do anything that would fuel accusations of bias. Either way, the patent application doesn't do the company any favors -- it implies that Facebook simply chose not to implement a technical solution for fake news, even if there were perfectly valid reasons for holding off.