After widespread criticism that “fake news” disseminated through the social media site may have directly affected the outcome of the 2016 U.S. presidential election, Facebook has invested heavily in its capability to identify and interrupt the spread of misinformation.
But despite an obvious effort to address the problem of fake news, many challenges remain. As of February 2018, Facebook had around 7,500 content moderators of its own (up 3,000 since May 2017), and it has also outsourced the task to fact-checking media companies like Philippines-based startup Rappler. Facebook and Rappler employees alike have reported being under-resourced to effectively evaluate the large and growing volume of news articles, fake or otherwise, and some Rappler employees have even reported receiving death and rape threats in response to their work.
To keep up with the growing volume of fake news without hiring fact-checkers at an ever-increasing rate, Facebook has sought to narrow the funnel of articles fact-checkers must review by experimenting with a number of AI and machine-learning approaches.
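To make the funnel-narrowing idea concrete, the sketch below shows how a simple text classifier could score incoming articles and pass only the most suspicious ones to human reviewers. This is an illustration, not Facebook’s actual system; the training examples, labels, and threshold are all hypothetical.

```python
# Minimal triage sketch (hypothetical data and threshold): a toy text
# classifier scores articles and routes only high-suspicion ones to
# human fact-checkers, narrowing the review funnel.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled examples: 1 = known fake, 0 = credible.
train_texts = [
    "Miracle cure THEY don't want you to know about",
    "Senate passes appropriations bill after lengthy debate",
    "SHOCKING proof the moon landing was staged",
    "Central bank holds interest rates steady this quarter",
]
train_labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
classifier = LogisticRegression().fit(
    vectorizer.fit_transform(train_texts), train_labels
)

def route_article(text: str, threshold: float = 0.7) -> str:
    """Send only high-suspicion articles into the fact-checking queue."""
    p_fake = classifier.predict_proba(vectorizer.transform([text]))[0, 1]
    return "queue for fact-checker" if p_fake >= threshold else "no review"
```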
In July 2018, Facebook acquired Bloomsbury AI, a London-based artificial intelligence startup specializing in “trawling” documents for patterns and relationships, a signal that it views AI and machine learning as a critical component of its fake news arsenal.
In the near term, however, the scope of machine learning’s impact on the challenge of fake news has been and will continue to be limited by factors such as the immaturity of natural-language-processing technologies and the breadth of contextual data required to evaluate credibility.
The language problem is exacerbated by the widespread use of irony, a universal sore thumb for machine-learning technology, and even “adversarial” writing, whereby fake news authors actively obscure the intended message of their articles to avoid detection by algorithms. In the style guide for the neo-Nazi publication The Daily Stormer, founder Andrew Anglin writes, “The unindoctrinated should not be able to tell if we are joking or not.”
On the credibility side, a 2018 Georgia Tech Research Institute study concluded that “While modeling text content of articles is sufficient to identify bias, it is not capable of determining credibility.” Unlike bias, which was reliably estimated by the presence of certain key words, credibility assessment was “highly tailored to the specific conspiracy theories found in the training set.”
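A toy example illustrates the asymmetry the study describes. A bias signal can be as crude as counting charged vocabulary, but nothing in such a score speaks to whether an article’s claims are true; a calmly worded fabrication sails through. The word list and scoring below are invented for illustration:

```python
# Toy lexicon-based bias scorer (word list invented for illustration).
# It captures charged vocabulary, which is why key-word models can
# estimate bias, but it says nothing about factual accuracy.
BIAS_MARKERS = {"shocking", "corrupt", "traitor", "hoax", "destroyed"}

def bias_score(text: str) -> float:
    """Fraction of words drawn from a charged-vocabulary lexicon."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in BIAS_MARKERS for w in words) / max(len(words), 1)

# A flatly written false claim scores 0.0, showing why this signal
# identifies bias but cannot determine credibility.
print(bias_score("Shocking hoax destroyed by corrupt traitor"))       # high
print(bias_score("The committee met on Tuesday to review the data"))  # 0.0
```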
Given these limitations, it is not surprising that Facebook has continued to rely on human eyes, designing a platform that feeds suspicious stories, many of them flagged by algorithms, to a dashboard reviewed by fact-checkers both internally and at accredited third-party publications. Also unsurprising is that Facebook tends to hire content reviewers not for their technical skills but for their language expertise.
Based on fact-checkers’ evaluations, Facebook displays stories flagged as false lower on people’s newsfeeds rather than removing them altogether.
With advances in natural-language processing, machine-learning technologies will likely be able to assume more of the burden of identifying fake news, but Facebook should take several more active steps to meet the challenge of a growing and evolving fake news universe.
Through initiatives like the Facebook Journalism Project and the News Integrity Initiative, Facebook has already begun collaborating with news organizations, technology companies, and academic institutions to help individuals make more informed decisions about news consumption and to fund research, projects, and products related to news veracity. While this is valuable work, Facebook should aim to explicitly address the challenge of assessing credibility from article content by leading a coordinated effort to consolidate credible news data as training data for machine-learning algorithms. Whereas previously articles could be flagged for their similarity to established conspiracies, this data set might allow articles to be flagged for their deviation from established facts.
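As a sketch of what such flagging could look like, an article’s claims could be compared against the consolidated record and flagged when no close match exists. The “credible corpus” below is a hypothetical stand-in for the proposed data set, and TF-IDF cosine similarity is a crude proxy for the richer semantic matching a production system would require:

```python
# Deviation-from-established-facts sketch. The credible corpus is a
# stand-in for the consolidated data set proposed above, and TF-IDF
# cosine similarity is a crude proxy for real semantic matching.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

credible_corpus = [
    "The measles vaccine is safe and highly effective.",
    "The unemployment rate declined over the past year.",
]
vectorizer = TfidfVectorizer().fit(credible_corpus)
corpus_matrix = vectorizer.transform(credible_corpus)

def deviates_from_record(claim: str, min_similarity: float = 0.3) -> bool:
    """Flag a claim whose best match in the credible corpus is weak."""
    similarities = cosine_similarity(
        vectorizer.transform([claim]), corpus_matrix
    )
    return float(similarities.max()) < min_similarity
```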
Outside of machine learning, Facebook has supplemented its team of trained fact-checkers by making it easier for users to flag posts as suspicious. A limitation of this approach is the “echo-chamber” effect: users are shown more posts whose messages they are likely to agree with, and are therefore less likely to report them. A solution could be to provide alternative newsfeed views through which users can read content that has not already been filtered to their preferences. While these user flags are not a substitute for professional fact-checking, they, like machine learning, could help narrow the funnel of articles that fact-checkers must evaluate.
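One hypothetical way to fold user flags into the funnel while correcting for the echo-chamber blind spot is to weight flags by the viewpoint diversity of the users who filed them, so that articles flagged from across the spectrum rise in the review queue. Everything in the sketch below, including the per-user leaning estimates, is invented for illustration:

```python
# Hypothetical review-queue scoring: combine a model's suspicion score
# with user flags, weighting flags by how ideologically diverse the
# flagging users are (leanings in [-1, 1] are invented for illustration).
def review_priority(model_score: float, flagger_leanings: list) -> float:
    """Higher values move an article up the fact-checking queue."""
    if not flagger_leanings:
        return model_score
    mean_leaning = sum(flagger_leanings) / len(flagger_leanings)
    diversity = 1.0 - abs(mean_leaning)  # 1.0 when flaggers are balanced
    flag_signal = min(len(flagger_leanings), 10) / 10  # cap flag volume
    return model_score + flag_signal * (0.5 + 0.5 * diversity)

# An article flagged only by one side counts less than one flagged
# across the spectrum with the same model score.
print(review_priority(0.4, [-0.9, -0.8, -0.7]))  # one-sided flags
print(review_priority(0.4, [-0.8, 0.1, 0.9]))    # diverse flags, higher
```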
With so many advances in AI and machine learning still to come, it is difficult to predict how Facebook and other content hosts will fare in the battle against fake news. A concerning trend is the use of machine learning to create fake news content. How might Facebook and machine learning address this phenomenon?
Word Count: 793
1 Alexis C. Madrigal, “Inside Facebook’s Fast-Growing Content-Moderation Effort,” The Atlantic, February 7, 2018, https://www.theatlantic.com/technology/archive/2018/02/what-facebook-told-insiders-about-how-it-moderates-posts/552632, accessed November 2018.
2 Alexandra Stevenson, “Soldiers in Facebook’s War on Fake News Are Feeling Overrun,” New York Times, October 9, 2018, https://www.nytimes.com/2018/10/09/business/facebook-philippines-rappler-fake-news.html, accessed November 2018.
3 Margi Murphy, “Facebook buys British artificial intelligence company Bloomsbury,” The Telegraph, July 2, 2018, https://www.telegraph.co.uk/technology/2018/07/02/facebook-buys-british-artificial-intelligence-company-bloomsbury/, accessed November 2018.
4 Andrew Marantz, “Inside the Daily Stormer’s Style Guide,” The New Yorker, January 15, 2018, https://www.newyorker.com/magazine/2018/01/15/inside-the-daily-stormers-style-guide, accessed November 2018.
5 James Fairbanks, Natalie Fitch, Nathan Knauf, and Erica Briscoe, “Credibility Assessment in the News: Do We Need to Read?,” in Proceedings of the WSDM Workshop on Misinformation and Misbehavior Mining on the Web (MIS2) (New York: ACM, 2018), 8 pages, https://doi.org/10.475/123_4.
8 Mike Ananny, “Checking in with the Facebook fact-checking partnership,” Columbia Journalism Review, April 4, 2018, https://www.cjr.org/tow_center/facebook-fact-checking-partnerships.php, accessed November 2018.
9 Adam Mosseri, “Working to Stop Misinformation and False News,” Facebook for Media, April 7, 2017, https://www.facebook.com/formedia/blog/working-to-stop-misinformation-and-false-news, accessed November 2018.