Fake News: Stanford Student Aims To Identify False Sites With Neural Networks
In the wake of the 2016 election, marred by questions of the influence of fake news, the biggest presences on the web have been tasked with figuring out ways to stop the spread of false information. While those companies are at work on their own solutions, Stanford student Karan Singhal believes he has a better answer.
The 19-year-old computer science major is taking as much of the human element as possible out of fake news detection with Fake News Detector AI, a website and Google Chrome extension designed to sniff out fake news sites.
To accomplish the task, Singhal is employing neural networks—a sort of artificial brain that can process a number of factors at a time, weighing each element and producing a verdict on the validity of a particular website.
Instead of tasking people with fact checking individual claims or trying to sort bias from outright falsehood, the Fake News Detector AI goes under the hood of the website in question and examines its parts, which can be more revealing than the text on the screen.
Singhal told IBTimes his algorithmic method weighs site layout, popularity, writing style, and the frequency of telling keywords like "liberal" and "conservative," among other aspects of a given site. It checks its analysis against a list of known fake and real news websites, and if a website shares the telltale signs of a fake, it’s marked as such.
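To make the mechanics concrete, here is a minimal sketch of that kind of site-level classifier in Python. The feature set, the values, and the network shape are all assumptions for illustration; Singhal has not published his actual pipeline.

    # A sketch of the kind of site-level classifier described above.
    # Every feature name and value here is invented for illustration,
    # not taken from Singhal's actual tool.
    from sklearn.neural_network import MLPClassifier

    # One row per site: [layout_quality, popularity, writing_style_score,
    # freq("liberal"), freq("conservative")], all scaled to 0-1.
    known_sites = [
        [0.9, 0.95, 0.80, 0.001, 0.001],  # established outlet
        [0.8, 0.90, 0.75, 0.002, 0.003],
        [0.3, 0.10, 0.30, 0.020, 0.015],  # known fake site
        [0.2, 0.05, 0.25, 0.025, 0.030],
    ]
    labels = [0, 0, 1, 1]  # 0 = real, 1 = fake, from a curated list

    # A small feed-forward network weighs the features together and
    # learns a verdict from the labeled examples.
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=0)
    model.fit(known_sites, labels)

    # Score a site that never appeared on the curated list.
    unseen_site = [[0.25, 0.08, 0.28, 0.022, 0.018]]
    print(model.predict(unseen_site))        # e.g. [1]: flagged as fake
    print(model.predict_proba(unseen_site))  # confidence per class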
The outcomes have been surprisingly accurate. Singhal said it produced a 99.7 percent match in predictions when tested against the blacklist of B.S. Detector, another popular fake news identifier crafted by design technologist and digital activist Daniel Sieradski, one that Facebook itself briefly banned.
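As a back-of-the-envelope illustration of what that match figure measures, the snippet below scores agreement between a classifier's verdicts and a reference blacklist. The domains and verdicts are invented; this is neither Sieradski's list nor Singhal's code.

    # Illustrative agreement check against a reference blacklist.
    blacklist = {"hoaxdaily.example", "totallyrealnews.example"}
    model_verdicts = {                      # classifier output per domain
        "hoaxdaily.example": True,          # True = flagged as fake
        "totallyrealnews.example": True,
        "apnews.com": False,
    }
    # A prediction "agrees" when the model flags a blacklisted domain
    # as fake, or clears a domain that is not on the blacklist.
    agree = sum(verdict == (domain in blacklist)
                for domain, verdict in model_verdicts.items())
    print(f"agreement with blacklist: {agree / len(model_verdicts):.1%}")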
“More importantly, it works against sites not on that blacklist as well,” Singhal said, meaning the neural network was able to identify other fake news sites using the prediction model it built from the known fakes.
Fake News Detector takes a different approach from Facebook, which has promised machine learning systems to spot fake content but has also opted to tap human sources for fact checking purposes.
Last week, the social network introduced its plan to roll out subscriptions to the news feed, a feature that is part of the company’s larger Journalism Project initiative to establish stronger ties with news organizations and journalists. The service promises public service announcements to help increase reader literacy and combat the spread of fake news.
Prior to that, Facebook signed up signatories of Poynter’s International Fact Checking Code of Principles (a list that includes ABC News, FactCheck.org, Snopes, PolitiFact, the Associated Press, and the Washington Post in the U.S.) to fact check individual stories and help the service flag potentially false ones. The move was ridiculed by conservatives, who believe the fact checkers in question are too liberal.
Facebook had already made an effort to take people out of the process of surfacing popular news stories, following a report that claimed the company employed editors who flexed their political bias when curating stories for its trending news feature.
At the time, Facebook denied any partisanship within its process, though it introduced political bias training for employees and reached out to Republicans who felt as though the social network was censoring their preferred news sources.
After overhauling its trending curation process and handing the reins over to artificial intelligence, a move a former employee would later deride, Facebook’s algorithm started surfacing fake news stories and made some odd decisions as to what was truly newsworthy.
More and more people continue to get news from their social media feeds, 62 percent according to a 2016 study from the Pew Research Center, and are being exposed to the fake content that proliferates on those platforms. Worse yet, they’re believing it: an Ipsos Public Affairs survey found 75 percent of American adults who recognized fake news headlines judged them to be accurate.
An approach more like Singhal’s would avoid accusations of bias, which matters for establishing public trust in the fact checking. “In-house human fact-checking cannot be the solution to Facebook's fake news problem,” Singhal said.
He noted that “human fact-checking should certainly perform better than an algorithm, but fake news sites are popping up all of the time, and it's insurmountably time-consuming and costly to check every site shared on Facebook.” His neural network-powered review could potentially catch fake sites as they pop up, based solely on the elements they share with the fakes that came before them.
The system itself isn’t foolproof, of course, and when errors do occur, they present a particularly interesting conundrum for its creator: tweak the algorithm manually, putting more of a human thumb on the scale in the process, or bypass the prediction and place the site on the blacklist directly, risking the neutrality of the source material.
Singhal admits, “it is not clear what to do next” in one of those cases. “This problem is a daily concern for computer scientists building models like this one, and it has no clear solution. Probably the best that we can do is iteratively improve the model by adding more known sites periodically,” he explained.
As the neural network learns more and its predictions become more accurate, Singhal reasons that it could work beyond just the site level and get more specific. “The algorithm could learn to look at different factors, such as the reliability of certain journalists, to accomplish article-level checking,” Singhal said.
That drill-down could be particularly helpful on a website like Breitbart, which regularly straddles the line between fake and real in Singhal’s algorithm. If a given author is known for producing reliable and accurate reporting, they may be given the green light even if the site as a whole has a bad reputation; the sketch below illustrates the idea.
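As a rough sketch of how that could bolt on, the snippet below appends a hypothetical journalist-reliability score to the site-level feature vector from the earlier example. Nothing here reflects the tool's current behavior; it is only one way the speculation could take shape.

    # Hypothetical extension: per-article signals alongside site features.
    site_features = [0.5, 0.40, 0.50, 0.010, 0.008]  # a borderline site
    author_reliability = 0.92  # invented track-record score for the byline

    article_features = site_features + [author_reliability]
    # A model retrained on vectors like this could clear a trusted byline
    # even when the host site as a whole straddles the line.
    print(article_features)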
The Fake News Detector AI isn’t quite up to the task of checking individual articles yet, and the algorithm isn’t calibrated to do so. For Singhal, the most important thing is to get it right, something no alternative has quite succeeded at yet.
However, the prospect of taking people out of the fact checking business remains unlikely. Singhal said his algorithm cannot fact check, and said the very concept is a “dubious proposition even in theory.” So for those hoping to send Snopes and company packing in favor of machines, it appears the idea is still the stuff of sci-fi.
Singhal’s Fake News Detector AI is available on the web, where users can drop any news site into the search bar and see how trustworthy it is. A Google Chrome extension is also available; it adds a button to the browser toolbar that tells users whether a site is real or fake.