UC Berkeley Students Unveil Russian Facebook, Twitter Propaganda With Chrome Extension, Bot
Executives from Facebook, Twitter and Google testified before the U.S. Congress on Tuesday as legislators grapple with conclusive evidence that Russia used social media propaganda to influence the American presidential election in 2016. The Daily Beast reported members of the Trump administration themselves promoted content from fake accounts run by “professional trolls” paid by the Russian government. According to Google, a Kremlin-linked group spent $4,700 on YouTube videos as part of a “misinformation campaign” during the 2016 elections.
Politicians and tech industry leaders have vastly disparate opinions about how to curb digital propaganda. Meanwhile, two UC Berkeley students built two tools over the past few months to identify “fake news” and offer better information. Longtime friends Rohan Phadte and Ash Bhat call their extracurricular project RoBhat Labs. Bhat told International Business Times their new Chrome extension, botcheck.me, identified around 6,000 politicized Twitter bots within an hour of launching today.
“We definitely think it [propaganda] had a pretty big impact,” Bhat told IBT. There are multiple varieties of political bots, including accounts posing as average citizens and as parody political accounts. For the latter, the Twitter profile may label the account a “parody,” but unlabeled retweets are often mistaken for real political messages. Considering how influential Twitter accounts are for the Trump administration, it’s no wonder these fake accounts could sway opinions about real politicians. “Some of these propaganda accounts look like real political figures,” Bhat said. “A lot of people are responding to these retweets as if it was a real person.”
Another popular category of bot accounts is the stereotypical Trump supporter. These accounts often use patriotic emojis in the profile bio or header image, along with keywords like “deplorable” or hashtags such as #lockherup, #MAGA and #buildthewall. “We’re really surprised Twitter didn’t do more, considering it took us just two months to do this,” Bhat said. “They had over a year and so many resources.”
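As a rough illustration of the profile pattern Bhat describes, a keyword heuristic might look like the sketch below. The term list, emoji set, and scoring rule are assumptions for demonstration only, not botcheck.me's actual model.

```python
import re

# Hypothetical term lists inferred from the article's examples; the real
# botcheck.me model is not public, so treat these as illustrative guesses.
SUSPECT_TERMS = {"deplorable", "#lockherup", "#maga", "#buildthewall"}
FLAG_EMOJI = {"🇺🇸", "🦅"}

def profile_keyword_score(bio: str) -> int:
    """Count suspect keywords/hashtags and patriotic emojis in a profile bio."""
    # Tokenize on word characters, keeping leading '#' so hashtags survive.
    tokens = set(re.findall(r"[#\w]+", bio.lower()))
    score = len(tokens & SUSPECT_TERMS)
    # Flag emojis can span multiple codepoints, so count substrings.
    score += sum(bio.count(e) for e in FLAG_EMOJI)
    return score
```

A higher score would merely mark an account for further review, since plenty of real users match this profile too.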
Looking forward, Bhat recommends platforms like Twitter identify and investigate suspicious accounts. Sure, every once in a while a real user will get falsely labeled as a bot. That’s why customer support would need to be an interactive aspect of future strategies. On the other hand, Bhat said some patterns are crystal clear.
Many of the bots RoBhat Labs has identified so far were completely inactive for several years, then suddenly changed their names and started posting hundreds of tweets a day. Bot accounts also had a tendency to promote posts from other bots or political parody accounts. “It’s completely conceivable how these bots could influence elections,” Bhat said.
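The dormancy-then-burst pattern described above could be expressed as a simple rule of thumb. The thresholds below (roughly two years dormant, 100-plus tweets a day) are illustrative guesses, not RoBhat Labs' published criteria.

```python
# Hypothetical sketch: flag accounts that sat dormant for years, renamed
# themselves, and resumed with heavy daily output. Thresholds are assumed.
DORMANT_DAYS_MIN = 2 * 365
TWEETS_PER_DAY_MIN = 100

def looks_like_reactivated_bot(dormant_days: int,
                               tweets_per_day: int,
                               changed_name: bool) -> bool:
    """Return True when an account matches the dormancy-then-burst pattern."""
    return (dormant_days >= DORMANT_DAYS_MIN
            and changed_name
            and tweets_per_day >= TWEETS_PER_DAY_MIN)
```

In practice a single rule like this would only be one signal among many, combined with network features such as whether the account amplifies other suspected bots.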
Next, these students are looking to conduct a broader sweep of Twitter and calculate the overall prevalence of such bots. Wired reported some experts estimate bots could represent up to 50 percent of Twitter accounts. Researchers from the University of Southern California and Indiana University offered a more conservative estimate with a 2017 report showing bots made up around 15 percent of surveyed Twitter accounts.
Several months before RoBhat Labs launched their Chrome extension for Twitter users, the curious duo created a Facebook Messenger bot to diagnose articles.
“One thing we wanted to do is keep our biases out of the algorithm. This is something we wanted to be really thoughtful about,” Bhat said. “The big problem with Facebook is they have these echo chambers... what they end up doing is giving you back content with the same political bias that you, yourself, have.”
He believes this type of repetitive affirmation makes people cling to their beliefs more fiercely than before, regardless of facts.
In order to avoid personal bias, Phadte and Bhat trained the algorithm with sites such as Breitbart representing right-wing media and the Bluedot Daily representing the liberal end of the spectrum. The analysis software picks up on factors such as the site’s history, including diversity of topics and sources, which can help weed out fake articles as opposed to biased journalism. The Facebook bot offers a political rating, revealing the article’s overall sentiment in clear terms, a summary and options for more sources.
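A minimal sketch of how a classifier could be trained on labeled outlets, as the article describes, using a naive-Bayes-style word-frequency model; the toy corpus and the choice of model are assumptions for illustration, not RoBhat Labs' actual algorithm.

```python
import math
from collections import Counter

def train(corpus):
    """corpus: list of (label, text) pairs. Returns per-label word counts."""
    counts = {}
    for label, text in corpus:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label whose word distribution best fits the text
    (multinomial naive Bayes with add-one smoothing, uniform prior)."""
    words = text.lower().split()

    def log_score(label):
        c = counts[label]
        denom = sum(c.values()) + len(c)  # smoothing over the label's vocab
        return sum(math.log((c[w] + 1) / denom) for w in words)

    return max(counts, key=log_score)
```

A real system would train on thousands of articles per outlet and, as the article notes, also weigh site-level signals such as history and source diversity rather than text alone.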
The goal was to allow Facebook users to make educated choices about the variety of content they consume. Bhat believes clear labeling, combined with deliberately diversifying media choices, could help reduce the polarizing effect of political journalism. “We want to give more power to the users themselves,” Bhat said. “What I think would be incredibly helpful is to have more transparency as to the political leanings of the articles.”
© Copyright IBTimes 2024. All rights reserved.