How Social Media Users Can Spot Disinformation Around Russia And Ukraine
The spread of disinformation online is currently at an all-time high, with false narratives around Russia’s invasion of Ukraine, pushed by inauthentic accounts, continuing to infiltrate online conversations. In fact, Cyabra, a disinformation monitoring platform, found that 56% of the Ukraine-related content created on social media at the beginning of the invasion originated from bots and sock puppet accounts. And since the conflict erupted, the platform has reported that inauthentic profiles have also infiltrated online communities across European countries in an attempt to distort online conversations.
What’s more, whereas the spread of disinformation was previously limited to just a few social media platforms, it is now a key issue for all of them. In recent days, Pinterest, a platform that has until now not been associated with misinformation campaigns, announced a new policy aimed at tackling climate change misinformation. And as technology progresses, so too has the sophistication of fake accounts and the campaigns they push online – making it much harder for users to identify what is true or false when scrolling through their feeds. Misinformation is only as dangerous as everyday users' willingness to share it, even when they are unaware of the inaccurate material it contains. Now is the time for social media users to understand where disinformation comes from and how they can stay protected online.
How can social media users spot disinformation?
There are a few key differences social media users can identify between authentic and inauthentic profiles. When engaging with a social media account that you don’t know personally, it is helpful to check them against this list:
- Look at when a profile was created: Fake accounts tend to be recently created. If the profile was created only in the last few weeks, treat that as a red flag.
- Research who they are connected to online: Bots and sock puppet accounts tend to be closely linked to other nefarious or fake accounts that are spreading disinformation.
- Pay attention to how they post online: Inauthentic profiles are highly active at all hours of the day and tend to re-share or re-tweet existing posts from other fake accounts without publishing any original content of their own.
- Match their profile with the content they post: Oftentimes, what is stated in their profiles does not match what they are posting, e.g. a profile states they are a major basketball fan but never posts anything related to basketball.
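For technically inclined readers, the checklist above can be sketched as a simple scoring heuristic. The thresholds, field names, and cutoffs below are illustrative assumptions for demonstration only, not Cyabra's actual detection methodology:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Profile:
    created: date                    # account creation date
    flagged_connections: int         # links to known fake accounts
    posts_per_day: float             # posting volume
    share_ratio: float               # fraction of posts that are re-shares
    bio_topics: set = field(default_factory=set)   # interests stated in bio
    post_topics: set = field(default_factory=set)  # topics actually posted about

def red_flags(p: Profile, today: date) -> list:
    """Return which checklist items this profile trips (illustrative thresholds)."""
    flags = []
    if (today - p.created).days < 30:                     # created in recent weeks
        flags.append("recently created")
    if p.flagged_connections > 5:                         # tied to known fakes
        flags.append("linked to fake accounts")
    if p.posts_per_day > 50 and p.share_ratio > 0.9:      # always-on, re-share only
        flags.append("high-volume re-sharing, no original content")
    if p.bio_topics and not (p.bio_topics & p.post_topics):  # bio/content mismatch
        flags.append("profile does not match posted content")
    return flags
```

A profile tripping several of these heuristics together is a stronger signal than any single flag on its own.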
What can users do to protect themselves from disinformation online?
After looking into the authenticity of a profile and determining it is indeed a fake account, users should immediately stop engaging with that account and report it to the platform. And while scrolling past a user you suspect of spreading false information is a good way to halt a disinformation campaign, users can also take a proactive approach to protecting themselves online by tightening their privacy settings. Stricter settings make it more difficult for bots or sock puppet accounts to access a user's profile information and limit the amount of false information that shows up on their timeline or feed.
With so much information being spread daily, understanding online vulnerabilities and how fake social media accounts manipulate conversations has become more important than ever. Today, as global conflicts continue to unfold, it is important for users of every social platform to stay vigilant about where their information comes from and how they evaluate it, so that false narratives do not keep spreading.
(Dan Brahmy is the co-founder and CEO of Cyabra)
© Copyright IBTimes 2024. All rights reserved.