How Social Networks Balance Fighting Terrorism With A Mission For Open Communication
When Pierce Owen heard about the Paris attacks about an hour after they began, he sent an iMessage to his parents in the States to let them know that he was safe inside his school in Belleville, Saint-Maur -- about 1.5 kilometers (nearly 1 mile) from the second attack. He then connected with friends via WhatsApp and afterwards posted a status on Facebook.
Social media, and his smartphone, played a key role in Owen's ability to stay in close communication with his loved ones and reassure his family and friends. But these online tools are not limited to those who suffer -- the victims of terrorism. The same networks are available to, and used by, members of the Islamic State group.
“There is no major social platform that is not dealing with an IS presence in some way,” said J.M. Berger, a senior fellow at the Brookings Institution. According to a March 2015 study, at least 46,000 Twitter accounts were linked to ISIS. There, the group spreads propaganda, messages with new recruits and organizes among its members.
While Facebook CEO Mark Zuckerberg touts “connecting the world,” Twitter CEO Jack Dorsey praises open communication and YouTube defines itself as a platform for “free expression,” the companies have all taken stances against hate speech and terrorism. In 2015, all three networks spoke publicly on the issue and even clarified their policies to further address terrorism.
“There is no place for terrorists on Facebook. We work aggressively to ensure that we do not have terrorists or terror groups using the site, and we also remove any content that praises or supports terrorism,” a Facebook spokesperson told International Business Times.
Updated Terror Policies
Language barring terrorism and terrorists is written into each network's standards. Earlier this year, both Facebook and Twitter updated their policies to further clarify their roles in addressing terrorism.
Facebook, in particular, has a section called “Dangerous Organizations” within its Community Standards. The page reads, “We don’t allow any organizations that are engaged in the following to have a presence on Facebook” and includes two bulleted points: “terrorist activity” and “organized criminal activity.”
“We also remove content that expresses support for groups that are involved in the violent, criminal, or hateful behavior mentioned above. Supporting or praising leaders of those same organizations, or condoning their violent activities, is not allowed,” the terms continue. This language on celebrating terrorist organizations was added in March.
Twitter’s terms are within a section called “Abusive behavior policy.” Under a section for “Violent threats (direct or indirect),” the site reads, “Users may not make threats of violence or promote violence, including threatening or promoting terrorism.”
Twitter added the second part, on promoting terrorism, in April. In June, three congressmen -- Edward Royce and Eliot Engel of the Committee on Foreign Affairs and Ted Poe of the Subcommittee on Terrorism, Nonproliferation and Trade -- wrote to Twitter praising it for the addition. "We write to thank you for updating Twitter's user guidelines so that users of your platform will now be able to report accounts that promote terrorist activity," their letter said.
YouTube’s policy includes a section on “violent or graphic content” and one on “hateful content.” “It's not okay to post violent or gory content that's primarily intended to be shocking, sensational, or disrespectful,” the terms read.
The video-hosting site acknowledges that these terms can be confusing for those who wish to share videos, such as documentaries, that take a stance on issues. “This can be a delicate balancing act, but if the primary purpose is to attack a protected group, the content crosses the line,” the policy states.
Blocking The Terror
Even with these strict policies, the online networks must juggle how to effectively oversee and monitor all the content that flows through them. About 300 hours of video are uploaded to YouTube every minute. Facebook has 1.55 billion monthly active users. Around live events and trending moments, Twitter use can spike to hundreds of thousands of messages per minute.
“Each network has taken its own approach to the problem,” Berger said. “Some networks have bigger problems than others, but there are different ways to deal with it, for instance, a focus on suspending accounts versus a focus on cooperating with law enforcement.”
The companies have in part come to rely on the networks they built to monitor the content. Facebook boasts the largest user-based monitoring force. “We have a community of more than 1.5 billion people who are very good at letting us know when something is not right. We make it easy for them to flag content for us, and they do,” a Facebook spokesperson said.
Facebook has a team of hundreds of people worldwide, stationed across four offices, that monitors reports. Content flagged as a safety concern, including terrorism-related posts, is prioritized for immediate review.
Twitter has aggressively suspended accounts flagged by users. On April 2, for instance, Twitter suspended approximately 10,000 accounts “for tweeting violent threats,” a company representative told the New York Times. Twitter also complies with government information requests. Requests for user account information increased 52 percent in the first six months of 2015, Twitter reported in June. The largest jump came from the U.S. government, with requests up 50.2 percent.
By adhering to their standards and at times seeking legal or government advice, networks decide when to block content. For instance, Google chose not to take down a video that showed terrorists killing a Paris policeman during the Charlie Hebdo shooting in January, the Guardian reported. The company’s headquarters had received a call from the team in France.
“As with other moment-of-death footage, we had to consider the dignity of the victim as well as the video’s news and commentary value. We decided to leave it up, and leave it up globally,” Google’s legal chief David Drummond told the Guardian.
Taking A Stance
The common narrative over the past year has focused on blocking, with reports on how many ISIS accounts Twitter shut down or when a video was pulled from YouTube. Yet some experts -- and even the companies themselves -- say that blocking propaganda isn’t what will stop the Islamic State and terrorism.
“Social media alarmists often favor [highlighting movements such as] hashtag Social Media Blackout. The idea is that we’ll deprive the groups of these outlets, and it’ll stop them recruiting members. But I think in general the propaganda is largely self-defeating for the group,” said Max Abrahms, a political science professor at Northeastern University and a member of the Council on Foreign Relations.
“Islamic State especially over the next several months is going to be attrited at a much faster rate than the recruitment rate,” Abrahms said -- that is, the group will lose members faster than it can replace them.
The social networks and the communities they have built have taken stances against ISIS. Early Saturday morning, just hours after the Paris attacks, Facebook released a tool that allows users to overlay their profile picture with the French flag. CEO Mark Zuckerberg and Chief Operating Officer Sheryl Sandberg have both used it.
Twitter’s Dorsey shared his best wishes to France shortly after the attacks and changed his Twitter photo to one of the “Peace for Paris” images. The profile picture for the official Twitter account is the French flag.
Google executives, for their part, have spoken out against censorship as the only answer. Instead, the team has promoted education. In June, Google’s Drummond called on marketers and members of the YouTube community to share videos that combat ISIS propaganda.
“Enforced silence is not the answer. Drowning out the harmful ideology with better messages, with reasonable messages, is the better way,” he said.
As Owen watched the news of Friday's attacks unfold on Twitter from inside his school in Paris, he and his mom both tweeted at the pastor of his church back home. They asked him "to start generating some prayers for the city among his 8,000 followers," Owen said.