Online Hate: YouTube Hides Supremacist Videos But Won't Take Them Down
YouTube on Tuesday announced progress in its fight against extremist content, along with further changes to its platform. In the announcement, the company said it won’t take down videos that contain controversial religious or supremacist content, but will instead limit them.
YouTube said it will apply a “tougher treatment” to content that is not illegal but has been flagged by users as a potential violation of the platform’s policies on hate speech and violent extremism. The company said such videos will be “placed in a limited state,” meaning the content will sit behind an interstitial. The videos will not be recommended or monetized, will not allow likes or comments and will not show suggested videos.
YouTube will begin applying these changes to its desktop version in the coming weeks and will bring them to mobile afterward.
Extremist content has caused the platform considerable trouble.
Earlier this year, Google faced a strong backlash in the U.K. after it was revealed that British government ads were appearing next to extremist videos, including those of former KKK leader David Duke. Ads for the Royal Navy, the Royal Air Force, Transport for London and blood donation campaigns were placed on videos by white nationalists such as Duke. The adverts were also found on clips involving Steven Anderson, a pastor who celebrated the killing of 49 people at the Pulse nightclub in Orlando last year, and the Polish Defence League, a nationalist organization that posts anti-Muslim videos. Ads for the BBC, Channel 4 and the Guardian were also found on extremist content.
YouTube’s Fight Against Extremism
In June, YouTube announced four steps it was taking to fight terrorist content on its site, including the detection and removal of videos through human review and machine learning. Over the past month, machine learning allowed YouTube to remove 75 percent of violent extremist content before it received a single human flag, the platform said Tuesday in a blog post.
“While these tools aren’t perfect, and aren’t right for every setting, in many cases our systems have proven more accurate than humans at flagging videos that need to be removed,” YouTube said.
YouTube, which sees 400 hours of content uploaded to its platform every minute, said machine learning has doubled both the number of videos removed and the speed at which they are taken down. The company said it was “encouraged by these improvements, and will continue to develop our technology in order to make even more progress.”
YouTube also said it has started working with more NGOs and institutions, including the Anti-Defamation League, the No Hate Speech Movement and the Institute for Strategic Dialogue, on issues like hate speech, radicalization and terrorism. The company said it is working with these organizations to better identify videos on the platform that are being used to radicalize and recruit extremists.
YouTube added there is more to come regarding its fight against online extremism and that there “is always more work to be done.”
Recently, YouTube launched a new feature called the Redirect Method, developed by Alphabet Inc. subsidiary Jigsaw. The feature redirects users who search for extremism-related videos to playlists that counter and debunk terrorist ideologies.
Twitter and Facebook’s Fight Against Extremist Content
Other tech companies, including Facebook and Twitter, are taking steps to combat violent and extremist content. In June, Facebook COO Sheryl Sandberg touted the platform’s plan to step up its monitoring and removal of extremist content. Sandberg said Facebook plans to hire 3,000 more human terrorism experts, bringing its total to 7,500. Aided by artificial intelligence, the workers will monitor “videos in real time to find any content that may be inappropriate and get it off faster,” she said.
However, a recent report by the Guardian revealed that Facebook content moderators who work to identify terrorist activity on the platform had their identities compromised. The moderators realized something had gone wrong when they began receiving friend requests from individuals associated with the terrorist groups they were monitoring.
Meanwhile, Twitter said in its transparency report that it suspended more than 636,000 accounts between Aug. 1, 2015, and Dec. 31, 2016, over extremist content. This year, Twitter has announced numerous moves to curb online abuse and hateful content on its platform.