Facebook To Fight Terrorism With Artificial Intelligence, Policy Experts
Terrorism remains a problem that governments and companies everywhere are attempting to solve, and Facebook plans to contribute to those efforts. In a post Thursday, the company detailed several initiatives aimed at combating terrorist activity on its network.
Among the initiatives is the integration of artificial intelligence into Facebook’s existing content moderation tools. Using machine learning, Facebook is working to improve its detection tools so they can automatically determine when a user from a terrorist group is posting propaganda images or video. Facebook also wants these tools to get better at detecting posts that could come from potential terrorists.
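Facebook has not published implementation details, but automated image matching of this kind is commonly done by comparing a perceptual hash of each new upload against hashes of previously removed content. The sketch below is illustrative only: the open-source imagehash library, the sample hash value and the distance threshold are assumptions for this example, not Facebook's actual system.

```python
# Illustrative sketch: perceptual-hash matching of uploads against
# known propaganda images. The hash value and threshold below are
# invented for this example; Facebook's real pipeline is not public.
from PIL import Image
import imagehash

# Hashes of previously removed images (hypothetical data).
KNOWN_BAD_HASHES = [imagehash.hex_to_hash("d1c48f0a3b2e5c71")]

MAX_DISTANCE = 5  # Hamming-distance tolerance; an assumed tuning value.

def matches_known_content(path: str) -> bool:
    """Flag an upload whose perceptual hash is near a known-bad hash."""
    upload_hash = imagehash.phash(Image.open(path))
    return any(upload_hash - bad <= MAX_DISTANCE for bad in KNOWN_BAD_HASHES)

if matches_known_content("upload.jpg"):
    print("Route to human review before the post is published.")
```

Perceptual hashes, unlike cryptographic ones, change only slightly when an image is resized or re-encoded, which is why a small Hamming-distance tolerance can catch near-duplicates of known content.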
Beyond individual posts, Facebook is also working to better identify users who may support terrorism based on signals from their profiles. For instance, if an account has liked the same pages and shares friends with users suspected of terrorism, Facebook could flag it as potentially connected to terrorist activity. The social network also wants to improve its ability to detect duplicate accounts set up by a single user, and it plans to extend these tools to other Facebook properties, such as the international chat app WhatsApp and Instagram.
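Facebook has not said how such profile signals are combined. As a rough, invented illustration, a handful of graph features, such as how many flagged pages an account likes and how many friends it shares with suspected accounts, could feed a simple score that decides whether the account is queued for human review. The features, weights and threshold below are hypothetical.

```python
# Illustrative sketch: scoring an account from graph signals such as
# liked pages and mutual friends. The features, weights and threshold
# are invented for this example, not Facebook's model.

def risk_score(liked_flagged_pages: int, mutual_friends_with_flagged: int) -> float:
    """Combine two assumed graph features into a naive linear score."""
    return 0.6 * liked_flagged_pages + 0.4 * mutual_friends_with_flagged

REVIEW_THRESHOLD = 3.0  # Assumed cutoff for sending to human review.

account = {"liked_flagged_pages": 4, "mutual_friends_with_flagged": 2}
score = risk_score(**account)
if score >= REVIEW_THRESHOLD:
    print(f"Score {score:.1f}: queue account for counterterrorism review.")
```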
“We want to find terrorist content immediately, before people in our community have seen it,” Facebook said in a statement. “Already, the majority of accounts we remove for terrorism we find ourselves. But we know we can do better at using technology — and specifically artificial intelligence — to stop the spread of terrorist content on Facebook.”
Facebook also plans to bolster its human staff to work alongside these AI initiatives. The company now has a 150-person team dedicated exclusively to counterterrorism, one that includes academics, former law enforcement officials and engineers, along with a secondary team that can respond to major law enforcement requests.
The moves build on Facebook's previous content moderation efforts. Last month, the company announced it would hire an additional 3,000 moderators to help prevent violent or illegal live videos from being broadcast.
The company also plans to partner with several major bodies in these efforts, including governments and companies like Microsoft and Twitter, to coordinate information and data sharing.
“We want Facebook to be a hostile place for terrorists,” Facebook said in a statement. “The challenge for online communities is the same as it is for real world communities — to get better at spotting the early signals before it’s too late.”
Facebook has long maintained a presence in safety and counterterrorism efforts, launching tools like its Safety Check feature, which allows users to mark themselves safe during attacks and other public threats. But the new initiatives come amid criticism calling for the company to be more proactive. Following the Pulse nightclub shooting last year and the 2015 Paris terrorist attacks, Facebook was among the companies sued by victims’ families for failing to prevent terrorist organizations from using their networks for recruiting and development.