Facebook's AI Scan Of Your Posts For Suicide Prevention Can't Be Disabled
Facebook has rolled out a new “proactive detection” artificial intelligence (AI) technology that scans all of a user’s posts for patterns suggesting suicidal thoughts.
When it deems intervention necessary, the system can surface helpful information to the user or their friends, or it may contact local first responders.
Responding to a question from TechCrunch, a Facebook spokesperson said that users cannot opt out of the feature.
The spokesperson added that the feature is meant to improve user safety, and that the support resources the social media behemoth offers can be easily dismissed if the user does not wish to see them.
Because an AI is expected to detect suicidal patterns faster than human reporting, Facebook hopes the deployment will cut the time it takes to get help to those at risk.
This isn't the first time that Facebook has used AI in such a manner.
The company had earlier tested the technology to detect problematic posts and to report suicidal thoughts to users’ friends, but that testing was limited to the US.
Now the social media giant will deploy the AI to scan content from across the globe, except in the European Union, where the General Data Protection Regulation’s privacy rules make the use of such technology considerably more complicated.
Facebook will also use the AI to prioritize urgent or high-risk user reports so that moderators can address them more quickly.
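For readers curious how this kind of triage works in general, it typically amounts to a priority queue ordered by risk score. The sketch below is purely illustrative and is not Facebook's implementation; the class name, report IDs and scores are invented for demonstration, and in a real system the scores would come from the classifier itself.

```python
import heapq

# Illustrative sketch of report triage: higher-risk reports are popped first
# so moderators see the most urgent cases. All names and scores here are
# hypothetical; this is not Facebook's actual system.

class ReportQueue:
    """Max-priority queue of user reports keyed by risk score."""

    def __init__(self) -> None:
        # heapq is a min-heap, so risk scores are negated on insertion
        self._heap: list[tuple[float, str]] = []

    def push(self, report_id: str, risk_score: float) -> None:
        heapq.heappush(self._heap, (-risk_score, report_id))

    def pop_most_urgent(self) -> str:
        _, report_id = heapq.heappop(self._heap)
        return report_id

queue = ReportQueue()
queue.push("report-1", 0.2)
queue.push("report-2", 0.9)  # high risk: jumps ahead of earlier reports
queue.push("report-3", 0.5)
print(queue.pop_most_urgent())  # report-2
```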
Tools that would help instantly surface first-responder contact information and local language resources are also part of the strategy.
Facebook will also dedicate more human moderators to suicide prevention, and will train them to deal with such cases around the clock, seven days a week.
To provide resources to at-risk users and their friends, Facebook also works with 80 local partners, including the National Suicide Prevention Lifeline, Save.org and Forefront.
While all this sounds promising, there is a flipside: the fear that such unrestricted scanning of content could lead to a dystopian surveillance scenario. What makes this fear more tangible is that Facebook has yet to give any solid answer on how it intends to avoid scanning for content that might suggest, say, petty crime or political dissent.
Alex Stamos, Facebook’s chief security officer, responded to such concerns in a tweet.
CEO Mark Zuckerberg, not surprisingly, was full of praise for the product, making no mention of its possible downsides.
In a post Monday, he wrote: “In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate.”
Facebook trained the AI to spot suicidal thought patterns using earlier posts that had been manually reported for suicide risk. The AI also scans comments for phrases such as “Are you OK?” and “Do you need help?”
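For intuition only, the toy sketch below shows the general idea of scoring a post and its comments against known risk phrases. It is a deliberately crude illustration, not Facebook's classifier, which is proprietary and trained on reported posts rather than a hand-written phrase list; the phrases, weights and threshold here are all assumptions.

```python
# Toy illustration of phrase-based risk flagging. This is NOT Facebook's
# classifier; the phrases, weights, and threshold below are invented.

RISK_PHRASES = {
    "are you ok": 0.4,
    "do you need help": 0.6,
}

def risk_score(text: str) -> float:
    """Return a crude 0..1 risk score based on the strongest phrase match."""
    lowered = text.lower()
    return max((w for p, w in RISK_PHRASES.items() if p in lowered), default=0.0)

def flag_for_review(post: str, comments: list[str], threshold: float = 0.5) -> bool:
    """Flag a post for human review if the post or any comment crosses the threshold."""
    signals = [risk_score(post)] + [risk_score(c) for c in comments]
    return max(signals) >= threshold

# A worried comment pushes an otherwise unremarkable post over the threshold.
print(flag_for_review("feeling really low today", ["Are you OK?", "Do you need help?"]))  # True
```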
For now, the AI is used only in the US, but it is expected to be expanded to other parts of the world soon; exact dates are not yet available.