Suicide Prevention: Facebook Is Using Artificial Intelligence To Identify Suicidal Users
In an effort to improve user safety on its platform, Facebook has started testing the use of artificial intelligence to identify users who may be showing signs of suicide, according to a report from the BBC.
The social network is using algorithms that can spot warning signs in user activity, including posts, comments and interactions with friends, that may indicate a person is feeling suicidal.
When the AI flags a person as potentially at risk, it alerts a team of human reviewers at Facebook, who then reach out to that user and offer resources where they can find help.
Facebook is using pattern recognition to spot people who may need help. Mentions of sadness or pain, or comments from friends asking "Are you ok?" or telling the poster "I'm worried about you," are among the indicators the algorithm looks for.
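The report does not describe Facebook's model in any detail, but a minimal sketch of this kind of phrase-based pattern matching could look like the following. The phrase lists, weights and threshold here are illustrative assumptions, not Facebook's actual system.

```python
import re

# Illustrative indicator phrases drawn from the article; the real system's
# features, weights and escalation threshold are not public.
POST_INDICATORS = [r"\bsad\b", r"\bpain\b", r"\bhopeless\b"]
COMMENT_INDICATORS = [r"\bare you ok\b", r"\bi'?m worried about you\b"]

def risk_score(post_text, comment_texts):
    """Count indicator phrases appearing in a post and in its comments."""
    score = 0
    for pattern in POST_INDICATORS:
        if re.search(pattern, post_text.lower()):
            score += 1
    for comment in comment_texts:
        for pattern in COMMENT_INDICATORS:
            if re.search(pattern, comment.lower()):
                score += 1
    return score

def should_escalate(post_text, comment_texts, threshold=2):
    """Flag the post for human review if enough indicators appear."""
    return risk_score(post_text, comment_texts) >= threshold

# Example: a post mentioning pain plus a concerned comment crosses the threshold.
print(should_escalate("I can't take this pain anymore",
                      ["Are you ok? I'm worried about you"]))
```

In a real deployment, a flagged post would simply be queued for the human review team described above rather than acted on automatically.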
The technology is currently being tested only on Facebook users in the United States, but it marks the first public use of artificial intelligence to read and analyze activity on the network since Facebook CEO Mark Zuckerberg spoke about the possibility of unleashing AI on the platform last month.
In a letter published on Facebook last month, the company founder noted his interest in improving Facebook as a community, including building tools to make the platform more supportive and responsive.
"For some of these problems, the Facebook community is in a unique position to help prevent harm," he wrote. "When someone is thinking of suicide or hurting themselves, we've built infrastructure to give their friends and community tools that could save their life."
Before implementing artificial intelligence to identify suicidal users, Facebook left the task to its community. Users could, and still can, flag a friend who appears to be suicidal by reporting a post, and Facebook would offer resources to those users, much as it now does with those spotted by the AI.
Posts flagged by other users have been used to fine-tune Facebook's algorithm, teaching the AI what to look for.
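The article does not explain how those reports feed back into the model. One conventional way to use human-flagged posts as training labels is a supervised text classifier along these lines; this is a sketch with made-up toy data and a generic baseline model, not Facebook's pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in data: posts previously reported by users (label 1) and
# posts that were not (label 0). Real training data would be far larger.
posts = [
    "I can't go on anymore, everything hurts",
    "I feel so hopeless and alone",
    "Had a great day at the beach with friends",
    "Excited for the concert this weekend",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression is a common baseline for this
# kind of text classification; Facebook's actual model is not public.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# New posts are scored; high-probability ones would go to human reviewers.
new_post = ["I just feel so much pain lately"]
print(model.predict_proba(new_post)[0][1])  # probability of the "reported" class
```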
In the future, Facebook may go beyond offering resources to a user showing signs of self-harm. The platform may reach out to family members and friends to encourage them to contact the at-risk user, though such a feature raises considerable privacy concerns.
Suicide has become a particularly sensitive topic for Facebook, as the company's live-streaming video service Facebook Live has hosted several incidents in which users have killed themselves on camera, including a 14-year-old girl in Miami, Fla., who livestreamed her suicide in January.
Facebook also announced that it would address suicide risk on Facebook Live by partnering with mental health organizations and allowing vulnerable users to contact those services through Facebook Messenger.