Meta Resumes Facial Recognition Tech To Combat Celeb-bait Scams
Meta Platforms Inc. announced Monday that it has resumed testing facial recognition technology to combat scams that exploit images of celebrities, a tactic known as "celeb-bait."
The company will enroll nearly 50,000 public figures in a global trial beginning in December. The trial will exclude jurisdictions where the company lacks regulatory clearance, including Britain, the European Union, South Korea, and the U.S. states of Texas and Illinois.
According to the company's blog post, Meta will begin testing a new method of identifying celeb-bait scams: facial recognition is used to compare faces in ads to public figures' profile pictures on Facebook and Instagram. If a match is confirmed and the ad is determined to be a scam, it will be blocked.
Any facial data used for this one-time comparison will be deleted immediately and not used for any other purpose, Meta stated.
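Meta has not published technical details of how this comparison works. The sketch below is purely illustrative of the kind of one-time check described above, assuming face embeddings are produced upstream by some face-embedding model; the similarity threshold, function names, and the separate "ad classified as a scam" signal are all hypothetical.

```python
# Illustrative sketch only -- not Meta's published implementation. Embeddings are
# assumed to come from an upstream face-embedding model; the threshold and the
# "ad already classified as a scam" signal are hypothetical.
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # hypothetical cutoff for declaring a face match


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def should_block_ad(ad_face_embedding: np.ndarray,
                    profile_embeddings: list[np.ndarray],
                    ad_is_scam: bool) -> bool:
    """Block the ad only if the face in it matches one of an enrolled public
    figure's profile pictures AND the ad has independently been flagged as a scam."""
    matched = any(
        cosine_similarity(ad_face_embedding, ref) >= SIMILARITY_THRESHOLD
        for ref in profile_embeddings
    )
    return matched and ad_is_scam
```

Under this reading, the face match alone is not enough to block an ad; it only applies when the ad has separately been judged to be a scam, which tracks the company's description of the two conditions.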
Meta said initial testing with a select group of celebrities and public figures has shown encouraging results, improving the speed and accuracy with which it detects and takes action against scams that exploit their images, such as celeb-bait ads.
In the coming weeks, Meta, the owner of Facebook and Instagram, plans to expand this effort by sending in-app notifications to a larger group of public figures who have been targeted by celeb-bait scams. These notifications will inform them that they are being enrolled in the protection system.
Public figures enrolled in this protection can opt out at any time through their Accounts Center.
Scammers frequently exploit the images of well-known public figures to trick users into engaging with fraudulent advertisements. These ads often lead to scam websites that ask users to provide personal information or send money. Meta's ad review system already uses automated technology, including machine learning, to detect violations such as scams.
"The idea here is: roll out as much protection as we can for them. They can opt out of it if they want to, but we want to be able to make this protection available to them and easy for them," Monika Bickert, Meta's vice president of content policy, told reporters.
Meta disabled its facial recognition system in 2021, deleting face scan data of one billion users, citing "growing societal concerns."
In August of this year, the company agreed to pay the state of Texas $1.4 billion to settle a lawsuit accusing it of collecting biometric data without proper consent.
Additionally, Meta is facing other lawsuits alleging that it hasn't done enough to prevent scams that misuse celebrity images.
The tool being tested went through Meta's internal "robust privacy and risk review process" and was also discussed with regulators, policymakers, and privacy experts, Bickert said.
Meta is also testing video selfies as a way for users on Facebook and Instagram to verify their identity and regain access to accounts that have been locked.
The user will upload a video selfie and Meta will use facial recognition technology to compare the selfie to the profile pictures on the account they're trying to access.
The video selfies will never be visible on the user's profile, to friends, or to anyone else on Facebook or Instagram, and any facial data generated by the comparison will be deleted immediately, regardless of whether there is a match.
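Again, Meta has not described the mechanics beyond the high-level comparison above. A minimal sketch of such a flow might look like the following, assuming a hypothetical embedding routine and threshold; the immediate-deletion step mirrors the deletion policy Meta describes.

```python
# Illustrative sketch only -- not Meta's published implementation. The embedding
# routine below is a placeholder for a real face-detection + embedding model, and
# the match threshold is hypothetical.
import numpy as np

MATCH_THRESHOLD = 0.8  # hypothetical


def extract_face_embedding(frame: np.ndarray) -> np.ndarray:
    # Placeholder: a production system would run a face-embedding model here.
    # For this sketch we just flatten and normalise a slice of pixel values.
    vec = frame.astype(np.float64).ravel()[:128]
    return vec / (np.linalg.norm(vec) + 1e-9)


def verify_video_selfie(selfie_frames: list[np.ndarray],
                        profile_embedding: np.ndarray) -> bool:
    """One-time comparison of video-selfie frames against the account's profile
    picture embedding; derived facial data is discarded before returning."""
    facial_data = [extract_face_embedding(f) for f in selfie_frames]
    try:
        return any(
            float(np.dot(e, profile_embedding)
                  / (np.linalg.norm(e) * np.linalg.norm(profile_embedding) + 1e-9))
            >= MATCH_THRESHOLD
            for e in facial_data
        )
    finally:
        facial_data.clear()  # discard facial data whether or not there was a match
```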
"Video selfie verification expands on the options for people to regain account access, only takes a minute to complete and is the easiest way for people to verify their identity. While we know hackers will keep trying to exploit account recovery tools, this verification method will ultimately be harder for hackers to abuse than traditional document-based identity verification," Meta stated.
© Copyright IBTimes 2024. All rights reserved.