Apple's Plan To Protect Children From Exploitation Has An Uncertain Future After Backlash
Apple’s plans to protect children from sexual exploitation on its platforms have hit a new wall following criticism that they contradict the company’s own privacy commitments. Apple announced in a statement that, after listening to the complaints, it would put the plan on hold and return to the drawing board.
“Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features,” the company said.
Last month, Apple unveiled plans to begin scanning users’ devices for the "digital fingerprints" of known child sexual abuse material (CSAM) in images destined for iCloud accounts. Before an image is uploaded to iCloud, Apple matches the image’s hash against a database of hashes provided by the National Center for Missing and Exploited Children (NCMEC).
If a match is detected, Apple would then decrypt the flagged material for review by a human moderator. If the reviewer confirms the image is CSAM, the iCloud account would be locked and a report filed, with law enforcement involved if necessary. Other companies, including Facebook, have used similar hash-matching techniques for the same purpose.
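To make the reported flow concrete, the sketch below illustrates the general hash-matching idea in Python. It is a simplified illustration, not Apple’s implementation: Apple’s published design used a perceptual hash (NeuralHash) combined with private set intersection and a match threshold, whereas this sketch substitutes a plain SHA-256 fingerprint, and the function names, database format, and threshold value are assumptions chosen for readability.

    import hashlib
    from pathlib import Path

    # Illustrative threshold; Apple described a threshold-based design but
    # did not disclose the value when the plan was announced.
    MATCH_THRESHOLD = 3

    def load_known_hashes(db_path: str) -> set[str]:
        # Hypothetical format: one hex digest per line, as NCMEC-provided
        # hashes might be distributed in a simplified system.
        return {line.strip() for line in Path(db_path).read_text().splitlines()
                if line.strip()}

    def image_fingerprint(image_path: str) -> str:
        # Stand-in fingerprint: SHA-256 of the raw file bytes. A real system
        # would use a perceptual hash that survives resizing and re-encoding.
        return hashlib.sha256(Path(image_path).read_bytes()).hexdigest()

    def scan_before_upload(image_paths: list[str],
                           known_hashes: set[str]) -> list[str]:
        # Check each image against the known-hash database before upload,
        # mirroring the pre-iCloud matching step described above.
        return [p for p in image_paths if image_fingerprint(p) in known_hashes]

    def should_escalate(matched: list[str]) -> bool:
        # Only hand matches to a human reviewer once they cross the
        # threshold, so isolated false positives are never surfaced.
        return len(matched) >= MATCH_THRESHOLD

In Apple’s actual design, the matching result is cryptographically hidden from the device itself and only becomes readable server-side once the threshold is crossed; the plain set lookup here stands in for that machinery.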
Critics, however, rebuked the plan as potentially too invasive and at odds with Apple's public stance on user privacy. After first announcing its plans at the start of last month, Apple defended itself by saying its method was in fact more respectful of user privacy than earlier approaches to rooting out CSAM. It made its process available for review by cryptography experts to show that the system was strictly limited to matching against the hashes provided by NCMEC and not open to abuse in other contexts.
Apple’s balancing act between privacy and safety has been a consistent challenge for the company. In 2016, it came under pressure from the FBI, which demanded it break the encryption on an iPhone used by the perpetrator of the December 2015 terrorist attack in San Bernardino, California. The FBI eventually found a contractor who provided a workaround to access the device, but Apple was lauded by privacy advocates for holding firm in a difficult situation.
However, Apple has itself been accused of violating user privacy by monitoring communications and sharing them with brokers to produce targeted advertisements. A former Apple contractor, Thomas le Bonniec, blew the whistle in 2019, revealing that the company was sharing recorded conversations, which often included highly personal material, with third parties. Apple apologized, but le Bonniec warned that it did not face enough accountability for its actions.
More recently, a federal judge allowed a class-action lawsuit to move forward accusing Apple of using its voice-activated assistant Siri to monitor users’ conversations without their knowledge.