It’s been a confusing few days since Apple first announced its intention to scan photos uploaded to iCloud for known images of Child Sexual Abuse Material (CSAM).
Privacy advocates have objected in strong terms to the move, which would see scanning performed on the device itself before photos are uploaded to iCloud. To confuse things further, Apple said in its FAQ [PDF] that this functionality would essentially be disabled if users chose not to use iCloud. Privacy campaigners fear the move could lead to pressure from authoritarian governments for Apple to expand the functionality to help crack down on dissident activity.
In a bid to take the sting out of the controversy, Apple has issued some clarifications. As Reuters reports, Apple now says that its scanner will only hunt for CSAM images flagged by clearinghouses in multiple countries, and that it will be simple for researchers to check that the image identifiers are universal across devices, proving that the system couldn’t be adapted to target individuals.
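Apple hasn’t published its implementation, but the "multiple countries" safeguard can be modeled as a simple set intersection: a hash only enters the on-device database if independent clearinghouses in different jurisdictions have all flagged it. A minimal sketch (the function name and the hash values are invented for illustration):

```python
# Hypothetical model of the multi-clearinghouse rule: a hash is only
# included if every clearinghouse's list contains it, so no single
# government can slip a target image into the database on its own.

def build_shared_database(*clearinghouse_lists):
    """Return only the hashes present in every clearinghouse's list."""
    hash_sets = [set(lst) for lst in clearinghouse_lists]
    return set.intersection(*hash_sets)

# Illustrative (fake) perceptual-hash values from two jurisdictions:
us_list = {"a1f3", "b2c9", "d4e7"}
eu_list = {"b2c9", "d4e7", "f901"}

shared = build_shared_database(us_list, eu_list)
print(sorted(shared))  # only the hashes flagged in both jurisdictions
```

Because the database ships identically to every device, researchers could in principle verify that the same intersection is present everywhere, which is the auditability Apple is pointing to.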
The company also added that it would take 30 matched CSAM images before the system prompts a human review at Apple, and before any official report can be filed. This, in part, explains why Apple felt it could promise a false-positive rate of less than one in a trillion per year.
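Apple hasn’t published the math behind that figure, but a rough sketch shows why a 30-match threshold drives the account-level error rate so low. The per-image false-match rate and yearly photo count below are assumed for illustration, and the Poisson approximation to the binomial tail is mine, not Apple’s:

```python
from math import exp, factorial

def tail_at_least(lam, threshold, terms=40):
    """Poisson approximation to P(at least `threshold` false matches),
    where `lam` is the expected number of false matches per year.
    Terms past `threshold + terms` are negligible here."""
    return sum(exp(-lam) * lam**k / factorial(k)
               for k in range(threshold, threshold + terms))

# Assumed figures: a 1-in-a-million per-image false-match rate and
# 10,000 photos uploaded per year, i.e. 0.01 expected false matches.
expected_false_matches = 1e-6 * 10_000
p_flagged = tail_at_least(expected_false_matches, threshold=30)
print(p_flagged)  # vastly below one in a trillion
```

Even with generous assumptions, requiring 30 independent false matches before review compounds a small per-image error into an astronomically small per-account one.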
Apple refused to say whether these were adjustments made in the face of criticism or specifics that were always in place, though it did add that, as the policy is still in development, changes should be expected.
Nonetheless, privacy advocates believe they’re making a difference. “Even if they don't ultimately nix the plan, we're forcing them to do the work they should've done by consulting us all along,” tweeted Stanford University surveillance researcher Riana Pfefferkorn. “Keep pushing.”
Most recently, Apple VP of software engineering Craig Federighi told the Wall Street Journal that Apple’s new policies are “much more private than anything that's been done in this area before.”
“We, who consider ourselves absolutely leading on privacy, see what we are doing here as an advancement of the state of the art in privacy, as enabling a more private world,” he said. Adding that the system had been developed “in the most privacy-protecting way we can imagine and in the most auditable and verifiable way possible,” he painted the company’s solution as preferable to its cloud storage rivals, which look at and analyze “every single photo.”
Federighi argued that critics don’t fully understand Apple’s implementation, but acknowledged that the company is partly to blame for not explaining things clearly. Announcing CSAM scanning at the same time as its protections for minors using iMessage meant the two were erroneously conflated, he conceded.
"We wish that this would've come out a little more clearly for everyone because we feel very positive and strongly about what we're doing,” he said.
The “we” in that sentence may imply more uniform support within the company than actually exists. On Friday, Reuters revealed that the move had proved equally divisive inside Apple, with more than 800 messages about the plan appearing on the company’s internal Slack.