Apple CSAM detection system branded "dangerous" by academics


Two academics who built a similar system have labelled Apple's recently announced system for detecting child sexual abuse material (CSAM) as dangerous. Their criticism arrives only a few weeks after news broke that Apple will scan devices for CSAM, working alongside the National Center for Missing and Exploited Children.

This new report arrives courtesy of The Washington Post, which provided the two academics with a platform to discuss their worries over the technology. Jonathan Mayer is an assistant professor of computer science and public affairs at Princeton, and Anunay Kulshrestha is a graduate researcher at the Princeton University Center for Information Technology Policy.

Dangerous Technology 

Kulshrestha and Mayer spoke about their experience writing a peer-reviewed publication on how to build such a system, which is why Apple's intention to use similar technology has both of them worried. “Our research project began two years ago, as an experimental system to identify CSAM in end-to-end-encrypted online services. As security researchers, we know the value of end-to-end encryption, which protects data from third-party access.”

“But we’re also horrified that CSAM is proliferating on encrypted platforms. And we worry online services are reluctant to use encryption without additional tools to combat CSAM. We sought to explore a possible middle ground, where online services could identify harmful content while otherwise preserving end-to-end encryption,” added Kulshrestha and Mayer. 
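In broad terms, the “middle ground” the researchers describe is usually built as client-side matching: the device checks a fingerprint of each image against a database of fingerprints of known harmful material before the image is encrypted and uploaded. The sketch below illustrates that flow only; the fingerprint function, database and callbacks are placeholder assumptions, not the researchers' or Apple's actual design.

# Illustrative sketch of client-side matching before end-to-end encryption.
# Everything here (hash choice, database, callbacks) is a placeholder.
import hashlib

# Hypothetical database of fingerprints of known harmful images,
# shipped to the client as opaque hashes only.
KNOWN_HARMFUL_FINGERPRINTS = {"placeholder-fingerprint"}

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in fingerprint. Real systems use perceptual hashes, which
    # tolerate resizing and re-encoding; a cryptographic hash does not.
    return hashlib.sha256(image_bytes).hexdigest()

def upload(image_bytes: bytes, encrypt, send, report):
    # Check locally, then encrypt and send. Only matches are flagged;
    # non-matching content stays end-to-end encrypted.
    if fingerprint(image_bytes) in KNOWN_HARMFUL_FINGERPRINTS:
        report(image_bytes)
    send(encrypt(image_bytes))

The point of doing the check on the device is that the service never needs to decrypt uploads, which is exactly the property the researchers were trying to preserve, and also the reason the same mechanism could silently be pointed at any other kind of content.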

Kulshrestha and Mayer believe the system could be repurposed for surveillance and censorship because nothing in its design restricts it to CSAM. Apple or any other platform holder could repurpose the technology without alerting the end user, which would be a massive infringement of privacy, a basic human right.


Real-World Examples

As both academics rightly point out, there are already examples of governments using such technology for political ends. WeChat in China uses content-matching technology to scan for dissident material. There is plenty of evidence to suggest that governments and other powerful bodies would abuse technology of this kind.

Kulshrestha and Mayer said: “We spotted other shortcomings. The content-matching process could have false positives, and malicious users could game the system to subject innocent users to scrutiny.”

“We were so disturbed that we took a step we hadn’t seen before in computer science literature. We warned against our own system design, urging further research on how to mitigate the serious downsides. We’d planned to discuss paths forward at an academic conference this month,” added the academics. 
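The false positives the researchers mention follow from how content matching works: perceptual hashes are compared with a distance threshold, so two unrelated images whose hashes happen to land close together are treated as the same picture. The numbers below are made up purely to illustrate that failure mode.

# Why threshold-based matching can misfire: unrelated images whose
# perceptual hashes are close enough are reported as a match.
def hamming_distance(a: int, b: int) -> int:
    # Count differing bits between two 64-bit perceptual hashes.
    return bin(a ^ b).count("1")

MATCH_THRESHOLD = 8  # illustrative: hashes this close count as "the same image"

database_hash   = 0b1010110011110000101011001111000010101100111100001010110011110000
unrelated_image = 0b1010110011110000101011001111000010101100111100001010110011111111

# Only 4 bits differ, so the unrelated image is flagged: a false positive.
# An attacker able to craft such near-collisions on purpose could subject
# an innocent user to exactly this kind of scrutiny.
if hamming_distance(database_hash, unrelated_image) <= MATCH_THRESHOLD:
    print("match reported")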

