Apple will reportedly scan photos stored on iPhones and iCloud for child abuse imagery

Apple plans to scan photos stored on iPhones and iCloud for child abuse imagery, according to the Financial Times. The new system could help law enforcement in criminal investigations but may open the door to increased legal and government demands for user data.

The system, called neuralMatch, will “proactively alert a team of human reviewers if it believes illegal imagery is detected, who would then contact law enforcement if the material can be verified,” the Financial Times said. neuralMatch, which was trained using 200,000 images from the National Center for Missing & Exploited Children, will roll out first in the US. Photos will be hashed and compared with a database of known images of child sexual abuse.
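The core mechanism the report describes is hash-based matching: a fingerprint of each photo is compared against a database of fingerprints of known abuse imagery, so the system never needs to "look at" photo content directly. The sketch below illustrates that idea only; it is a hypothetical simplification, not Apple's design. Real systems of this kind use perceptual hashes (such as Microsoft's PhotoDNA), which tolerate resizing and re-encoding, whereas the cryptographic hash used here matches only byte-identical files.

```python
import hashlib

# Illustrative stand-in for a database of hashes of known images.
# The entry here is a placeholder, not real data.
KNOWN_HASHES = {
    hashlib.sha256(b"known-image-bytes").hexdigest(),
}

def photo_hash(image_bytes: bytes) -> str:
    """Fingerprint a photo. A real system would use a perceptual hash,
    robust to compression and resizing; SHA-256 is used here purely
    for illustration and matches only exact byte-for-byte copies."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_match(image_bytes: bytes) -> bool:
    """True if the photo's fingerprint appears in the known-hash database."""
    return photo_hash(image_bytes) in KNOWN_HASHES
```

The privacy concern critics raise follows directly from this structure: the matching works against whatever hashes are in the database, and the user has no way to inspect what that database contains.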

The system will be used first in the US

“According to people briefed on the plans, every photo uploaded to iCloud in the US will be given a ‘safety voucher,’ saying whether it is suspect or not,” the Financial Times said. “Once a certain number of photos are marked as suspect, Apple will enable all the suspect photos to be decrypted and, if apparently illegal, passed on to the relevant authorities.”
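The quoted description amounts to a threshold scheme: individual vouchers do nothing on their own, and human review (with decryption) is only unlocked once enough photos on an account are marked suspect. The sketch below is a hypothetical illustration of that accounting logic; the class names and the threshold value are assumptions, since the Financial Times report does not disclose the actual number or implementation.

```python
from dataclasses import dataclass

# Illustrative threshold; the real value was not disclosed.
THRESHOLD = 10

@dataclass
class Account:
    """Tracks 'safety vouchers' for one iCloud account, per the FT's
    description: each upload is marked suspect or not, and review is
    enabled only past a threshold of suspect vouchers."""
    suspect_vouchers: int = 0

    def add_voucher(self, suspect: bool) -> None:
        # Each uploaded photo contributes one voucher; only suspect
        # vouchers count toward the threshold.
        if suspect:
            self.suspect_vouchers += 1

    def review_enabled(self) -> bool:
        # Decryption and human review are gated on the suspect count.
        return self.suspect_vouchers >= THRESHOLD
```

A threshold design like this is typically meant to limit false positives: one spurious match reveals nothing, and only a pattern of matches triggers escalation.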

Johns Hopkins University professor and cryptographer Matthew Green raised concerns about the system on Twitter Wednesday night. “This sort of tool can be a boon for finding child pornography in people’s phones,” Green said. “But imagine what it could do in the hands of an authoritarian government?”

“Even if you believe Apple won’t allow these tools to be misused [crossed fingers emoji] there’s still a lot to be concerned about,” he added. “These systems rely on a database of ‘problematic media hashes’ that you, as a consumer, can’t review.”

Apple already checks iCloud files against known child abuse imagery, like every other major cloud provider. But the system described here would go further, allowing central access to local storage. It would also be trivial to extend the system to crimes other than child abuse — a particular concern given Apple’s extensive business in China.

Apple informed some US academics about the system this week and may share more publicly “as soon as this week,” according to two security researchers briefed on an earlier Apple meeting, the Financial Times reports.

Apple has previously touted the privacy protections built into its devices, and famously stood up to the FBI when the agency wanted Apple to build a backdoor into iOS to access an iPhone used by one of the shooters in the 2015 attack in San Bernardino. The company did not respond to a request for comment on the Financial Times report.
