Apple’s CSAM Detection System and Its Messaging Are a Work in Progress

Apple has been talking and talking and talking about how its upcoming system for protecting children will also leave our privacy intact

Lance Ulanoff
OneZero
Aug 13, 2021 · 4 min read


Photo by Laurenz Heymann on Unsplash

What a week it’s been for Apple. It’s trying to do something important: protect children from abuse. But by revealing its plans well in advance of launching the tools and technology, the normally unflappable Cupertino tech giant opened a Pandora’s box of questions and concerns.

Since then, it’s been on an information-sharing offensive, speaking to the media about the intentions and specific technical underpinnings of its CSAM Detection technology, and even having high-level executives sit down for on-the-record one-on-ones to eradicate misinformation and calm everyone down.

It may or may not have worked.

For me, it’s been a learning experience, as I try to understand the intricacies of not one but two systems that will identify harmful or illicit images: one for images shared through the iPhone’s Messages app, and the other for images uploaded to iCloud Photos. The two systems use different technology and respond in different ways, but by introducing them together, Apple, by its own admission, created confusion. As Apple’s Senior Vice President of Software Engineering Craig Federighi explained to The Wall Street Journal this week,

“By releasing them at the same time, people technically connected them and got very scared: What’s happening with my messages? The answer is…nothing is happening with your messages.”

Leaving aside the Communication Safety feature, which flags these images as part of iOS’s Screen Time tools and never sends the images, or even a notification, to Apple (a notification goes only to the parents of an under-13 child in an iCloud Family Sharing group), the meatier subject, and one I’m still learning more about each day, is Apple’s CSAM (Child Sexual Abuse Material) Detection.
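To make that distinction concrete, here is roughly how I picture the Communication Safety flow. To be clear, this is my own illustrative sketch based on Apple’s public description; the type names and logic are my assumptions, not Apple’s actual code:

```swift
// A minimal sketch of the Communication Safety decision flow as Apple
// has described it publicly. All names here are illustrative assumptions,
// not Apple's API.
struct ChildAccount {
    let age: Int
    let inFamilySharingGroup: Bool
}

enum SafetyAction {
    case blurAndWarnChild   // happens on-device for any flagged image
    case notifyParents      // only for an under-13 child in Family Sharing
}

func actions(for account: ChildAccount, imageFlagged: Bool) -> [SafetyAction] {
    guard imageFlagged else { return [] }
    var result: [SafetyAction] = [.blurAndWarnChild]
    // Nothing is ever sent to Apple. At most, the parents of an
    // under-13 child in a Family Sharing group get a notification.
    if account.age < 13 && account.inFamilySharingGroup {
        result.append(.notifyParents)
    }
    return result
}

print(actions(for: ChildAccount(age: 11, inFamilySharingGroup: true),
              imageFlagged: true))
```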

Earlier this week, I wrote about the parameters for detection and reporting. But I was missing a key piece of information: what is the threshold for flagging this content and for starting the process of evaluation and potential reporting to the authorities?
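In that same Journal interview, Federighi reportedly put the number at something on the order of 30 known images. To illustrate what a threshold like that implies, here’s a rough sketch; again, the names and structure are my own assumptions, not Apple’s implementation:

```swift
import Foundation

// Illustrative sketch of threshold-based flagging. The voucher type and
// the exact threshold value are assumptions for demonstration only.
struct SafetyVoucher {
    let imageID: UUID
    let matchedKnownHash: Bool  // did on-device matching hit the known-CSAM hash list?
}

func shouldEscalateForReview(_ vouchers: [SafetyVoucher],
                             threshold: Int = 30) -> Bool {
    // The account is only flagged for human review once the number of
    // matching vouchers crosses the threshold; a single match does nothing.
    let matches = vouchers.filter { $0.matchedKnownHash }.count
    return matches >= threshold
}

// Example: 29 matches stays below a threshold of 30, so nothing escalates.
let sample = (0..<29).map { _ in
    SafetyVoucher(imageID: UUID(), matchedKnownHash: true)
}
print(shouldEscalateForReview(sample))  // false
```

The point of the threshold, in Apple’s published design, is that no single match can trigger anything; the system is built (using threshold secret sharing) so that Apple cannot even decrypt the matching vouchers until an account crosses that line.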
