Since you worked on an actual contract catching these sorts of people, you are perhaps in a unique position to answer the question: will this sort of blanket surveillance technique, in general but also on iOS specifically, actually work to help catch them?
I have direct knowledge of cases where individuals were arrested and convicted of sharing CP online, and they were identified because a previous employer of mine used PhotoDNA analysis on all user-uploaded images. So yeah, this type of thing can catch bad people. I'm still not convinced Apple doing this is a good thing, especially on private media content without a warrant, even though the technology can help catch criminals.
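For what it's worth, PhotoDNA itself is proprietary and only licensed under NDA, so I can't show the real thing, but the general shape of that kind of server-side pipeline looks roughly like the sketch below. It substitutes a toy difference hash for PhotoDNA, and the names and values are made up:

```python
from PIL import Image

def dhash(image, hash_size=8):
    # Shrink to (hash_size+1) x hash_size grayscale, then record whether
    # each pixel is brighter than its right-hand neighbour.
    img = image.convert("L").resize((hash_size + 1, hash_size), Image.LANCZOS)
    px = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = px[row * (hash_size + 1) + col]
            right = px[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | int(left > right)
    return bits

def hamming(a, b):
    # Number of bits on which two 64-bit hashes disagree.
    return bin(a ^ b).count("1")

# Hashes of previously reviewed images, supplied by a clearinghouse such
# as NCMEC; the operator never classifies content itself. Placeholder value.
KNOWN_BAD = {0x1A2B3C4D5E6F7081}
MATCH_THRESHOLD = 5  # max differing bits still treated as the same image

def flag_upload(path):
    # Hash one user upload and check it against the known-bad list.
    h = dhash(Image.open(path))
    return any(hamming(h, bad) <= MATCH_THRESHOLD for bad in KNOWN_BAD)
```

The thing to notice is that the decision is a lookup against a fixed list of hashes, not a judgment about what the image depicts.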
Now I'm afraid. I have two young children, both under 5 years old.
I have occasionally taken pictures of them naked, showing some bumps on the skin or a mosquito bite, and sent them to my wife over WhatsApp so we could decide whether we need to take them to a doctor. Do I have to fear now that I will be marked as distributing CP?
It’s not just you. I have pictures of my kids playing in the bath. No genitals are in shot and it’s just kids innocently playing with bubbles. The photos aren’t even shared but they’d still get scanned by this tool.
This kind of thing isn't even unusual either. I know my parents have pictures of myself and my siblings playing in the bath (obviously taken on film rather than digitally) and I know friends have pictures of their kids too.
While the difference between innocent images and something explicit is easy for a human to identify, I'm not sure I'd trust AI to understand that nuance.
That you even have to consider sexual interpretations of your BABY'S GENITALS is an affront to me. I have pictures of my baby completely naked, because it is, and I stress this, A BABY. They play naked all the time, it's completely normal.
Yeah that’s a fair point. The only reason I was careful was just in case those photos got leaked and taken out of context. Which is a bloody depressing thing to consider when innocently taking pictures of your own family :(
> no court is going to indict you because you have baby pictures on your phone
Maybe, maybe not. Bad luck is possible with anything involving police, prosecutors, judges, and juries. Need justification for that point of view? Just look at the number of people who were convicted and spent time in jail who truly were innocent. That doesn't even touch on the possible repercussions that can happen from just being questioned/arrested and later let go.
Don't immediately take affront, take the best possible interpretation of the parent comment. This is about automatic scanning of people's photo libraries in the context of searching for child pornography, presumably through some kind of ML. It seems to me that the concern of the commenter is that if there are photos of their child's genitals that they'll be questioned about creating child pornography, not that they're squeamish about photographing their child's genitals. This happened in 1995 in the UK: https://www.independent.co.uk/news/julia-somerville-defends-...
> While the difference between innocent images and something explicit is easy for a human to identify, I'm not sure I'd trust AI to understand that nuance.
In this case it’s not AI that’s understanding the nuance, it’s authorities that identify the exact pictures they want to track and then this tool lets them identify what phones/accounts have that photo (or presumably took it). If ‘AI’ is used here it is to detect if one photo contains all/part of another photo, rather than to determine if the photo is abusive or not.
Although there is a legitimate slippery slope argument to be had here.
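To make that concrete, here's roughly what "matching the same photo" looks like with an off-the-shelf perceptual hash library (imagehash; the file names are placeholders). A resized, re-encoded copy of a tracked picture stays within a few bits of the original, while an unrelated photo lands far away. Nothing in this process understands what the image shows:

```python
import io

import imagehash
from PIL import Image

photo = Image.open("tracked.jpg")     # stands in for a known, tracked image
other = Image.open("unrelated.jpg")   # any other photo

# Simulate what happens to a shared copy: downscaled, re-encoded as JPEG.
buf = io.BytesIO()
photo.resize((photo.width // 2, photo.height // 2)).save(buf, "JPEG", quality=60)
buf.seek(0)
copy = Image.open(buf)

h = imagehash.phash(photo)
print("degraded copy:  ", h - imagehash.phash(copy))   # a few bits: match
print("unrelated photo:", h - imagehash.phash(other))  # ~half the bits: no match
```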
Is there some way of verifying that the fingerprints in this database will never match sensitive documents on their way from a whistleblower to journalists, or anything else that isn't strictly illegal? How will this tech be repurposed over time once it's in place?
You seem to be suggesting that the AI will go directly from scanning your photos for incriminating fingerprints to reporting you to journalists.
I have to assume humans are involved at some point before journalists are notified. The false positive will be cleared up and no reputations sullied (except perhaps the reputation of using AI to scan for digital fingerprints).
>The false positive will be cleared up and no reputations sullied...
This is dangerously naive. The US justice system alone will hound people on trumped-up charges, pressure them to accept a plea deal, and have them sign a bogus confession. Parallel construction is a real practice. Additionally, if you can't audit the database (and I'd bet very few people can, including your senator), how do you know a hash of something that isn't CP wasn't inserted into it? This entire system screams ready-made for government overreach. It's worse than normal, since there'll be no public evidence when it's abused.
The other way around. If the database of fingerprints is unauditable, and especially if the database varies from country to country, then it would be very easy to add fingerprints for classified documents, or photos documenting known war crimes, or even just copyrighted stuff to close the so-called analog hole.
Documents could also be engineered to trigger false positives, making it difficult or impossible for a corporate whistleblower to photograph incriminating evidence to deliver to the authorities.
So, if the rumors are true and every iPhone will check every photo against an opaque database of perceptual fingerprints, what safeguards exist (beyond "trust us" from the database keepers) to prevent abuse of the feature to suppress evidence and control the flow of information, and which organizations or governments will have control over the contents of the database? As always, who watches the watchers?
> While the difference between innocent images and something explicit is easy for a human to identify, I'm not sure I'd trust AI to understand that nuance.
I recall a story from several years ago where someone was getting film developed at a local drugstore, and an employee reported them for CP because of bath photos. This was definitely a thing before computers, with normal, everyday humans.
I don't have knowledge of how Apple is using this, but based on what I know about how it's used at Google, this would be flagging previously reviewed images. That wouldn't include your family photos; the flags are generally hash-type matches of images circulating online. The images would need to depict actual abuse of a child to be CSAM.
You would only be flagged if the photos' hashes were added to the database as part of some investigation, right? So you would only have to fear for your criminal record in the event that an actual criminal got hold of your (indecent, in their hands!) photographs. In which eventuality you might be glad, relative to the leak going undiscovered, that they'd been found and the criminal arrested, assuming your good name could be cleared.
Just playing devil's advocate; my gut (and, I think, even my considered) reaction is in alignment with surely just about the whole tech industry's: it's over-reach (if they're not public images).
Look at all the recent findings that have come to light regarding ShotSpotter law enforcement abuse. [1] These systems, along with other image and object recognition projects, are rife with false positives, bias, and garbage-in-garbage-out. They should in no way be considered trustworthy for criminal accusations, let alone arrests.
As mentioned in the Twitter thread, how do image hashing and recognition tools such as PhotoDNA handle adversarial attacks? [2][3]
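To give a feel for the evasion half of that question, using the same off-the-shelf imagehash library as above (file name again a placeholder): edits that leave a picture perfectly recognisable to a human typically push a simple difference hash well past any sane match threshold:

```python
import imagehash
from PIL import Image, ImageOps

original = Image.open("photo.jpg")  # any test image
w, h = original.size
base = imagehash.dhash(original)

edits = {
    "resized 50%":    original.resize((w // 2, h // 2)),
    "cropped 4% off": original.crop((int(w * 0.04), 0, w, h)),
    "mirrored":       ImageOps.mirror(original),
}
for name, img in edits.items():
    # The resize usually stays within a bit or two of the original, while
    # the crop and mirror typically land far past a single-digit threshold.
    print(f"{name}: {base - imagehash.dhash(img)} bits differ")
```

The collision half, crafting an innocent-looking image that matches a blocklisted hash, is the direction the engineered-false-positive worry upthread points at.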
Just as being banned from one social media platform for bad behavior pushes people to a different social media platform, this might very well push exactly the wrong sort of people from iOS to Android.
If Android then implements something similar, they have the option to simply run different software, as Android lets you run whatever you want so long as you sign the waiver.
"You're using Android?! What do you have to hide?"
-- Apple ad in 2030, possibly
I'm the person you're responding to, and I think so? My contract was on data that wasn't surveilled, it was willingly supplied in bad faith. Fake names, etc. And there was cause / outside evidence to look into it. I can't really go into more details than that, but it wasn't for an intelligence agency. It was for another party that wanted to hand something over to the police after they found out what was happening.
I see. I was responding to you, yes. And in this case I was more curious about your opinion - based on your previous knowledge - on the viability of Apple’s technology here, rather than the specific details of your work.
In my (uninformed) opinion, this looks like more of a bad-faith move on Apple's part that will maybe catch some bad actors but will be a net harm for Apple's users and society, as expressed in the Twitter thread.
Others who responded here, though, also seem to think it'll be a viable technique.