I take issue with most of the alarmism about this CSAM scanning. Not because I think our devices scanning our content is okay, but because of the implication that there's now a slippery slope that didn't exist before. For example, from the article:
> While today it has been purpose-built for CSAM, and it can be deactivated simply by shutting off iCloud Photo Library syncing, it still feels like a line has been crossed
Two simple facts:
(1) The system, as described today, isn't any more invasive than the existing CSAM scanning technologies that run on all major cloud storage systems (arguably it's less invasive - no external system looks at your photos unless your phone flags enough of your photos as CSAM, which brings them to a manual review stage; a rough sketch of this flow follows these two points)
(2) Auto-updating root-level proprietary software can be updated to any level of invasion of privacy at any time for any reason the provider wishes. We aren't any closer to full-invasion-of-privacy with iPhone than we were before; it is and always has been one single update away. In fact, we don't know if it's already there on iPhone or any other proprietary system such as Windows, Chromebook, etc. Who knows what backdoors exist on these systems?
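Here's that rough sketch of the flow in (1), as I understand it from Apple's description. It's purely illustrative Python with invented names and a made-up threshold; the real system uses perceptual hashes, a blinded hash database, and cryptographic threshold secret sharing, none of which are modeled here:

```python
# Illustrative only: invented names, made-up threshold, no crypto.
from dataclasses import dataclass, field

KNOWN_CSAM_HASHES = {"known-hash-1", "known-hash-2"}  # stand-in for the hash database
REVIEW_THRESHOLD = 30  # made-up number of matches before anything is human-reviewed

@dataclass
class UploadSession:
    matched_photo_ids: list = field(default_factory=list)

    def scan_before_upload(self, photo_id: str, photo_hash: str) -> None:
        # Runs on-device, and only for photos headed to iCloud Photo Library.
        if photo_hash in KNOWN_CSAM_HASHES:
            self.matched_photo_ids.append(photo_id)

    def eligible_for_manual_review(self) -> bool:
        # Below the threshold, no external system looks at anything.
        return len(self.matched_photo_ids) >= REVIEW_THRESHOLD
```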
If you truly believe that you need full control and a system you fully trust, don't get a device that runs proprietary software. If you're okay with a device that isn't fully trustworthy, but appears to be benevolent, then iPhone isn't any worse than it was a month ago.
Until there's evidence otherwise, iPhone will continue to be as trustworthy as any other proprietary closed-source system. If you need more than that, please contribute to projects that aim to produce a modern, functional FOSS smartphone.
I agree with (2). But I do think there is a fundamental (human, psychological, emotional) difference between a tool running on a server and one running on my device.
The mental and ethical lines are clear when I hand content to a server; that's a contract I can understand and agree to. When it runs on my device, even if it's only because of the option to send that content to the server, it feels fuzzier. As Ben Thompson put it, it's the difference between Capability and Policy.
This is now rubbing our faces in the fact that the privacy is only in the policy, not in the capabilities of the software. And the capabilities of proprietary software are pretty knowable, through a combination of inspection and long-term observation.
The end result isn't any different today, but it highlights the political position that Apple and others are being put in. I honestly don't fault Apple for this on anything except their clearly poor messaging, but they are getting pulled from all sides by governments who don't like the idea of them actually implementing a fully closed and end-to-end encrypted system for user data.
> But I do think there is a fundamental (human, psychological, emotional) difference between a tool running on a server and one running on my device.
Strongly agree. A law requiring police cameras in rented trucks is an invasion of privacy and a loss of rights for the populace, but it's a whole lot better than having those cameras in your home.
But the sad truth is that handheld devices are actually more like rented tools than they are private homes. Apple devices are windows into SaaSpace; their local storage and compute capabilities are merely implementation details. They aren't anything like the PCs of yore.
> But I do think there is a fundamental (human, psychological, emotional) difference between a tool running on a server and one running on my device.
I agree, but it's frustrating to me because it's an illusion: if your device is running proprietary software, you don't control the device. At that point there's little difference between your device and a server in the cloud, aside from technical characteristics like latency and compute power.
I think Apple is getting so much heat for this because they have at least maintained an outward appearance of separation between device and cloud for so long.
Some of this was just due to being further behind on the SaaS ramp than MS and Google. Everyone expects a Google-services enabled Android device to basically be a cloud thin client, but less so an iPhone. And Apple has actually stood up for encryption and not putting in openly exploitable backdoor capabilities just to appease law enforcement, see their back and forth with the FBI a couple of years ago.
Apple seems to be coming up with awkward solutions as they try to do the "right thing" in some places (more end-to-end encryption) while governments and law enforcement don't want hidden data because of "the children", terrorism, etc. They announced this proudly as though they had finally found the holy grail to negotiate this quagmire, but the messaging did not go over at all the way they seemed to expect.
I don't think there's a path forward for fully open and non-proprietary software that everyone uses. It hasn't happened on the desktop so I doubt it will on mobile devices. And most people want their devices to be cloud terminals, not sovereign system states.
So I think it comes back down to a legal and political issue of what requirements governments put on these tech companies, how data and privacy protections are guaranteed (or forced to be broken), and how that gets enforced.
Yeah, by even using the device at all, you already agreed to a very long EULA which includes provisions for whatever the vendor of the product wants to do. So, like you say, there's really no reason to have a shred of trust in proprietary software where you can never see for yourself what backdoors (etc.) might be there.
> The mental and ethical lines are clear when I hand content to a server; that's a contract I can understand and agree to.
This system only runs on photos you are going to upload to iCloud Photo Library. If you don't enable that, no picture is scanned.
In my view this is exactly the same as before, when this scan was running on the server.
I'm also surprised that this is the thing that gets people up in arms about iPhones. I mean, you can't even write a legitimate application for yourself and run it on your own phone without jumping through insane hoops. Not to mention that you can't buy an iPhone and decide to change the OS or do whatever you like with the hardware, even if Apple stops supporting it.
I have no issue with people who give up very, very basic user freedoms because they want to opt into the iOS "ecosystem", but I don't see how this particular thing would push anyone off the ledge.
Simple: the line that was crossed is scanning your local files versus scanning cloud files (even though they promise this system is disabled if you disable iCloud Photos).
Once this system is in place, it takes only 1 tiny adjustment/exploit to scan other private stuff on your devices, which you would never upload to a cloud.
> it takes only 1 tiny adjustment/exploit to scan other private stuff on your devices
This has always been true, there's no "Once this system is in place" qualifier necessary. This is the reality of running someone else's proprietary code at the root level on your device.
I respectfully disagree :) That's black-and-white thinking; by that reasoning, any software you haven't inspected yourself that has some automated update system can be considered unsafe.
The world is not just black and white...
> scanning your local files versus scanning cloud files
Correction: it scans cloud-destined files, as a step in the upload pipeline. Local files not destined for the cloud are not scanned. The scan happens before the network stack in the pipeline, rather than after, but the file is already in the pipeline, on its way, at the point of scan (at least that's my understanding of their docs).
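If it helps, here's how I picture the placement. Again, this is an invented sketch rather than Apple's actual code; the point is only that the scan is a stage of the upload path, not a sweep of local storage:

```python
def scan_against_known_hashes(photo):
    # Placeholder for the on-device matching step; details elided.
    return {"photo": photo, "match": False}

def send_to_icloud(photo, voucher):
    # Placeholder for the actual network upload.
    return f"uploaded {photo}"

def upload_pipeline(photo, icloud_photos_enabled: bool):
    if not icloud_photos_enabled:
        # The photo never enters this pipeline, so it is never scanned.
        return None
    voucher = scan_against_known_hashes(photo)  # on-device, before any networking
    return send_to_icloud(photo, voucher)       # the network stack comes after the scan
```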
I agree with you, but you're painting the alarmists with a broad brush. There's room for those who trust Apple with their data, are fine with the implications of server-side data storage, and are not fine with their devices performing the scan on offline data.
> There's room for those who trust Apple with their data, are fine with the implications of server-side data storage, and are not fine with their devices performing the scan on offline data.
Right, my point is that there is not yet any evidence of the latter (scanning of offline data) occurring, and we aren't "closer" to that being a reality than we were before: it is and always has been one software update push away from being a reality.
It's not so much that the slippery slope didn't exist before — Apple has just greased it.
> we aren't "closer" to that being a reality than we were before
I want to believe this but it feels very much like we are. And I don't think it's purely psychological or alarmist.
> it is and always has been one software update push away from being a reality
As a software engineer it's hard to shake the feeling that we're now just a patch release away from misuse, whereas last year we were a major update away.
It's still only “one software update push away” either way if you're counting releases, but it's not if you're counting tickets in a product backlog. It's easier for governments to expand an existing feature than to pressure companies to build one from scratch. Apple already built the feature security services have dreamed of, even though it had no legal obligation to (it has to report CSAM it finds; it does not have to actively search for it on your device).
It's also easier to normalise the use of client-side scanning when one company has shipped it (without talking to the rest of the tech industry, and after declining invitations to talk to Cyber Policy/Internet Observatory teams at Stanford who are trying to help the industry as a whole with an improved collective approach). That pressure and the likely additional implementations of client-side scanning we'll see further expand the potential for abuse.
I really enjoyed the Stanford discussion available on YouTube[1]:
> @8:07
> …the other issue being that it wouldn't be terribly difficult to expand this client-side system to do all of the user's photos so we kind of have to trust the software to only apply this to things that are backed up to iCloud.
> @17:24
> I think it's inevitable that some government — quite possibly the U.S. — is going to want to expand this to counter terrorism and, well, I think for a lot of people that sounds reasonable. The problem is that the definitions of terrorism are somewhat malleable…
None of this feels alarmist to me. Yes, proprietary software is only ever an update away from throwing us all down the slope. But the slope is not a pure binary thing that is or isn't — companies can increase the gradient by decreasing the work it would take to hurt us, and that's what Apple has done here.
> As a software engineer it's hard to shake the feeling that we're now just a patch release away from misuse, whereas last year we were a major update away.
I don't agree. Putting aside the "NeuralHash" algorithm that Apple made to try and match CSAM, scanning the files on a device is remarkably simple, and something Apple already does as part of your device's standard functioning (indexing files, attaching labels to pictures based on content detected by AI models, etc).
Apple could have already implemented a secret image tag for "terrorist material" or "drug material" that is attached automatically to images, hidden from the user, and phoned home / reported to the FBI when a threshold is met. How would you know this system doesn't already exist? Literally all the components for this system were already in place.
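To show how little glue that would take, here is a purely hypothetical sketch. Every name in it is invented, and I'm not claiming any vendor ships anything like it; it only illustrates composing an on-device classifier (like the one that already powers photo search) with a hidden label set and a reporting threshold:

```python
# Hypothetical only: no evidence exists that such a system is deployed anywhere.
HIDDEN_LABELS = {"hypothetical-label-a", "hypothetical-label-b"}
REPORT_THRESHOLD = 5  # arbitrary

def classify(image_path: str) -> set:
    # Stand-in for the existing on-device ML labeling.
    return set()

def should_phone_home(image_paths: list) -> bool:
    hits = sum(1 for path in image_paths if classify(path) & HIDDEN_LABELS)
    return hits >= REPORT_THRESHOLD  # True would trigger the imagined reporting
```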
It would be great to have information about how much work this took. The only thing I've seen mentioning timescale was the leaked Apple/NCMEC memo:
> Today marks the official public unveiling of Expanded Protections for Children, and I wanted to take a moment to thank each and every one of you for all of your hard work over the last few years.
So this doesn't sound like a feature where all the puzzle pieces were in place and they just needed NeuralHash.
Even if you casually “put aside” NeuralHash as you suggest, the amount of research and testing that has happened to ship the system they've described is not trivial.
I stand by the idea that this was never a point release away.
> Apple could have already implemented a secret image tag for "terrorist material" or "drug material" that is attached automatically to images
Apple cannot even consistently tag cats and dogs. There is no way it was ready to ship a feature that tags drug or terrorist material in a way that generates few enough false positives that agencies won't just turn it off.
I do agree with you that we have no way to know what's running on closed-source devices (or even open-source ones, unless we personally audit the whole process from dust to device).
For me, though, “you can't ever really know what's running on your device so why care about contentious new things you've just learned will definitely be running on your device” is not compelling.
I might swallow 10 spiders a year in my sleep, but if someone offers to feed me one I think it's fair to decline.
Go ahead and take issue. You missed the part about them turning your own device against you and normalizing it as "oh, it's just a little thing". This is huge and you shouldn't downplay it. Phones are at the center of our digital lives and this lets a government proxy in (first steps, likely to be followed by many more). I am quite glad that people are "overreacting" to it. It seems like you want us to just eat our grass like sheep. I will keep raising issues with it no matter how many times people say "alarmist" or "must be a pedo" or whatever is the insult of the day.
It isn't necessary to believe in a slippery slope to be very alarmed at this.
Their 1-in-a-trillion estimate of false positives is very likely total horseshit (there was a recent article from someone familiar with CSAM dissecting that claim).
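For a sense of why that number is hard to take at face value, here's a back-of-the-envelope calculation with entirely made-up parameters (none of these are Apple's published figures). The point is that the account-level estimate is extremely sensitive to the assumed per-image false match rate:

```python
from math import comb, log10

p = 1e-6    # assumed per-image false match probability (made up)
n = 20_000  # assumed photos uploaded by one account (made up)
t = 30      # assumed match threshold before human review (made up)

# With n*p far below t, the binomial tail P(matches >= t) is dominated by its
# first term, so P(exactly t false matches) gives an order-of-magnitude estimate.
estimate = comb(n, t) * p**t * (1 - p) ** (n - t)
print(f"roughly 1e{log10(estimate):.0f} per account under these assumptions")
# Multiply p by 100 and this leading term grows by a factor of ~100**t,
# which is why the assumed per-image rate matters so much.
```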
It also opens up a "swatting" avenue: if someone hacks your phone, they can upload CSAM images and Apple will sic the authorities on you. Good luck with that.
Really we need to bring back comp.risks and start training ourselves to think a bit more cynically about the downsides of technology.
Agreed, and I'll add that the existing cloud systems have been running for a long time and have yet to slip into the various dystopian hypotheticals I've seen tossed around. Google isn't scanning Drive for documents containing anti-government sentiments. I'd be very interested in evidence that suggests otherwise.