That analogy misses the asymmetry in claims and power.
Microsoft does not sell Windows as a sealed, uncompromisable appliance. It assumes a hostile environment, acknowledges malware exists, and provides users and third parties with inspection, detection, and remediation tools. Compromise is part of the model.
Apple’s model is the opposite. iOS is explicitly marketed as secure because it forbids inspection, sideloading, and user control. The promise is not “we reduce risk”, it’s “this class of risk is structurally eliminated”. That makes omissions meaningful.
So when a document titled Apple Platform Security avoids acknowledging Pegasus-class attacks at all, it isn’t comparable to Microsoft not listing every Windows virus. These are not hypothetical threats. They are documented, deployed, and explicitly designed to bypass the very mechanisms Apple presents as definitive.
If Apple believes this class of attack is no longer viable, that’s worth stating. If it remains viable, that also matters, because users have no independent way to assess compromise. A vague notification that Apple “suspects” something, with no tooling or verification path, is not equivalent to a transparent security model.
The issue is not that Apple failed to enumerate exploits. It’s that the platform’s credibility rests on an absolute security narrative while quietly excluding the one threat model that contradicts it. In other words, Apple's model is good old security by obscurity.
I am not sure if you missed my earlier comment, but it's directly applicable to this point you've repeatedly made:
>If Apple believes this class of attack is no longer viable, that’s worth stating.
To say it more directly this time: they do explicitly speak to this class of attack in the keynote that I linked you to in my previous comment. It's a very interesting talk and I encourage you to watch it:
In some random YouTube video that consists mostly of waffle and meaningless claims like "95% of issues are architecturally prevented by SPTM"? That's quite a neat, round number. Come on, dude.
It’s not “a weakness.” It’s many weaknesses chained together to make an exploit. Apple patches these as they are found. NSO then tries to find new ones to make new exploits.
Apple lists the security fixes in every update they release, so if you want to know what they’ve fixed, just read those. Known weaknesses get fixed. Software like Pegasus operates either by using known vulnerabilities on unpatched OSes, or by using secret ones on up-to-date OSes. When those secret ones get discovered, they’re fixed.