Private keys from secure elements leak all the time. There will be a flawed implementation that someone exploits, an insider will smuggle a key out, and so on.
This is why true zero-knowledge systems for this sort of thing aren't practical and never will be: a SINGLE leak breaks the scheme, and there is no way to even detect it.
The attestation systems you reference don't even allow true zero knowledge attestation, they involve a trusted intermediary to convert your burned-in private key to a temporary key which you use for attestation with a third party.
And the temporary key isn't even a product of a blind signature. And it's rate limited. So if a service selling these temporary keys shows up they will be able to easily trace it to the burned-in key responsible - then revoke it and if possible initiate legal action.
This also means that whenever you register with a service using one of these schemes you are registering with your real identity; it's only a question of how hard it is and how many parties need to collude to extract it.
And in the event that they really do blindly sign tokens generated on your device, then their scheme will not survive adoption. As it gets adopted, the value of these blind signatures will rise and services that sell them will pop up. There will be no way of tracing the sold blind signature to the compromised/colluding device and rate limiting will merely necessitate a farm of such devices as opposed to a single leaked key.
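As a toy illustration of why a blind signature can't be traced back to the device that obtained it, here is textbook RSA blinding with tiny, insecure parameters (all numbers illustrative): the signer signs a blinded value it can never link to the final message/signature pair.

```python
# Textbook RSA blind signature sketch (INSECURE toy parameters, for
# illustration only). The signer computes s_blind without ever seeing the
# message m, yet the unblinded signature s verifies against m.
p, q = 1000003, 1000033             # toy primes; real RSA uses ~1024-bit primes
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))   # signer's private exponent

m = 123456                          # stands in for a hashed token
r = 987654321 % n                   # user's blinding factor, coprime to n

blinded = (m * pow(r, e, n)) % n    # what the signer actually sees
s_blind = pow(blinded, d, n)        # signer signs blindly
s = (s_blind * pow(r, -1, n)) % n   # user strips the blinding factor

assert pow(s, e, n) == m            # valid signature on m, untraceable by signer
```

The signer's transcript contains only `blinded` and `s_blind`, both uniformly re-randomized by `r`, which is why a sold signature reveals nothing about which device requested it.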
How feasible would it be to integrate a neural video codec into the SoC/GPU silicon?
There would be model size constraints, and the question is what quality such a codec could achieve under those constraints.
It would be interesting if it no longer made sense to develop traditional video codecs at all.
The current video<->latents networks (part of the generative AI model for video) don't optimize just for compression. And you probably wouldn't want variable size input in an actual video codec anyway.
You don't even need active measures. If a publisher is serious about tracing traitors there are algorithms for that (which are used by streamers to trace pirates). It's called "Traitor Tracing" in the literature. The idea is to embed watermarks following a specific pattern that would point to a traitor or even a coalition of traitors acting in concert.
It would be challenging to do with text, but is certainly doable with images - and articles contain those.
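A minimal sketch of the idea (names and marking scheme illustrative; real traitor-tracing schemes such as Tardos codes also survive coalitions that diff their copies, which this toy does not):

```python
import hashlib

# Toy traitor tracing: each subscriber's copy carries a distinct watermark
# bit pattern derived from their id; the pattern found in a leaked copy
# identifies the leaker.
def watermark(user_id: str, k: int = 32) -> list[int]:
    # Derive a per-subscriber pseudorandom bit pattern (illustrative scheme).
    h = hashlib.sha3_256(user_id.encode()).digest()
    return [(h[i // 8] >> (i % 8)) & 1 for i in range(k)]

subscribers = ["alice", "bob", "carol"]
marked_copies = {u: watermark(u) for u in subscribers}

leaked = marked_copies["bob"]       # pattern extracted from a copy in the wild
traced = [u for u in subscribers if marked_copies[u] == leaked]
assert traced == ["bob"]
```

In a real deployment the bits would be steganographically embedded in images or layout rather than stored alongside the copy.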
If they use paid accounts I would expect them to strip info automatically. An "obvious" way to do that is to diff the output from two separate accounts on separate hardware connecting from separate regions. Streaming services commonly employ per-session randomized steganographic watermarks to thwart such tactics. Thus we should expect major publishers to do so as well.
At which point we still lack a satisfactory answer to the question. Just how is archive.today reliably bypassing paywalls on short notice? If it's via paid accounts, you would expect them to burn accounts at an unsustainable rate.
Performance like that may open the door to the strategy of brute-forcing solutions to problems for which you have a verifier (problems such as decompilation).
There are apparently cases of psyllium leading to weird allergic reactions. IIRC this is frequently reported in caretakers who prepare psyllium for elderly patients, presumably because they end up inhaling some of the stuff?
Cryptographic operations when done correctly result in full chaos within the discrete domain (the so-called avalanche effect). Any bias of any kind gives rise to a distinguisher and the primitive is regarded as broken.
One way to imagine what symmetric cryptography does is a cellular automaton that is completely shuffled every iteration. In the case of Keccak/SHA3, that is almost exactly what happens too.
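The avalanche effect is easy to observe directly: flipping a single input bit of SHA3-256 flips roughly half of the 256 output bits.

```python
import hashlib

# Avalanche-effect demo: one flipped input bit should change ~128 of the
# 256 output bits of SHA3-256.
m1 = b"example message"
m2 = bytearray(m1)
m2[0] ^= 0x01                       # flip a single input bit

h1 = hashlib.sha3_256(m1).digest()
h2 = hashlib.sha3_256(bytes(m2)).digest()

# Hamming distance between the two digests
flipped = sum(bin(a ^ b).count("1") for a, b in zip(h1, h2))
print(flipped, "of 256 output bits differ")
```

Any statistically significant deviation from ~128 across many such pairs would itself constitute a distinguisher.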
There have been a very large number of CSPRNGs for which there does not exist any known practical method to distinguish them from TRNGs.
For all of them there are theoretical methods that can distinguish a sequence generated by them from a random sequence, but all such methods require an impossible amount of work.
For instance, a known distinguisher for the Keccak function that is used inside SHA-3 requires an amount of work over 2^1500 (which was notable because it was an improvement over a naive method that would have required an amount of work of 2^1600).
This is such a ridiculously large number in comparison with the size and age of the known Universe that it is essentially certain nobody will ever run such a test and find a positive result.
There are a lot of other such CSPRNGs for which the best known distinguishers require a work of over 2^100, or 2^200, or even 2^500, and for those it is also pretty certain that no practical tests will find statistical defects.
There are a lot of CSPRNGs that could not be distinguished from TRNGs even by using hypothetical quantum computers.
Even many of the pretty bad cryptographic PRNGs, which today are considered broken according to their original definitions, can be made impossible to distinguish from TRNGs by just increasing the number of iterations in their mixing functions. This is not done because later more efficient mixing functions have been designed, which achieve better mixing with less work.
Does there exist a single MCMC example that performs poorer when fed by a CSPRNG (with any fixed seed, including all zeroes; no state reuse within the simulation) as opposed to any other RNG source?
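For what it's worth, a quick sketch of the experiment: a tiny Metropolis sampler targeting a standard normal, driven by SHA3-256 in counter mode as the CSPRNG (seed, proposal width, and sample counts are all arbitrary choices). The sample mean and variance should land near 0 and 1.

```python
import hashlib
import math

class SHA3RNG:
    """Counter-mode SHA3-256 used as a deterministic CSPRNG stream."""
    def __init__(self, seed: bytes):
        self.seed, self.ctr, self.buf = seed, 0, b""

    def u01(self) -> float:
        # Refill the buffer 32 bytes at a time, emit 64-bit uniforms.
        if len(self.buf) < 8:
            block = self.seed + self.ctr.to_bytes(8, "big")
            self.buf += hashlib.sha3_256(block).digest()
            self.ctr += 1
        v, self.buf = int.from_bytes(self.buf[:8], "big"), self.buf[8:]
        return v / 2**64

rng = SHA3RNG(b"mcmc-demo")
x, samples = 0.0, []
for i in range(30_000):
    prop = x + (rng.u01() - 0.5) * 2.0          # uniform random-walk proposal
    # Metropolis acceptance for a standard normal target density
    if rng.u01() < math.exp(min(0.0, (x * x - prop * prop) / 2.0)):
        x = prop
    if i >= 5_000:                               # discard burn-in
        samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Swapping `SHA3RNG` for any other reasonable generator (with the same seed discipline) should leave the estimated moments statistically indistinguishable.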
You can roll a script to do this, something that would consume a mic from Pipewire when triggered and then push results to clipboard. With a Parakeet ONNX model in between.
I had cause to do the opposite: Hotkey -> clipboard TTS
I would not recommend that one trust a secure enclave with full disk encryption (FDE). This is what you are doing when your password/PIN/fingerprint can't contain sufficient entropy to derive a secure encryption key.
The problem with low-entropy security measures arises because the low-entropy input merely instructs the secure enclave (TEE) to release/use the actual high-entropy key. So the key must be stored physically (e.g. as voltage levels) somewhere in the device.
It's a similar story when the device is locked, on most computers the RAM isn't even encrypted so a locked computer is no major obstacle to an adversary. On devices where RAM is encrypted the encryption key is also stored somewhere - if only while the device is powered on.
RAM encryption doesn't prevent DMA attacks, and performing a DMA attack is quite trivial as long as the machine is running. Secure enclaves do prevent those and they're a good solution. If implemented correctly, they have no downsides. I'm not referring to TPMs due to their inherent flaws; I'm talking about SoC crypto engines like those found in Apple's M series or Intel's latest Panther Lake lineup. They prevent DMA attacks and side-channel vulnerabilities. True, I wouldn't trust any secure enclave never to be breached - that's an impossible promise to make, even though breaking one would require a nation-state-level attack - but even this concern can be easily addressed by making the final encryption key depend on both software key derivation and the secret stored within the enclave.
I recommend reading the AES-XTS spec, in particular the “tweak”. Or for AES-GCM look at how IV works.
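A toy sketch of what a tweak buys you (this uses a keyed-hash keystream as a stand-in for the tweaked block cipher, so it is NOT real AES-XTS; key and sector numbers are illustrative): the sector number feeds into encryption, so identical plaintext blocks encrypt differently at different disk positions.

```python
import hashlib
import hmac

# Stand-in for the device key; illustrative value only.
KEY = b"\x11" * 32

def keystream(sector: int, length: int) -> bytes:
    # Keyed PRF of (sector, counter) -- the sector plays the role of the
    # XTS tweak, binding ciphertext to its disk position.
    out, counter = b"", 0
    while len(out) < length:
        msg = sector.to_bytes(8, "big") + counter.to_bytes(8, "big")
        out += hmac.new(KEY, msg, hashlib.sha256).digest()
        counter += 1
    return out[:length]

def xts_like(data: bytes, sector: int) -> bytes:
    # XOR with a per-sector keystream; applying it twice decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(sector, len(data))))

block = b"identical plaintext block"
c0 = xts_like(block, 0)
c7 = xts_like(block, 7)
assert c0 != c7                     # same plaintext, different sectors
assert xts_like(c7, 7) == block     # round-trips
```

Without the tweak, equal sectors would produce equal ciphertexts and leak disk structure, which is the problem the XTS spec's tweak (and GCM's IV) exists to solve.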
I also recommend looking up PUF and how modern systems use it in conjunction with user-provided secrets to derive keys - a password or fingerprint is one of many inputs into a KDF to get the final keys.
The high level idea is that the key that's being used for encryption is derived from a very well randomized and protected device-unique secret setup at manufacturing time. Your password/fingerprint/whatever are just adding a little extra entropy to that already cryptographically sound seed.
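A sketch of that derivation using HKDF (RFC 5869) built from the stdlib; `device_secret` stands in for the PUF/enclave-held, manufacturing-time seed, and all names and salts are illustrative.

```python
import hashlib
import hmac

def hkdf(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """HKDF (RFC 5869): extract a PRK from the input keying material,
    then expand it into the requested number of output bytes."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()          # extract
    out, t, i = b"", b"", 1
    while len(out) < length:                                     # expand
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        out += t
        i += 1
    return out[:length]

device_secret = bytes(32)           # placeholder for the high-entropy seed
fde_key = hkdf(device_secret + b"hunter2", b"fde-salt", b"disk-key")
other   = hkdf(device_secret + b"letmein", b"fde-salt", b"disk-key")
assert len(fde_key) == 32 and fde_key != other
```

The user secret is mixed in as extra input keying material, so changing the password changes the disk key without the device seed ever leaving the enclave.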
Tl;dr this is a well solved problem on modern security designs.
> I recommend reading the AES-XTS spec, in particular the “tweak”. Or for AES-GCM look at how IV works.
What does this have to do with anything? Tweakable block ciphers, or XTS which converts a block cipher into a tweakable one, operate with an actualized key - the entropy has long been turned into a key.
> Your password/fingerprint/whatever are just adding a little extra entropy to that already cryptographically sound seed.
Correct. The "cryptographically sound seed" however is stored inside the secure enclave for anyone with the capability to extract. Which is the issue I referenced.
And if what you add to the KDF is just a minuscule amount of entropy you may as well have added nothing at all - they perform the addition for the subset of users that actually use high entropy passwords and because it can't hurt. I don't think anyone adds fingerprint entropy though.
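To make the point concrete, here is what an attacker who has extracted the seed does against a 4-digit PIN (the seed, PIN, and iteration count are all illustrative; real systems use far more KDF iterations, which barely helps at this keyspace size):

```python
import hashlib

# Assumed-extracted enclave seed (illustrative value).
extracted_seed = bytes.fromhex("aa" * 32)

def derive_key(pin: str) -> bytes:
    # Iteration count kept low so the demo runs quickly; real systems use
    # far more, which only multiplies a trivially small search.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), extracted_seed, 1_000)

victim_key = derive_key("4821")     # the victim's actual PIN

# A 4-digit PIN is ~13 bits of entropy: enumerate the whole keyspace.
recovered = next(p for p in (f"{i:04d}" for i in range(10_000))
                 if derive_key(p) == victim_key)
assert recovered == "4821"
```

With the seed in hand, the PIN's contribution to the KDF is the only remaining unknown, and it falls in seconds.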
> The "cryptographically sound seed" however is stored inside the secure enclave for anyone with the capability to extract.
Sorry, I'm not sure I follow here. Is anyone believed to have the capability to extract keys from the SE?
The secure enclave (or any Root of Trust) does not allow direct access to keys; it keeps the keys locked away internally and uses them at your request to perform crypto operations. You never get direct access to the keys. The keys used are protected by using IVs, tweaks, or similar as inputs during cryptographic operations, so that the root keys cannot be derived from the ciphertext, even if the plaintext is controlled by an attacker and they have access to both the plaintext and ciphertext.
Is your concern that the secure enclave in an iPhone is defeatable, and in such a way as to allow key extraction of device-unique seeds it protects?
Do you have any literature or references where this is known to have occurred?
Tone is sometimes hard in text, so I want to be clear, I'm legit asking this, not trying to argue. If there are any known attacks against Apple's SE that allow key extraction, would love to read up on them.
> Is your concern that the secure enclave in an iPhone is defeatable, and in such a way as to allow key extraction of device-unique seeds it protects?
This is a safe assumption to make as the secret bits are sitting in a static location known to anyone with the design documents. Actually getting to them may of course be very challenging.
> Do you have any literature or references where this is known to have occurred?
I'm not aware of any, which isn't surprising given the enormous resources Apple spent on this technology. Random researchers aren't very likely to succeed.
The way bitcoin works is that addresses are hashes of a public key.
This technically allows for an emergency measure in case ECC is broken by a quantum computer:
The [unknown] public key becomes the private key; the signature becomes a ZKP of knowledge of this key. I believe this has been proposed before as well.
The signature sizes are going to be a big problem in this scenario, however; a consensus change may be needed to alleviate this in extremis. And the people who have coins in addresses for which the public keys are known will be screwed, but then that's how everyone will know there is a problem - it's unlikely early cryptographically-relevant quantum computers (CRQC) will be able to front-run bitcoin transactions.
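A minimal sketch of the hash-commitment idea (SHA3 stands in for Bitcoin's actual SHA-256 + RIPEMD-160 so the sketch stays stdlib-only; in the proposed emergency scheme the reveal would be a ZKP of the preimage rather than the plaintext public key shown here):

```python
import hashlib

# The address commits to a public key via a hash. While the public key
# stays unrevealed, a quantum attacker has no ECC key to attack.
pubkey = b"\x02" + bytes(range(32))          # placeholder key bytes
address = hashlib.sha3_256(pubkey).digest()  # what's published on-chain

def verify_spend(claimed_pubkey: bytes, addr: bytes) -> bool:
    # Spending requires exhibiting a preimage of the address; in the
    # post-quantum variant this check would be done inside a ZKP.
    return hashlib.sha3_256(claimed_pubkey).digest() == addr

assert verify_spend(pubkey, address)
assert not verify_spend(b"\x02" + bytes(32), address)
```

The security of unspent, never-revealed addresses then rests on the hash function's preimage resistance, which is only quadratically weakened by Grover's algorithm.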
*Note that Blind Signatures are Zero Knowledge.