> That OMB rule, in turn, defines "consensus" as follows: "general agreement, but not necessarily unanimity, and includes a process for attempting to resolve objections by interested parties, as long as all comments have been fairly considered, each objector is advised of the disposition of his or her objection(s) and the reasons why, and the consensus body members are given an opportunity to change their votes after reviewing the comments".
> IETF consensus does not require that all participants agree although
> this is, of course, preferred. In general, the dominant view of the
> working group shall prevail. (However, it must be noted that
> "dominance" is not to be determined on the basis of volume or
> persistence, but rather a more general sense of agreement.) Consensus
> can be determined by a show of hands, humming, or any other means on
> which the WG agrees (by rough consensus, of course). Note that 51%
> of the working group does not qualify as "rough consensus" and 99% is
> better than rough. It is up to the Chair to determine if rough
> consensus has been reached.
The goal has never been 100%, but it is not enough to merely have a majority opinion.
And to add to that, the blurb you link notes explicitly that for IETF purposes, "rough consensus" is reached when the Chair determines it has been reached.
Yes, but WG chairs are supposed to help. One way to help would have been to do a consensus call on the underlying controversy. Still, I think the chair is in the clear as far as the rules go.
The standard used in the C and C++ committees is essentially a 2-to-1 majority in favor. I'm not aware of any committee where a 3-to-1 majority is insufficient to get an item to pass.
DJB's argument that this isn't good enough would, by itself, be enough for me to route his objections to /dev/null; it's so tedious and snipey that it sours the quality of his other arguments by mere association. And overall, it gives the impression of someone who is more interested in derailing the entire process than in actually trying to craft a good standard.
Standards - especially security-critical ones - shouldn't be a simple popularity contest.
DJB provided lengthy, well-reasoned, and well-sourced arguments against adoption with his "nay" vote. The "aye" votes didn't make a meaningful counter-argument - in most cases they didn't even bother to make any argument at all and merely expressed support.
This means there are clearly unresolved technical issues left - and not just the regular bikeshedding ones. If he'd been the only "nay" vote it might've been something which could be ignored as a mad hatter - but he wasn't. Six other people agreed with him.
Considering the potential conflict of interest, the most prudent approach would be to route the unsubstantiated aye-votes to /dev/null: if you can't explain your vote, how can we be sure your vote hasn't been bought?
So there's a controversial feature added in C2y, named loops, that has spawned many a vociferous argument. Now, I'm a passionate supporter of this feature, for various reasons that I can bring up (and have, in the committee). And I know some people who are against this feature, for various reasons that have been brought up. And at the end of the day, it kind of is a popularity contest, because weighing an argument of "based on my experience, this is going to be confusing for users" against "based on my experience, this is not going to be confusing for users" is just a popularity contest among the voters on the committee, admittedly weighted by how much you trust the various people.
And then there's a third category of person (really, just one person, I think). This person is responsible for the vast majority of the email traffic on the topic. They're always ready with a detailed point-by-point reply to any replies to their posts. And their argument is... um... they don't like the feature. And they so dislike the feature that they're hanging on to any scintilla of a process argument to derail the entire feature, without really being able to convince anybody else of their dislike (or being able to be convinced by any argument to change their mind).
Now I don't have the cryptographic chops to evaluate DJB's arguments myself. But I also haven't seen any support for his arguments from people I'd trust to be able to evaluate them. And the way he's responding at this point reminds me very much of that third category of people, which is adversely affecting his credibility at this point.
The really big difference between named loops and cryptography is that if one gets approved and is bad, a couple new programmers get confused, while with the other, a significant chunk of the internet becomes vulnerable to hacking.
Just because a feature is standardized does not mean it gets implemented. This is actually even more true for cryptography than it is for programming language specifications.
The question at hand is whether the IETF will publish an Informational (i.e., non-standard) document defining pure-MLKEM in TLS or whether people will have to read the Internet-Draft currently associated with the code point.
> Just because a feature is standardized does not mean it gets implemented.
This makes no sense. If you think it actually had a high chance of remaining unimplemented anyway, then why not just concede the point and take it out? It sure looks like you're not fine with leaving it unimplemented, and you're doing this because you want it implemented, no? It makes no sense to die on that hill if you're gonna tell people it might not exist.
Also, how do you just completely ignore the fact that standards have been weakened in the past precisely to achieve their implementation? This isn't a hypothetical he's worried about, it has literally happened. You're just claiming it's false despite history blatantly showing the opposite because... why? Because trust me bro?
> So there's a controversial feature added in C2y, named loops, that has spawned many a vociferous argument. (...it) is just a popularity contest
Thankfully, cryptography design isn't programming language design. What we have here is not, and should not be, a debate or contest over popularity, and the costs of being wrong are enormously different between the two, so you can sleep easy knowing that your experience doesn't extrapolate to the situation at hand.
There was a recent discussion within the C committee over what exactly constituted consensus, owing to a borderline vote that was surprisingly ruled "no consensus" (the crux of the discussion was the difference between a "no" and an "abstain" vote for consensus purposes). The decision was that consensus requires at least ⅔ favor / (favor + against), and at least ¾ (favor + neutral) / (favor + against + neutral). These are now the actual rules of the committee for determining consensus. Similar rules exist for the C++ committee.
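For concreteness, those two thresholds can be sketched as a quick check (a hypothetical helper, not actual committee tooling; the vote counts below are made up):

```python
# Sketch of the two consensus thresholds described above. A motion has
# consensus only if both ratios clear their bars.
from fractions import Fraction

def has_consensus(favor: int, against: int, neutral: int) -> bool:
    if favor + against == 0:
        return False
    # At least 2/3 of the non-neutral votes must be in favor...
    ratio1 = Fraction(favor, favor + against)
    # ...and at least 3/4 of all votes must be favor or neutral.
    ratio2 = Fraction(favor + neutral, favor + against + neutral)
    return ratio1 >= Fraction(2, 3) and ratio2 >= Fraction(3, 4)

print(has_consensus(10, 5, 5))  # True: 10/15 >= 2/3 and 15/20 >= 3/4
print(has_consensus(10, 5, 0))  # False: 10/15 passes 2/3 but fails 3/4
```

Note how the second bar means abstentions actually matter: a 2-to-1 split with no abstentions fails, while the same split with a few abstentions can pass.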
If there is any conflation going on, I am not the one doing it.
“Working groups make decisions through a "rough consensus" process.
IETF consensus does not require that all participants agree although
this is, of course, preferred. In general, the dominant view of the
working group shall prevail. (However, it must be noted that
"dominance" is not to be determined on the basis of volume or
persistence, but rather a more general sense of agreement.) Consensus
can be determined by a show of hands, humming, or any other means on
which the WG agrees (by rough consensus, of course). Note that 51%
of the working group does not qualify as "rough consensus" and 99% is
better than rough. It is up to the Chair to determine if rough
consensus has been reached.”
It's literally the ethos of the IETF going back to (at least) the late 1980s, when this was the primary contrast between IETF standards process vs. the more staid and rigorous OSI process. It's not usefully up for debate.
You may misunderstand how the IETF works. Participation is open. This means that it is possible that people who want the work to fail for their own reasons rather than technical merit can join and attempt to sabotage work.
So consensus by your definition is rarely possible given the structure of the organization itself.
This is why there are rough consensus rules, and why there are processes to proceed with dissent. That is also why you have the ability to temporarily ban people, as you would have with pretty much any well-run open forum.
It is also important to note that the goal of IETF is also to create interoperable protocol standards. That means the work in question is a document describing how to apply ML-KEM to TLS in an interoperable way. It is not a discussion of whether ML-KEM is a potentially risky algorithm.
DJB regularly acts like someone who is attempting to sabotage work. It is clear here that they _are_ attempting to prevent a description of how to use ML-KEM with TLS 1.3 from being published. They regularly resort to personal attacks when they don't get their way, and make arguments that are non-technical in nature (e.g. it is NSA sabotage, and chairs are corrupt agents). And this behavior is self-documented in their blog series.
DJB's behavior is why there are rules for how to address dissent. Unfortunately, after decades DJB still does not seem to realize how self-sabotaging this behavior is.
> the work in question is a document describing how to apply ML-KEM to TLS in an interoperable way. It is not a discussion of whether ML-KEM is a potentially risky algorithm.
In my experience, the average person treats a standard as an acceptable way of doing things. If ML-KEM is a bad thing to do in general, then there should not be a standard for it (because of the aforementioned treatment by the average person).
> It is clear here that they _are_ attempting to prevent a description of how to use ML-KEM with TLS 1.3 from being published.
It's unclear why trying to prevent a bad practice from being standardized is a bad thing. But wait, how do we know whether it's a good or bad practice? Well, we can examine the response to the concerns DJB raised: Whether the responses satisfactorily addressed the concerns, and whether the responses followed the rules and procedures for resolving each of those concerns.
> They regularly resort to personal attacks when they don't get their way
This is certainly unfortunate, but 6 other parties upheld the concerns. DJB is allowed to be a jerk, even allowed to be banned for abusive behavior IMO, however the concerns he initially raised must nonetheless be satisfactorily addressed, even with him banned. Banning somebody is sometimes necessary, but is not an acceptable means of suppressing valid concerns, especially when those concerns are also held by others who are not banned.
> DJB's behavior is why there are rules for how to address dissent.
The issue here seems to be that the bureaucracy might not be following those rules.
(1) in this case, an identity issuer provides the source of truth identity information. Examples include state DMV, your passport (you can try "Id pass" in Google wallet), etc.
(2) One of the goals of this project was to layer ZK on top of current identity standards that DMVs already issue, so that gov orgs don't have to change what they currently do to support the strongest user privacy. One example format is called Mdoc.
(3) The user holds the identity information on their device only. No other copies. The user's device generates the ZK proof on-device. This was one of the major technical challenges.
(4) The relying party (eg a website) runs the zk verification algorithm on the proof that is produced by the device to ensure soundness.
(5) Yes, the user can use any compatible implementation to produce the proof. We have open-sourced our implementation and we have a spec for the proof format that others can also reimplement.
If you can achieve RCE on the chip and run arbitrary code without invalidating signatures, does the protocol still stay secure?
If so, what's the point of requiring your implementation to run on a verified secure element? If not, the protocol seems only as strong as the weakest chip, as obtaining just a single private key from a single chip would let you generate arbitrary proofs.
The role of the secure element is only to "bind" the credential to the device, so that if you copy the credential somewhere else then the credential is useless. Concretely, the secure element produces an ECDSA signature that must be presented together with the credential. This is the normal protocol without ZKP. Physically, the SE is in the phone, but it could be a yubikey or something else.
The ZKP library does not run on the secure element. It runs on the normal CPU and produces a proof that the ECDSA signature from the SE is valid (and that the ECDSA signature from the issuer is valid, and that the credential has not expired, and ...) If you crack the ZKP library, all you are doing is producing an incorrect proof that will not verify.
Am I correctly understanding that I'd get the credential from say my state DMV once, and then later whenever I want to prove my age to a website the proof protocol is just between that website and my device? The DMV gets no information about what websites I use the DMV credential with and they get no information about when I use the credential even if the website and the DMV decide to cooperate? All they would be able to get was that at time T someone used a credential on the site that came from the DMV?
I tried to sketch out a design for an age verification system, but it involved the DMV in each verification, which made timing attacks a problem. Briefly: the website would issue a token, you'd get a blind signature of the token from the DMV's "this person is 18+" service, and return the token and unblinded signature to the website. I think that can be made to work, but if the site and DMV cooperated they would likely be able to unmask many anonymous site users by comparing timing.
Getting the DMV out of the picture once your device is set up with the credential from them nicely eliminates that problem.
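The blind-signature flow sketched above can be illustrated with a textbook RSA blind signature (toy parameters for illustration only; the numbers and variable names are mine, and a real deployment would use a vetted scheme, not textbook RSA):

```python
# Toy RSA blind-signature sketch: the site issues a token, the DMV signs
# it blinded (so it never sees the token), and the site verifies the
# unblinded signature. Textbook-sized numbers; NOT production crypto.
n, e, d = 3233, 17, 2753             # toy RSA key (p=61, q=53); DMV holds d

token = 1234                         # site-issued token, as an integer < n
r = 7                                # user's blinding factor, gcd(r, n) == 1
blinded = (token * pow(r, e, n)) % n # user blinds the token

sig_blinded = pow(blinded, d, n)     # DMV signs without seeing `token`

r_inv = pow(r, -1, n)                # modular inverse of r (Python 3.8+)
sig = (sig_blinded * r_inv) % n      # user unblinds: sig == token^d mod n

assert pow(sig, e, n) == token       # site verifies the 18+ attestation
```

The unblinding works because sig_blinded = token^d · r^(ed) = token^d · r (mod n), so multiplying by r⁻¹ leaves a plain signature on the token. The timing correlation the comment describes remains, though: the DMV still learns *when* it signed.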
You are correct. The property that the colluding website and DMV still cannot identify you is called "unlinkability" and as far as I can tell cannot be achieved without zero-knowledge proofs. See https://github.com/user-attachments/files/15904122/cryptogra... for a discussion on this issue.
However, the timing attack resurfaces once you allow the DMV to revoke credentials. Exactly how the revocation is done matters. We are actively pushing back against solutions that require the DMV to be contacted to verify that the credential has not been revoked at presentation time, but this is a very nuanced discussion with inevitable tradeoffs between privacy and security.
>> The DMV gets no information about what websites I use the DMV credential with and they get no information about when I use the credential even if the website and the DMV decide to cooperate?
> You are correct. The property that the colluding website and DMV still cannot identify you is called "unlinkability" and as far as I can tell cannot be achieved without zero-knowledge proofs.
Well, no. This is true only if you trust the unverifiable wallet software on your phone, which was provided by a for-profit, American big tech advertising company. In this protocol, the wallet may secretly leak the transaction details back to the DMV or whoever else they wish[1].
MatteoFrigo is suggesting that unlinkability requires ZKPs.
Your observation that a bad wallet could compromise unlinkability is not a refutation of that. To refute it you need to show that it is possible to achieve unlinkability without using a ZKP.
One part that I don't understand yet: How does the system ensure "sybil resistance"? (not sure if that's the right term in that context)
By providing both attestation of individual attributes combined with "unlinkability", how would even a single verifying party ensure that different attestations don't come from the same identity?
E.g., in the case of age attestation, a single willing dissenting identity could set up a system to mint attestations for anyone without it being traceable back to them, right? Similar to how a single of-age person could purchase beer for all their underage friends (and without any fear of repercussions).
Great question. The current thinking, at least in high level-of-assurance situations, is this. The identity document is only usable in cooperation with a hardware security element. The relying party picks a random nonce and sends it to the device. The device signs the nonce using the SE, and either sends the signature back to the relying party (in the non-ZKP case), or produces a ZKP that the signature is correct. The SE requires some kind of biometric authentication to work, e.g. fingerprint. So you cannot set up a bot that mints attestations. (All this has nothing to do with ZKP and would work the same way without ZKP.)
In general there is a tradeoff between security and privacy, and different use cases will need to choose where they want to be on this spectrum. Our ZKP library at least makes the privacy end possible.
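A minimal sketch of that nonce flow, using an HMAC as a stand-in for the secure element's ECDSA signature (an assumption for brevity; real devices use an SE-held asymmetric key gated by biometrics, and the key never leaves the chip):

```python
# Challenge-response sketch: the relying party sends a fresh nonce, the
# device's secure element signs it, and a replay of an old attestation
# fails. HMAC stands in for the SE's ECDSA signature here.
import hashlib
import hmac
import secrets

SE_KEY = secrets.token_bytes(32)  # stand-in for the key inside the SE

def se_sign(nonce: bytes) -> bytes:
    # In reality, the SE signs only after biometric unlock.
    return hmac.new(SE_KEY, nonce, hashlib.sha256).digest()

# Relying party picks a fresh random nonce per presentation...
nonce = secrets.token_bytes(16)
attestation = se_sign(nonce)
assert hmac.compare_digest(attestation, se_sign(nonce))

# ...so an attestation minted for some other session does not verify.
stale = se_sign(b"yesterday's nonce")
assert not hmac.compare_digest(stale, se_sign(nonce))
```

The freshness of the nonce is what stops a "minting service": each attestation is bound to one session, so it can't be handed out in bulk without the SE (and its biometric gate) participating every time.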
That seems a bit like a game of whack-a-mole where as long as the forging side is willing to go further and further into out-of-hardware emulation (e.g. prosthetic finger on a robot hand to trick fingerprint scanners), they are bound to win. Biometrics don't feel like they hold up much if you can have collusion without fear of accountability.
> Our ZKP library at least makes the privacy end possible.
Yes, that's also one of the main things that make me excited about it. I've been following the space for quite some time now, and I'm happy that it becomes more tractable for standard cryptographic primitives and thus a lot more use-cases.
Thanks for your contributions to the space and being so responsive in this thread!
No. ZK has a technical definition I don't want to get into, but note that the described system is deterministic and it always produces the same proof for Alice on a given day, and the proof for a later day can be derived from the proof for an earlier day. So two proofs can be linked back to Alice, and thus the system is not ZK. You need some kind of randomness for ZK.
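A toy illustration of that linkability point (hashes as stand-ins; a randomized hash is of course not itself a ZK proof, it just shows why fresh randomness is necessary):

```python
# A deterministic "proof" is linkable: the same credential yields the
# same bytes everywhere, so two sites can match them up.
import hashlib
import secrets

credential = b"alice-mdoc"

def deterministic_proof(day: str) -> bytes:
    return hashlib.sha256(credential + day.encode()).digest()

# Two sites receiving Alice's proof on the same day can link her:
assert deterministic_proof("2024-06-01") == deterministic_proof("2024-06-01")

# A ZK proof must instead be randomized, so two presentations of the
# same credential are indistinguishable from unrelated users':
def randomized_presentation(day: str) -> bytes:
    blinder = secrets.token_bytes(16)
    return hashlib.sha256(credential + day.encode() + blinder).digest()

assert randomized_presentation("2024-06-01") != randomized_presentation("2024-06-01")
```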
Thanks for the reply. So in theory, I could get this MDOC file and store it on my desktop computer, and use an open-source library whose behavior I can verify, to provide the proof to the website via my web browser. Yeah? This sounds good to me.
No. Using the MDOC requires a signature from a hardware security key in the phone, and a lot of the complexity is how to avoid leaking the private key, which would identify you.
An alternative would be some secure chip in a credit-card size plastic document, but nobody seems to like that idea. We (Google) don't make these choices.
Another approach could be for a component in the protocol that I do trust (eg an open source web browser) to serve as an intermediary, providing only the information required to each of the components that I don't trust (wallet, website). The wallet does not need to know who is requesting the proof, right?
I hear you. The main problem is how to prevent you from giving your document to somebody else, and things have converged on certified smartphone with security key plus biometrics.
Yeah, Passkeys are doing the same thing, expecting users to just blindly trust American Big Tech companies. It's distressing that no one working on these protocols considers the developers of the software that implements the protocol to be a party in the protocol. What are the wallet provider's interests in this exchange? How can the user be protected from the wallet provider? Seems no one asks these questions :(
Anyone can implement passkeys. The feature where passkeys can be made to attest to the hardware provider is optional and no site I've used requires it. Firefox defaults to not allowing passkeys to attest to the hardware unless you click through a permission dialog.
I don't want to get into a Passkey derail, but no. The Passkey spec requires clients to handle the user's own data in certain ways, and the Passkey spec authors threaten clients that allow users to manage their own data with client bans.
Are you trying to say that there’s a signed blob called an MDOC, that happens to have the age and name of the user, and this library allows a website to prove that the provided age belongs to the person with the MDOC, but not also see the name?
But to be clear, mdoc already accounts for this through its selective disclosure protocol, without the need for a zero knowledge proof technology. When you share an mdoc you are really just sharing a signed pile of hashes ("mobile security object") and then you can choose which salted pre-images to share along with the pile of hashes. So for example your name and your birth date are two separate data elements and sharing your MSO will share the hashes for both, but you might only choose to share the pre-image representing your birthday, or even a simple boolean claim that you are over 21 years old.
What you don't get with this scheme (and which zero-knowledge proofs can provide) is protection against correlation: if you sign into the same site twice, or sign into different sites, can the site owners recognize that it is the same user? With the design of the core mdoc selective disclosure protocol, the answer is yes.
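A rough sketch of that salted-hash scheme (illustrative Python only; not the actual mdoc/CBOR encoding, and the issuer's signature over the MSO is omitted):

```python
# Selective disclosure via salted hashes: the MSO is a signed pile of
# hashes; the holder reveals only the salted pre-images they choose.
import hashlib
import secrets

def digest(salt: bytes, name: str, value: str) -> bytes:
    return hashlib.sha256(salt + name.encode() + value.encode()).digest()

# Issuer: salt each attribute and sign the pile of hashes.
attrs = {"name": "Alice Example", "birth_date": "1990-01-01"}
salts = {k: secrets.token_bytes(16) for k in attrs}
mso = {k: digest(salts[k], k, v) for k, v in attrs.items()}

# Holder: share the whole MSO, but only the birth_date pre-image.
disclosed = ("birth_date", salts["birth_date"], attrs["birth_date"])

# Verifier: check the disclosed pre-image against the signed hash.
name_, salt_, value_ = disclosed
assert digest(salt_, name_, value_) == mso[name_]
# The "name" hash is present in `mso` but its value stays hidden.

# The correlation problem: the same MSO bytes are sent every time,
# so two presentations are trivially linkable without a ZKP.
```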
This is what I was going to post. It helped me a lot by first giving a very intuitive understanding of the concept of ZKPs using the Where's Waldo/puffin-among-the-penguins example, but then also going deeper with the graph-coloring example.
Was looking to see if someone posted this video. The first few interviews are excellent - the later ones, not so much (in terms of explaining ZK - they're good chats, of course).
"Thousands of scientists are cutting back from Twitter"
Essentially, a survey published in Nature about Twitter usage among scientists dropping, in part, because of the phenomenon mentioned in this New Yorker article.
Eg:
"Žiga Malek, an environmental scientist at the Free University of Amsterdam, mentioned in the survey that he had started seeing a lot of “strange” political far-right accounts espousing science denialism and racism in his feed. He has to block them constantly. “Twitter has always been not so nice let’s say, but it is a mess right now,” he said."
Scientists are on average a little left-leaning but science denialism, somehow, is something that crops up in both far-left and far-right spaces (just not equally). If you were in a leftist bubble, you’d have been seeing science denialism for a while now. Far-left science denialism starts with the “only use natural products, chemicals are bad for you, vaccines are bad for you” crunchy lifestyle accounts.
Sometimes I think the political spectrum is more of an S1 topology.
Science is funded by government, so researchers come to conclusions that support them. I'm not suggesting any conspiracy, just incentives. Researchers just see which proposals get funded, and it happens those are the ones that support the mainstream narrative that the government would like to push.
So it makes sense that if you are against the mainstream narrative, this would also pit you against some mainstream science.
Disclaimer: I'm not trying to lend support to science denialism (I'm somewhat centrist). Just trying to explain why hard liners would tend to engage in it.
I don’t think “mainstream narrative that the government would like to push” is in any way explicatory. The idea that there is some kind of mainstream narrative doesn’t seem to hold up to me. If you are doing food science you can conceivably get your funding from the FDA, USDA, NIH, or even DOD. Those agencies have their own objectives and are filled with people that have varying viewpoints about what kind of research is important or unimportant, and varying viewpoints about what the mission of their agency is.
I’m not saying that this is an apolitical process. I don’t believe that an apolitical process is possible here, even in theory, and if we imagine a world where an apolitical process were possible, I don’t think such a process would be desirable. I’m also not saying that the process isn’t flawed. I’m just saying that there is not a coherent, unified political force here, and in this environment, there is plenty of room for good research, good science to be done.
I would like to hear if you are basing this opinion on any of your personal experience applying for research grants, or on any informed analysis of how this works, or whether you are just making conjectures about incentives based on the same facts everyone else has.
>If you were in a leftist bubble, you’d have been seeing science denialism for a while now.
That's interesting, if you ask me: "are you in a leftist bubble" - I would answer "probably a little." But I've NEVER seen any science denialism, so I guess I'm not?
What kind of denialism are you talking about wrt leftists?
Antivax views have long been present on the left. Quack medicine (like homeopathy) has a big market on the left. There's also no shortage of self-diagnoses there, especially regarding mental illnesses, and some have taken to adopting extreme COVID paranoia, contrary to actual scientific guidelines.
The default feed is "For You" which absolutely includes people/posts you never followed. Users have to explicitly choose the "Following" feed to only see those posts. And, the setting resets itself constantly. If you aren't diligent, you get the firehose whether you wanted it or not.
While I realize this does nothing for mobile (RIP third-party clients), Control Panel for Twitter [1] has been nice for me to use as a browserscript. Defaults/hides "For You" and tweaks a bunch of other stuff (hideable trends, etc).
> If you lurk on the firehose you get it all. Including all the parts you may disagree with or find despicable or... Perhaps also something nice?
I mean that wasn't the case historically.
Like if your favorite spot in town was a roller rink and it got bought out by a movie theater. You'll lament your loss; sure, nothing illegal happened there, but obviously you might rethink spending your time at that specific location.
Discovery is a feature we look for in social media platforms. Twitter offers discovery in the form of an optional, more open feed. Twitter's discovery feature is worse for these users now than it was before. They now feel less compelled to use the product.
What you're actually describing there is how Mastodon works. Twitter has been "algorithmic" for the longest time now and will show you whatever it thinks you should see.
Twitter hasn't changed for me in years; I just follow some accounts and read their posts. I'm not interested in what any social media app thinks I will like, and I don't have time to waste on that, so I don't use their "discovery" algorithms. It would be like random blogs showing up in my RSS feeds.
This is a case study for an undergrad statistics or responsible journalism class.
* For traffic: see the small note "all values rebased to 100"; they are likely hiding the significance of the increase.
* For the app downloads graph:
  * does the 30x gap say anything?
  * are there seasonal reasons that can explain why every Jan 1–Feb 4 has more DLs than Feb 4–Mar 11? e.g., new phones?
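A toy illustration of the rebasing concern (numbers are entirely made up):

```python
# Indexing both series to 100 erases the absolute scale, so a huge app
# and a tiny app look directly comparable on one chart.
downloads_a = [3_000_000, 3_300_000]   # big app: +10% in absolute millions
downloads_b = [100_000, 150_000]       # small app: +50% from a small base

def rebase(series):
    # Rebase so the first value maps to 100.
    return [round(100 * x / series[0], 1) for x in series]

print(rebase(downloads_a))  # [100.0, 110.0]
print(rebase(downloads_b))  # [100.0, 150.0]
# The 30x gap in absolute downloads is invisible after rebasing.
```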
i certainly don't do clickbait. no links to the original or anything. i thought the content was this observation. i'm trying to use whisper.cpp to get transcripts for the older ones.
When did you answer this question in your post? If you readily admit that you are incapable of answering that, would you say that if you’d submitted this to HN with the title “Goosy Goosy on the 6.5” you would have gotten… fewer clicks?
What substance is there to this post aside from suggesting a podcast?