Foxboron's comments

Nor the 20-odd reimplementations of various filesystem drivers and LUKS encryption in the grub2 tree.

But, who is counting?


I'm tired of grub too. That's one of the packages on my shitlist. Currently it is broken on my system, as it has been in the past from time to time. I'm tired of the unreliability and have decided to write my own bootloader instead. It will be simple and bulletproof.

I already laid the basic foundation and have the kernel loading into memory and booting. Next step is to get the memory map and pass that along. It's BIOS only for the moment; EFI support will come later, along with other architectures. (PowerPC is next.)


> * Secure Boot (vendor-keyed deployments)

I wish this myth would die at this point.

Secure Boot allows you to enroll your own keys. This is part of the spec, and no shipped firmware prevents you from going through this process.


Android lets you put your own signed keys in on certain phones. For now.

The banking apps still won't trust them, though.

To add a quote from Lennart himself:

"The OS configuration and state (i.e. /etc/ and /var/) must be encrypted, and authenticated before they are used. The encryption key should be bound to the TPM device; i.e system data should be locked to a security concept belonging to the system, not the user."

Your system will not belong to you anymore. Just as it is with Android.


Banks do this because they have made their own requirement that the mobile device is a trust root that can authenticate the user. There are better, limited-purpose devices that can do this, but they are not popular/ubiquitous like smartphones, so here we are.

The oppressive part of this scheme is that Google's integrity check only passes for _their_ keys, which form a chain of trust through the TEE/TPM, through the bootloader and finally through the system image. Crucially, the only part banks should care about should just be the TEE and some secure storage, but Google provides an easy attestation scheme only for the entire hardware/software environment and not just the secure hardware bit that already lives in your phone and can't be phished.

It would be freaking cool if someone could turn your TPM into a Yubikey and have it be useful for you and your bank without having to verify the entire system firmware, bootloader and operating system.


Banks do this because they can. If most consumer devices did not support the tech they would not be able to.


Then work with the bank to prove the signer is trustworthy.


> This is part of the spec, and no shipped firmware prevents you from going through this process.

Microsoft required that users be able to enroll their own keys on x86. On ARM, they used to mandate that users could not enroll their own keys. That they later changed this does not erase the past. Also, I've anecdotally heard claims of buggy implementations that do in fact prevent users from changing secure boot settings.


“buggy”


Don't get me wrong, I'm happy to attribute a lot of malice to Microsoft, but in this case I really do believe that it was incompetence. Everything I've ever read about 90%+ of hardware vendors is that shipping hilariously broken firmware is an everyday occurrence for them.

(This is separate from Windows RT, of course)


This reminds me of when I enrolled only my own keys into a Gigabyte AB350 board and soft-bricked it, presumably because some option ROM required the MS keys.

I exchanged it for an ASRock board, and there I can enable Secure Boot without the MS keys and still have it boot, because they actually let you choose what level of signing the option ROM needs when you enable Secure Boot.

What I want to say with this is that it requires the company to actually care in order to provide a good experience.


> Secure Boot allows you to enroll your own keys

UEFI secure boot on PCs, yes for the most part. A lot of mobile platforms just never supported this. It's not a myth.


Phones don't implement UEFI.


Most don't, but they're usually equivalently locked down nevertheless.


UEFI on x86_64 and phones are not comparable when it comes to being "locked down".


Are you sure?

Note that the comment you replied to does not even mention phones. Locked down Secure Boot on UEFI is not uncommon on mobile platforms, such as x86-64 tablets.


What about all those Windows on ARM laptops?


I wish the myth of the spec would die at this point.

Many motherboards' Secure Boot implementations violate the supposed standard and do not allow you to invalidate the pre-loaded keys you don't approve of.


> The TPM has nothing remotely resembling per-user PCRs.

The system could extend one of the PCRs, or an NVPCR, with some unique user credential locked to the user directory. Then you can't recreate the PCR records in any immediate way.

But you can't just recreate a key under one of the hierarchies anyway. You still need to possess the keyfile.
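The extend operation itself is easy to model: a PCR is never overwritten, only folded forward, so mixing a per-user credential into it is irreversible without replaying every prior measurement. A minimal sketch of the semantics using plain hashlib (illustrative only; real TPMs have multiple PCR banks and a far richer command set, and the "credential" strings here are made up):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new_pcr = H(old_pcr || H(measurement))."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

# PCRs start zeroed at boot
pcr = bytes(32)
pcr = pcr_extend(pcr, b"bootloader")
pcr = pcr_extend(pcr, b"kernel")

# Folding in a per-user credential yields a value no other user can recreate
pcr_user_a = pcr_extend(pcr, b"credential-user-a")
pcr_user_b = pcr_extend(pcr, b"credential-user-b")
assert pcr_user_a != pcr_user_b
```

Because the extend chain is a hash chain, reproducing `pcr_user_a` requires both the exact measurement history and the user's credential, which is the property the parent comment is describing.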


> The system could extend one of the PCRs, or an NVPCR, with some unique user credential locked to the user directory. Then you can't recreate the PCR records in any immediate way.

Sure, but can the system context-switch that PCR between two different users?


> Sure, but can the system context-switch that PCR between two different users?

Right, no it can't.

But this was not really something the TPM was supposed to solve.


> but for almost any economically important project all the major contributors and maintainers are on the payroll of one of the big tech interests or a foundation funded by them.

"almost" is the load bearing word here, and/or a weasel word. Define what an "economically important project" is.

> Also just to be clear: node is filled with povertyware and you should be extremely careful what you grab from npm.

Is "povertyware" what we call software written by people and released for free now?


> "almost" is the load bearing word here, and/or a weasel word. Define what an "economically important project" is.

Linux, clang, python, react, blink, v8, openssl... You know what I mean. I stand by what I said. Do you have a counterexample you think is clearly unfunded? They exist[1], but they're rare.

> Is "povertyware" what we call software written by people and released for free now?

It's software subject to economic coercion owing to the lack of means of its maintainership. It's 100% fine for you to write and release software for free, but if a third party bets their own product on it they're subject to an attack where I hand you $7M to look the other way while I borrow your shell.

[1] The xz-utils attack is the flag bearer for this kind of messup, obviously.


Unfunded is kind of a stretch, but at least libxml2.

Essentially everything is "povertyware", as you call it, when you consider the trillion-dollar companies built on top of it? Now that's way easier: SQLite, PostgreSQL, ffmpeg, imagemagick, numpy, pandas, GTK, curl, zlib, libpng, zxing or any other popular qr/barcode library, etc...


> Linux, clang, python, react, blink, v8, openssl... You know what I mean. I stand by what I said. Do you have a counterexample you think is clearly unfunded? They exist[1], but they're rare.

For Linux "all the major contributors and maintainers are on the payroll of one of the big tech interests or a foundation funded by them" is simply not true. It's trivial to prove this by just looking at the maintainers of the subsystems. Making this claim is nonsense to begin with.

Same is true for several major contributors to the Python compiler and subsequent libraries as well.

You will move the goalpost by trying to narrow down what "major contributor" means.

> It's software subject to economic coercion owing to the lack of means of its maintainership. It's 100% fine for you to write and release software for free, but if a third party bets their own product on it they're subject to an attack where I hand you $7M to look the other way while I borrow your shell.

So without knowing anyone you are making a value judgement on the (probable?) lack of ethics? Excuse me?


> You will move the goalpost

I can't move the goalpost if you won't produce a ball. Who exactly are you thinking of that needs a job but doesn't have one?


> Who exactly are you thinking of that needs a job but doesn't have one?

That is not your claim. Your claim is that they "are on the payroll of one of the big tech interests or a foundation funded by them". Which is simply not true.

You can easily find several maintainers of these projects doing this as a part-time hobby project, who have cut a deal at work, or who simply don't work at a place that funds Linux development.

I'm not going to call out individuals whose situations and/or employment histories I happen to know.


So blocking Kiwifarms took months of activism and loud complaining, heralded by Matthew as "this is an extraordinary decision for us to make and, given Cloudflare's role as an Internet infrastructure provider, a dangerous one that we are not comfortable with".

But a fine that amounts to ~0.7% of their annual revenue, and they threaten to block an entire country?


Actually, the fine amounts to over 200% of Italy-sourced revenue ($17 million fine vs. $8 million in revenue in 2024). Why would you continue doing business in Italy?


They are a conglomerate and, in Matthew's words, "an internet infrastructure provider". Why does the local revenue matter when they are serving a global market?

EDIT: And fwiw, "Why would you continue doing business in Italy?" is not what is being proposed. They are threatening to block 55 million people from ~20% of the world wide web.


They're threatening to remove servers from Italy. They're explicitly NOT threatening to block Italians from being able to access sites through Cloudflare.

I have my fair share of problems with CF, but I assume here that they're threatening higher latency (i.e. requests from Italian users would have to go to a neighboring country to be routed) rather than blocking.


Also Italy would see (very slightly) lower GDP because data centers would have less demand from CF.


How freaking expensive do you think infrastructure is? It's not that expensive, and certainly not anywhere close to the point where it would make a noticeable impact on GDP.


Every little bit counts. At Cloudflare's scale it could be the difference between a DC having to close up shop or not.


> EDIT: And fwiw, "Why would you continue doing business in Italy?" is not what is being proposed. They are threatening to block 55 million people from ~20% of the world wide web.

There is no mention of blocking people in Italy from using sites protected by Cloudflare. From the tweet:

> we are considering the following actions: 1) discontinuing the millions of dollars in pro bono cyber security services we are providing the upcoming Milano-Cortina Olympics; 2) discontinuing Cloudflare’s Free cyber security services for any Italy-based users; 3) removing all servers from Italian cities; and 4) terminating all plans to build an Italian Cloudflare office or make any investments in the country.


If they do not want to comply with introducing censorship, then withdrawing from Italy is the only other option. Italian citizens and residents are unfortunately collateral damage.


Because they only violated the "law" in a local market (Italy).


And the correct response to that is to write up a threat towards the entire population of a country?


What else could they do? The government represents the country. If their business model is not welcome there, then they withdraw. It's very fair to say "if you insist on those rules I choose not to play". They owe Italy nothing.

Btw, I recently "threatened" Switzerland that I would withdraw my business from there, because the cost of doing business there (complying with their VAT regulation) is higher than my revenue from there (maybe 1-2 licenses a year). The whole of Switzerland will not be able to buy my software because of that. I didn't think of posting about it on Twitter, though.


> What else could they do? The government represents the country. If their business model is not welcome there, then they withdraw. It's very fair to say "if you insist on those rules I choose not to play".

They can just not threaten the population of Italy? They are a 2 billion dollar company that has apparently scheduled a meeting with the vice president of the US on short notice? This is going to be resolved politically.

> Btw, I recently "threatened" Switzerland that I would withdraw my business from there, because the cost of doing business there (complying with their VAT regulation) is higher than my revenue from there (maybe 1-2 licenses a year). The whole of Switzerland will not be able to buy my software because of that. I didn't think of posting about it on Twitter, though.

You have not given "free services" to 20% of the world wide web that you are now using as leverage.


Politics is not separate from the population, though. Pressure from the population (hopefully) sways political decisions. This is why Google News pulling out of countries was done publicly.


How would they not threaten? Are you willing to donate money for Cloudflare to operate there with such fines?


It absolutely is. Why should people receive a free service while their democratically elected officials enact laws that enable them to target global revenue in their fines?


Not the whole population. Only those using cloudflare to protect their websites?


How much revenue did Kiwifarms bring in?


Yeah that makes sense to me. If you come up to me and say “you have to arrest that guy; he’s stealing from me” I have to do a lot of research to make sure that everything is correct.

On the other hand, if I see you steal from me, I don’t have to do a lot of research. I am a first party to the thing. I can be sure.

It’s the difference between a policeman arriving on the scene of an assault and someone actually assaulting the policeman.

The acting party being the affected party simplifies things because you know you’re not a “confused deputy”.


He isn't threatening to block Italy, just to remove Cloudflare's business from there. Anyone living and surfing from Italy would not be blocked by Cloudflare from accessing any service provided by Cloudflare.


How do you not understand the difference..?


> So blocking Kiwifarms took.. months of activism and loud complaining.

Kiwifarms isn't a pirate site. It's just another site that you think is legitimate to censor.

> However a fine that amounts to ~0.7% of the annual revenue and they threaten to block an entire country?

What's going to be next week's fine? Of course they should block the entire country. Even if they pay the fine (I could imagine there's some way the EU could force that, on pain of forcing them out of Europe), they should block the country.

Shouldn't Italy want lawbreakers to leave?


>activism and loud complaining

I'm not sure why you would want to remind the world about that episode. Those men lied, stalked, harassed, and threatened a lot of people to get that perfectly legal website exposed to very illegal DDoS attacks.


> It's unfortunate that people don't really know about it, but I guess the tools available aren't that user friendly

This is my cue.

https://github.com/Foxboron/ssh-tpm-agent


Thank you for sharing!


https://streaming.media.ccc.de/39c3

All talks will be live-streamed, and right after a talk is done a rough cut is available instantly under "re-live", which you can watch until the final recording is ready: https://streaming.media.ccc.de/39c3/relive

The final recording will appear within a day or two after the talk is held: https://media.ccc.de/c/39c3

EDIT: A different variant of the schedule with better filtering is available here: https://events.ccc.de/congress/2025/hub/en/schedule

I should note that some talks will not be recorded and are only available at the congress. These are clearly marked on the congress hub website, but not easily visible in the Fahrplan view.


I made https://fahrplan.cc where you can filter the [not] recorded sessions, categories, and titles.

I've mostly made it for myself to skip the recorded sessions when on-site and to see what's coming up at the current time of day. It therefore tries to include all the self organized sessions, workshops, meetups, music programs, etc. I've been running it for a few years and people use it for all kinds of use cases, including sitting at home and watching the streams.


I like your tool, but the schedule in the "hub" can now also filter for "recorded":

https://events.ccc.de/congress/2025/hub/de/schedule?mode=lis...


Ah, the filterable schedule would be even better if you could filter on multiple categories at once. I just want security/hardware/science, but as it is I have to constantly switch around, which is worse than looking at the full schedule with the other categories included.


You can have multiple tabs open.


Paged Out are looking for more articles for the next issue. Information here: https://pagedout.institute/?page=cfp.php


Thank you for letting us know of it here—sent them a pitch on the hidden vision math in color contrast fixes (FOSS lib: https://github.com/comfort-mode-toolkit/cm-colors). Fingers crossed! :>


Thanks! I pre-approved it today, so you probably already received a reply from our Editor-in-Chief :)


I have published 3 articles and already sent the 4th. I invite everyone to join the Discord server, as we also discuss article proposals there :-)


I was asked earlier if I was willing to write, but a severe case of imposter syndrome has prevented me from doing so.

Maybe joining the discord is a suitable first step...


A surprisingly common misconception about Paged Out! is that we have some super strict acceptance policy. In reality we almost never reject articles and try to work with authors to improve them if there's any need for that ;)

But if you feel worried, our Discord is open, and you can also just email us with an article topic idea and we'll give you feedback on it.


> See for example the many problems of NIST P-224/P-256/P-384 ECC curves

What are those problems, exactly? The whitepaper from djb only makes vague claims about the NSA being a malicious actor, but after ~20 years no known backdoor or intentional weakness has been reliably proven?


As I understand it, a big issue is that they are really hard to implement correctly. This means that backdoors and weaknesses might not exist in the theoretical algorithm, but still be common in real-world implementations.

On the other hand, Curve25519 is designed from the ground up to be hard to implement incorrectly: there are very few footguns, gotchas, and edge cases. This means that real-world implementations are likely to be correct implementations of the theoretical algorithm.

This means that, even if P-224/P-256/P-384 are on paper exactly as secure as Curve25519, they could still end up being significantly weaker in practice.
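One concrete member of the footgun class being discussed here is point validation: a Weierstrass-curve implementation must check that an incoming public point actually lies on the curve before using it, or an attacker can feed it a point on a weaker related curve. The check itself is one line; the historical bug is simply forgetting it. A sketch using the published NIST P-256 domain parameters (a production check would also validate range and reject the point at infinity):

```python
# NIST P-256 domain parameters (FIPS 186 / SP 800-186)
P = 2**256 - 2**224 + 2**192 + 2**96 - 1
A = P - 3  # a = -3 mod p
B = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B
GX = 0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296
GY = 0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5

def on_curve(x: int, y: int) -> bool:
    """Reject any point not satisfying y^2 = x^3 + ax + b (mod p)."""
    return (y * y - (x * x * x + A * x + B)) % P == 0

assert on_curve(GX, GY)          # the standard base point validates
assert not on_curve(GX, GY + 1)  # a corrupted point is rejected
```

X25519 sidesteps this whole bug class by working x-only on a twist-secure curve, so every 32-byte input can be processed without a validity check, which is the "designed to be hard to implement incorrectly" property the parent comment describes.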


I tried to defend a similar argument in a private forum today and basically got my ass handed to me. In practice, not only would modern P-curve implementations not be "significantly weaker" than Curve25519 (we've had good complete addition formulas for them for a long time, along with widespread hardware support), but Curve25519 causes as many (probably more) problems than it solves --- cofactor problems being more common in modern practice than point validation mistakes.

In TLS, Curve25519 vs. the P-curves are a total non-issue, because TLS isn't generally deployed anymore in ways that even admit point validation vulnerabilities (even if implementations still had them). That bit, I already knew, but I'd assumed ad-hoc non-TLS implementations, by random people who don't know what point validation is, might tip the scales. Turns out guess not.

Again, by way of bona fides: I woke up this morning in your camp, regarding Curve25519. But that won't be the camp I go to bed in.


I agree that Curve25519 and other "safer" algorithms are far from immune to side-channel attacks in their implementation. For example, [1] is a single-trace EM side-channel key-recovery attack against Curve25519 implemented in MbedTLS on an ARM Cortex-M4. This implementation had the benefit of a constant-time Montgomery ladder, something NIST P-curve implementations have traditionally lacked an equivalent of, but it nonetheless failed due to a conditional swap instruction that leaked secret state via EM.
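For readers unfamiliar with the conditional swap mentioned above: it is the classic branchless countermeasure in a Montgomery ladder. Instead of `if bit: swap(a, b)`, a mask derived from the secret bit selects values arithmetically, so control flow and memory access patterns do not depend on the key. A sketch of the idea on 64-bit integers (illustrative only; the cited attack shows even this can leak through EM at the instruction level):

```python
MASK64 = (1 << 64) - 1

def ct_cswap(bit: int, a: int, b: int) -> tuple[int, int]:
    """Swap a and b iff bit == 1, with no secret-dependent branch."""
    mask = (-bit) & MASK64   # bit=0 -> 0x00...00, bit=1 -> 0xFF...FF
    t = (a ^ b) & mask       # either 0 or a^b
    return a ^ t, b ^ t      # xor cancels or applies the swap

assert ct_cswap(0, 5, 9) == (5, 9)
assert ct_cswap(1, 5, 9) == (9, 5)
```

The countermeasure discussed in section VI of [1] goes further than this, precisely because the straight cswap, while branch-free, still executes a data-dependent operand that an EM probe can distinguish.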

The question is generally, could a standard in 2025 build upon decades of research and implementation failures to specify side channel resistant algorithms to address conditional jumps, processor optimisations for math functions, etc which might leak secret state via timing, power or EM signals. See for example section VI of [1] which proposed a new side channel countermeasure that ended up being implemented in MbedTLS to mitigate the conditional swap instruction leak. Could such countermeasures be added to the standard in the first instance, rather than left to implementers to figure out based on their review of IACR papers?

One could argue that standards are simply following interests of standards proposers and organisations who might not care about cryptography implementations on smart cards, TPMs, etc, or side channel attacks between different containers on the same host. Instead, perhaps standards proposers and organisations only care about side channel resistance across remote networks with high noise floors for timing signals, where attacks such as [2] (300ns timing signal) are not considered feasible. If this is the case, I would argue that the standards should still state their security model more clearly, for example:

* Is the standard assuming the implementation has a noise floor of 300ns for timing signals, 1ms, etc? Are there any particular cryptographic primitives that implementers must use to avoid particular types of side channel attack (particularly timing)?

* Implementation fingerprinting resistance/avoidance: how many choices can an implementation make that may allow a cryptosystem party to be deanonymised by the specific version of a crypto library in use?[3] Does the standard provide any guarantee for fingerprinting resistance/avoidance?

[1] Template Attacks against ECC: practical implementation against Curve25519, https://cea.hal.science/cea-03157323/document

[2] CVE-2024-13176 openssl Timing side-channel in ECDSA signature computation, https://openssl-library.org/news/vulnerabilities/index.html#...

[3] Table 2, pyecsca: Reverse engineering black-box elliptic-curve cryptography via side-channel analysis, https://tches.iacr.org/index.php/TCHES/article/view/11796/11...


> As I understand it, a big issue is that they are really hard to implement correctly.

Any reference for the "really hard" part? That is a very interesting subject and I can't imagine it's independent of the environment and development stack being used.

I'd welcome any standard that's "really hard to implement correctly" as a testbed for improving our compilers and other tools.


I posted above, but most of the 'really hard' bits come from the unreasonable complexity of actual computing vs the more manageable complexity of computing-with-idealized-software.

That is, an algorithm, compiler, and tool-safety smoke test, and improvement thereby, is good. But you also need to think hard about what happens when, say, someone induces an RF pulse at specific timings targeted at a certain part of a circuit board while you're trying to harden these algorithmic implementations. Lots of things that compiler architects typically say are "not my problem".


It would be wise for people to remember that it's worth doing basic sanity checks before claiming there are no backdoors from the NSA. Strong encryption has historically been restricted, so we had things like DES, 3DES, and Crypto AG. In the modern internet age, Juniper had a bad time with this one: https://www.wired.com/2013/09/nsa-backdoor/.

Usually it’s really hard to distinguish intent, and so it’s possible to develop plausible deniability with committees. Their track record isn’t perfect.

With WPA3, cryptographers warned about the known pitfall of standardizing a timing-sensitive PAKE, and Harkins got it through anyway. Since it was a standard, the WiFi committee gladly selected it, which then resulted in dragonbleed among other bugs. The techniques for hash2curve have since patched that.


It's "Dragonblood", not "Dragonbleed". I don't like Harkin's PAKE either, but I'm not sure what fundamental attribute of it enables the downgrade attack you're talking about.

When you're talking about the P-curves, I'm curious how you get your "sanity check" argument past things like the Koblitz/Menezes "Riddle Wrapped In An Enigma" paper. What part of their arguments did you not find persuasive?


Yes, Dragonblood. I'm not speaking of the downgrade but of the timing side channels, which were called out very loudly and then ignored during standardization. And then the PAKE showed up in WPA3 of all places; that was the key issue, and it was extended further in a Brainpool-curve-specific attack against the proposed initial mitigation. It's a good example of error by committee. I don't address that article and don't know why the NSA advised migration that early.

The Riddle paper I've not read in a long time, if ever, though I don't understand the question. As Scott Aaronson recently blogged, it's difficult to predict human progress with technology, and it's possible we'll see Shor's algorithm running publicly sooner than consensus expects. It could be that in 2035 the NSA's call 20 years prior looks like the right one, in that ECC is insecure, but that wouldn't make the replacements secure by default, ofc.


Aren't the timing attacks you're talking about specific to oddball parameters for the handshake? If you're doing Dragonfly with Brainpool curves you're specifically not doing what NSA wants you to do. Brainpool curves are literally a rejection of NIST's curves.

If you haven't read the Enigma paper, you should do so before confidently stating that nobody's done "sanity checks" on the P-curves. Its authors are approximately as authoritative on the subject as Aaronson is on his. I am specifically not talking about the question of NSA's recommendation on ECC vs. PQ; I'm talking about the integrity of the P-curve selection, in particular. You need to read the paper to see the argument I'm making; it's not in the abstract.


Ah, now I see what the question was, as it seemed like a non sequitur. I misunderstood the comment by foxboron to be about concerns over any backdoors, not that P-256 specifically is backdoored. I hold no such view of that; surely Bitcoin is good evidence.

Instead I was stating that weaknesses in cryptography have been historically put there with some NSA involvement at times.

For DB: the Brainpool curves do have a worse leak, but as stated in the Dragonblood paper, "we believe that these sidechannels are inherent to Dragonfly". The first attack submission did hit P-256 setups before the minimal iteration count was increased, and afterward it was more applicable to same-system cache/microarchitectural bugs. These attacks were more generally mitigated when the H2C deterministic algorithms rolled out. Many bad choices were made, of course, that make the PAKE more exploitable: putting the client MAC in the pre-commits, having that downgrade, including the Brainpool curves. But to my point on committees: cryptographers warned strongly during standardization that this could be an attack, and no course correction was taken.


Can I ask you to respond to the "sanity check" argument you made upthread? What is the "sanity checking" you're implying wasn't done on the P-curves?


I wasn't talking about the P-curves; I was talking about the NSA having acted as a malicious actor in general, so I misunderstood their comment.


The NSA changed the S-boxes in DES, which made people suspicious that they had planted a back door. But when differential cryptanalysis was discovered, people realized that the NSA's changes to the S-boxes made them more secure against it.


That was 50 years ago. And since then we have an NSA employee co-authoring the paper which led to Heartbleed, the backdoor in Dual EC DRBG which has been successfully exploited by adversaries, and documentation from Snowden which confirms NSA compromise of standards setting committees.


> And since then we have an NSA employee co-authoring the paper which led to Heartbleed

I'm confused as to what "the paper which led to Heartbleed" means. A paper proposing/describing the heartbeat extension? A paper proposing its implementation in OpenSSL? A paper describing the bug/exploit? Something else?

And in addition to that, is there any connection between that author and the people who actually wrote the relevant (buggy) OpenSSL code? If the people who wrote the bug were entirely unrelated to the people authoring the paper then it's not clear to me why any blame should be placed on the paper authors.


> I'm confused

The original paper which proposed the OpenSSL Heartbeat extension was written by two people, one worked for NSA and one was a student at the time who went on to work for BND, the "German NSA". The paper authors also wrote the extension.

I know this because when it happened, I wanted to know who was responsible for making me patch all my servers, so I dug through the OpenSSL patch stream to find the authors.


What does that paper say about implementing the TLS Heartbeat extension with a trivial uninitialized buffer bug?


About as much as Jia Tan said about implementing the XZ backdoor via an inconspicuous typo in a CMake file. What's your point?


I'm asking what the paper has to do with the vulnerability. Can you answer that? Right now your claim basically comes down to "writing about CMake is evidence you backdoored CMake".


> Right now your claim basically comes down to "writing about CMake is evidence you backdoored CMake".

This statement makes it clear to me that you don't understand a thing I've said, and that you don't have the necessary background knowledge of Heartbleed, the XZ backdoor, or concepts such a plausible deniability to engage in useful conversation about any of them. Else you would not be so confused.

Please do some reading on all three. And if you want to have a conversation afterwards, feel free to make a comment which demonstrates a deeper understanding of the issues at hand.


Sorry, you're not going to be able to bluster your way through this. What part of the paper you're describing instructed implementers of the TLS Heartbeat extension to copy data into an uninitialized buffer and then transmit it on the wire?
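For context on what is being argued about: the Heartbleed bug class itself is tiny. The handler trusted the attacker-supplied length field and echoed back that many bytes, regardless of how much payload actually arrived. A hedged Python model of the flaw and the fix (the function and variable names here are illustrative, not the actual OpenSSL code):

```python
def heartbeat_buggy(payload: bytes, claimed_len: int, memory: bytes) -> bytes:
    # Heartbleed pattern: trust the claimed length and read past the
    # payload into whatever happens to sit adjacent in memory.
    return (payload + memory)[:claimed_len]

def heartbeat_fixed(payload: bytes, claimed_len: int, memory: bytes) -> bytes:
    # The fix: silently discard messages whose claimed length
    # exceeds the payload that actually arrived.
    if claimed_len > len(payload):
        raise ValueError("heartbeat length exceeds actual payload")
    return payload[:claimed_len]

secret_heap = b"-----PRIVATE KEY-----"
leaked = heartbeat_buggy(b"ping", 16, secret_heap)
assert b"PRIVATE" in leaked  # adjacent memory leaks in the buggy version
```

The dispute above is about intent, not mechanics; the mechanics are just a missing bounds check on one length field.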


> What part of the paper you're describing instructed implementers of the TLS Heartbeat extension to copy data into an uninitialized buffer and then transmit it on the wire?

That's a very easy question to answer: the implementation the authors provided alongside it.

If you expect authors of exploits to clearly explain them to you, you are not just ignorant of the details of backdoors like the one in XZ (CMake was never backdoored, a "typo" in a CMake file bootstrapped the exploit in XZ builds), but are naive to an implausible degree about the activities of exploit authors.

Even the University of Minnesota did not publicly state "we're going to backdoor the Linux kernel" before they attempted to do so: https://cyberir.mit.edu/site/how-university-got-itself-banne...

If you tell someone you're going to build an exploit and how, the obvious response will be "no, we won't allow you to." So no exploit author does that.


Which "paper" are you referring to?


Think the above poster is full of bologna? It's less painful for everyone involved, and the readers, to just say that and get that out of the way rather than trying to surgically draw it out over half a dozen comments. I see you do this often enough that I think you must get some pleasure out of making people squirm. We know you're smart already!


I think their argument is verkakte but I literally don't know what they're talking about or who the NSA stooge they're referring to is, and it's not so much that I want to make them squirm so much as that I want to draw the full argument out.

I think your complaint isn't with me, but with people who hedge when confronted with direct questions. I think if you look at the thread, you'll see I wasn't exactly playing cards close to my chest.


I don't make a habit of googling things for people when they could do it just as quickly themselves. There is only one paper proposing the OpenSSL heartbeat feature. So I have not been unclear, nor can there be any confusion about which it is. Perhaps we'll learn someday what tptacek expects to find or not to find in it, but he'll have to spend 30 seconds with Google. As I did.

Informing oneself is a pretty low bar for having a productive conversation. When one party can't be arsed to take the initiative to do so, that usually signals the end of useful interaction.

A comment like "I googled and found this paper... it says X... that means Y to me." would feel much less like someone just looking for an argument, because it involves effort and stating a position.

If he has a point, he's free to make it. Everything he needs is at his fingertips, and there's nothing I could do to stop him, nor would I want to. I asked for a point first thing. All I've gotten in response is combative rhetoric which is neither interesting nor informative.


Your argument that Heartbleed was intentional is very weak.


Means, motive, and opportunity. Seems to check all the boxes.

There's no conclusive evidence that it wasn't purposeful. And plenty of evidence of past plausibly deniable attempts. So you can believe whatever lets you sleep better at night.


Ah, that clears up the confusion. Thank you for taking the time to explain!


What's the original paper? The earliest thing I can find is an RFC.


I'm pretty sure he meant the RFC. (Insert "The German Three" meme).


The NSA also wanted a 48-bit implementation, which was sufficiently weak to brute-force with their compute power. The industry and IBM initially wanted 64-bit. IBM compromised and gave us 56-bit.


Yes, NSA made DES stronger. After first making it weaker. IBM had wanted a 128-bit key, then they decided to knock that down to 64-bit (probably for reasons related to cost, this being the 70s), and NSA brought that down to 56-bit because hey! we need parity bits (we didn't).
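For concreteness, the parity-bit arithmetic works out like this (a sketch, not a DES implementation):

```python
# DES takes a 64-bit key, but the low bit of each of the 8 key bytes
# is an odd-parity bit, so only 56 bits are actual key material.
EFFECTIVE_BITS = 64 - 8  # one parity bit per byte -> 56-bit keyspace

def set_odd_parity(key_byte):
    # Keep the 7 key bits, then set the parity bit (LSB) so the
    # byte has an odd number of 1 bits, as DES requires.
    data = key_byte & 0xFE
    parity = 0 if bin(data).count("1") % 2 == 1 else 1
    return data | parity
```

The parity bits contribute nothing to security: an attacker brute-forcing the key only has to search the 2^56 possible key-material bits.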


They're vulnerable to "High-S" malleable signatures, while ed25519 isn't. No one is claiming they're backdoored (well, some people somewhere probably are), but they do have failure modes that ed25519 doesn't, which is the GP's point.
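A minimal sketch of what "High-S" malleability means (using secp256k1's group order n for illustration; the NIST P-curves have the same property with their own orders):

```python
# For any valid ECDSA signature (r, s), (r, n - s) also verifies
# against the same message and public key, so anything that hashes or
# compares raw signatures can be tricked by a re-encoded signature.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def malleate(r, s):
    # The "other" equally valid encoding of the same signature.
    return r, N - s

def normalize_low_s(r, s):
    # The usual mitigation (e.g. Bitcoin's low-S rule): canonicalize
    # to whichever of s and n - s is smaller before accepting it.
    return r, min(s, N - s)
```

Ed25519 avoids this particular issue because standard verification (per RFC 8032) rejects any s that is not already fully reduced.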


in the NIST Curve arena, I think DJB's main concern is engineering implementation - from an online slide deck he published:

  We’re writing a document “Security dangers of the NIST curves”
  Focus on the prime-field NIST curves
  DLP news relevant to these curves? No
  DLP on these curves seems really hard
  So what’s the problem?
  Answer: If you implement the NIST curves, chances are you’re doing it wrong
  Your code produces incorrect results for some rare curve points
  Your code leaks secret data when the input isn’t a curve point
  Your code leaks secret data through branch timing
  Your code leaks secret data through cache timing
  Even more trouble in smart cards: power, EM, etc.
  Theoretically possible to do it right, but very hard
  Can anyone show us software for the NIST curves done right?
As to whether or not the NSA is a strategic adversary to some people using ECC curves, I think that's right in the mandate of the org, no? If a current standard is super hard to implement, and theoretically strong at the same time, that has to make someone happy on a red team. At least, it would make me happy, if I were on such a red team.
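As a concrete example of the first two bullets, a point-validation check like the following is exactly the kind of thing implementations historically forgot (a sketch using the published P-256 domain parameters, not production code):

```python
# P-256 parameters: y^2 = x^3 + ax + b over GF(p), with a = p - 3.
P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
A = P - 3
B = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

def on_curve(x, y):
    # Reject inputs that are not actual curve points before doing any
    # secret-dependent arithmetic with them ("invalid curve" attacks
    # feed points on a weaker curve to leak key bits).
    if not (0 <= x < P and 0 <= y < P):
        return False
    return (y * y - (x * x * x + A * x + B)) % P == 0
```

Curve25519-style designs sidestep several of these bullet points by construction: every 32-byte string is accepted as input and the Montgomery-ladder scalar multiplication is naturally constant-time.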


He does a motte-and-bailey thing with the P-curves. I don't know if it's intentional or not.

Curve25519 was a materially important engineering advance over the state of the art in P-curve implementations when it was introduced. There was a window of time within which Curve25519 foreclosed on Internet-exploitable vulnerabilities (and probably a somewhat longer period of time where it foreclosed on some embedded vulnerabilities). That window of time has pretty much closed now, but it was real at the time.

But he also does a handwavy thing about how the P-curves could have been backdoored. No practicing cryptography engineer I'm aware of takes these arguments seriously, and to buy them you have to take Bernstein's side over people like Neil Koblitz.

The P-curve backdoor argument is unserious, but the P-curve implementation stuff has enough of a solid kernel to it that he can keep both arguments alive.


Quite true, but the Dual_EC backdoor claim is serious. DJB's point that we should design curves with "nothing up my sleeve" is a nice touch.


See, this gets you into trouble, because Bernstein has actually a pretty batshit take on nothing-up-my-sleeve constructions (see the B4D455 paper) --- and that argument also hurts his position on Kyber, which does NUMS stuff!


Link?



There’s also a more approachable set of slides on the topic at https://cr.yp.to/talks/2025.11.14/slides-djb-20251114-safecu...


What do you think of those slides?


I didn’t see anything “batshit” in either the paper or the slides.


Say more. What do you think of his argument? I paraphrased it downthread. Do you think I did so accurately? If not: what did I get wrong?


At least in terms of the Bada55 paper, I think he writes in a fairly jocular style that sounds unprofessional unless you read his citations as well. You seem to object to his occasional jocularity and take it as prima facie evidence of him being “batshit”. Given that you are well known for a jocular writing style, perhaps you should extend some grace.

The slides seem like a pretty nice summary of the 2015-era SafeCurves work, which you acknowledge elsewhere on this site (this thread? They all blend together) was based on good engineering.


No, what I'm saying has only to do with the substance of his claims, which I now think you don't understand, because I laid them out straightforwardly (I might have been wrong, but I definitely wasn't making a tone argument) and you came back with this. People actually do work in this field. You can't just bluster your way through it.

This is a "challenge" with discussing Bernstein claims on Hacker News and places like it --- the threads are full of people who know two cryptographers in the whole world (Bernstein and Schneier) and axiomatically derive their claims from "whatever those two said is probably true". It's the same way you get these inane claims that Kyber was backdoored by the NSA --- by looking at the list of authors on Kyber and not recognizing a single one of them.

What do you think about Bernstein's arguments for SNTRUP being safe while Kyber isn't? Super curious. I barely follow. Maybe you've got a better grip on the controversy.


I’m not sure why you’re hung up on SNTRUP, since DJB didn’t submit it past round 2 of NISTPQC. In round 3, DJB put his full weight behind Classic McEliece.

You’ve previously argued that “cryptosystems based on ring-LWE hardness have been worked on by giants in the field since the mid-1990s” and suggested this is a point in Kyber’s favor. Well, news flash, McEliece has been worked on by giants in the field for 45 years. It shows up in NSA’s declassified internal history book, though their insights into the cryptosystem are still classified to this day.


How long do you think people have been working on lattice cryptography?


Lattices themselves have been analyzed since the days of Gauss. Lattice cryptography is only a couple decades old (in the unclassified literature).

The first proposed lattice-based cryptosystem was completely broken within 2 years of its announcement, which is a lovely harbinger of Kyber’s fate.


That's a funny claim given NTRU goes back to 1996 and was a PQC finalist. I barely know what I'm talking about here and even I think you're bluffing your way through this. At this point you're making arguments Bernstein would presumably himself reject!


Since you've been very strident throughout this thread I'm wondering if you're going to have a response to this. Similarly, I'm curious, as a scholar of Bernstein's cryptography writing --- did the MOV attack (prominently featured on Safecurves) serve as a lovely harbinger of the failure of elliptic curve cryptography?


I tried a couple searches and I forget which calculator-speak version of "BADASS" Bernstein actually used, but the concept of the paper† is that all the NUMS-style curves are suspect because you can make combinations of mathematical constants say whatever you want them to say (in combination), and so instead you should pick curve constants based purely on engineering excellence, which nobody could ever disagree about or (looks around the room) start huge conspiracy theories over.

† as I remember it


Well, DJB also focused on "nothing up my sleeve" design methodology for curves. The implication was that any curves that were not designed in such a way might have something nefarious going on.


Dual_EC's backdoor can't be proven, but it's almost certainly real.


> This is why djb is in the Cypherpunks Hall of Fame! [1]

This is a list made by you 2 weeks ago?

EDIT: Okay lol. I actually browsed the list and found multiple dubious entries, along with Trump!

Hilarious list. 10/10.


what do you expect, when the tagline at the end of the page says "In crypto we trust."?

Honestly, it's a bit sad. There are many great people on that list, but some seem a bit random and some are just straight up cryptobros, which makes the whole thing a joke, unfortunately

