For a long time I wondered why there was such a big push for PQ even though there was no quantum computer and a reasonably working one always seemed to be 15 years in the future.
… or was there a quantum computer somewhere and it was just kept hush hush, hence the push for PQ?
The answer turns out to be: it doesn’t matter if there is a quantum computer! The set of PQ algorithms has many other beneficial properties besides quantum resistance.
The point is that a lot of secrets need to remain secret for many years. If some government found a way to break elliptic curves in the same way the number field sieve weakened RSA (hence we now need 2048-bit keys rather than the much shorter keys we were using in the 90s), we’d be fucked for many years to come as all of those secrets leak.
So there may not be quantum computers now. But if there are going to be any in 20 years, we need our crypto to be resilient now.
I’m a physicist working on QC. I know we actually don’t know if a “secret” QC exists somewhere, but given that major theoretical and engineering breakthroughs are needed to build a fault-tolerant one (and all QC companies are facing this regardless of whether their qubits are optical, superconducting, trapped ions, etc.), I’d put that possibility at near zero. Consider also the talent and expertise that is needed for such an endeavour…
Very nice! I hope you folks will go far. Best of luck :)
That said, the entire field is still so far from a seriously useful QC that I still wouldn’t bet there’s a secret one somewhere in some government lab. Those are my two cents, and I may be wrong of course.
I’m not claiming there is. There might be, but I find it unlikely. If the NSA had developed practical QC systems, a lot of public QC research would suddenly have gone quiet. That hasn’t happened.
There is a viable pathway to low-error-rate, scalable quantum computers on a less-than-10-year time horizon, though.
There is a long history of this technology, and the comparison to cold fusion is unwarranted. This is peer-reviewed, accepted science. The basic technique was worked out under a DoE study in Texas, with an Australian collaborator. She (Dr. Michelle Simmons, who is widely respected in this field) then went out and raised money to scale up.
The basic idea is that they use scanning probe microscopes to create structures on a silicon surface with atomic precision, which can then be manipulated by the surrounding chip as solid-state qubits. You still need error correction, but the overhead ends up being a small constant factor rather than a combinatorial blowup.
Full disclosure: I’m fundraising a startup to pursue a different manufacturing process that would enable the same type of quantum computer, but with nitrogen vacancies in diamond instead of silicon (and therefore still higher reliability).
One way or the other, highly reliable quantum computers are right around the corner and are going to take a lot of people by surprise.
This is also something that people outside academia apparently don't understand. Peer review doesn't tell you whether the science is valid; it only checks that the methodology looks sound. The original Pons & Fleischmann paper passed peer review and was published in the Journal of Electroanalytical Chemistry. It only got retracted after other people tried and failed to reproduce it. If you want to know whether science is legit or not, look for reproduction by independent groups, not peer review.
Indeed. Peer review is table stakes for the conversation, not an acceptance criterion for "true". Plenty of things get published that are generally regarded as wrong by those who work in the field.
There's journal peer review, and then there's scientific community peer review which involves acceptance of ideas and replication. They're not the same thing and unfortunately not often distinguished in speech or writing ("peer review" describes both). I thought that on HN it would be clear I was talking about the latter.
In this case, three separate labs have replicated this work. It's solid.
Peer review in fundamental science is almost universally understood as part of the process of publishing said science. The other kinds you are referring to (there are actually more than one) are more common in other fields. Passing peer review in physics is very far from general acceptance.
Maybe, but that’s a very recent redefinition of terms. Peer review only became a standardized mechanism in the 70s to 90s, depending on the field. Until very close to the present, saying “passing peer review” meant something akin to the Popperian notion of “ideas that survive attempts at falsification by the broader community of scientific peers.” In all my interaction with academia pre-pandemic, it meant precisely this: something wasn’t peer reviewed because it was published (surviving the editorial process), but because it was published and cited by later works without credible refutations emerging.
> California-based startup PsiQuantum was given an “inside run” to a controversial $1 billion investment by Australian taxpayers as the only company that government engaged with in a thorough due diligence process.
According to the linked post there are PQ algorithms that will fit this niche:
> This variety of different trade-offs gives developers a lot of flexibility. For an embedded device where speed and bandwidth are important but ROM space is cheap, McEliece might be a great option for key establishment. For server farms where processor time is cheap but saving a few bytes of network activity on each connection can add up to real savings, NTRUSign might be a good option for signatures. Some algorithms even provide multiple parameter sets to address different needs: SPHINCS+ includes parameter sets for “fast” signatures and “small” signatures at the same security level.
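For a feel of what that flexibility looks like in code, here's a minimal sketch using the liboqs Python bindings (this assumes the `oqs` package is installed, and the exact mechanism names depend on how your liboqs build was configured):

    import oqs  # liboqs-python bindings

    # List whichever SPHINCS+ parameter sets this build exposes; the "f" sets
    # trade signature size for signing speed, the "s" sets do the reverse.
    sphincs_variants = [m for m in oqs.get_enabled_sig_mechanisms() if "SPHINCS" in m]
    print(sphincs_variants)

    message = b"attestation blob"
    alg = sphincs_variants[0]  # pick any enabled variant
    with oqs.Signature(alg) as signer, oqs.Signature(alg) as verifier:
        public_key = signer.generate_keypair()
        signature = signer.sign(message)
        print(alg, "-> signature of", len(signature), "bytes")
        assert verifier.verify(message, signature, public_key)

Swapping the chosen mechanism string is the whole "tuning" story here, which is the flexibility the article is pointing at.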
Embedded/IoT is typically slow and small, which is not a space PQ fits into.
I also think the article is overly optimistic in claiming that ECC is “hard” because of the need for careful curve selection (even though we have very good established curves), and I find it hard to believe that PQ algorithms are immune to parameter-selection problems and implementation challenges.
There has been research on the intersection of IoT and PQ signatures specifically at least, e.g. see "Short hash-based signatures for wireless sensor networks" [0] [1]. Unlike SPHINCS+ which is mentioned in the article, if you're happy to keep some state around to remember the last used signature (i.e. you're not concerned about accidental re-use) then the scheme can potentially be _much_ simpler.
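To give a flavour of how simple the stateful building blocks are, here's a toy Lamport one-time signature (just the textbook construction, not the scheme from the paper): each key pair can sign exactly one message, which is exactly why you end up having to track state.

    import hashlib, secrets

    H = lambda data: hashlib.sha256(data).digest()

    def keygen():
        # 256 pairs of random preimages; the public key is their hashes
        sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
        pk = [(H(a), H(b)) for a, b in sk]
        return sk, pk

    def sign(sk, msg):
        # reveal one preimage per bit of the message digest
        digest = H(msg)
        bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
        return [sk[i][bit] for i, bit in enumerate(bits)]

    def verify(pk, msg, sig):
        digest = H(msg)
        bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
        return all(H(sig[i]) == pk[i][bits[i]] for i in range(256))

    sk, pk = keygen()
    sig = sign(sk, b"sensor reading #42")
    assert verify(pk, b"sensor reading #42", sig)

A signature here is about 8 KB and the key must never be reused; practical schemes chain many such one-time keys together, which is where the bookkeeping comes from.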
The state is enormous. Dedicating megabytes and megabytes to key state is painful. And so is tracking state across components and through distribution channels. If you’re not afraid of that then just use symmetric crypto and be done with it.
To be clear my comment is specifically only relating to signature schemes, not encryption.
> The state is enormous
The scheme I linked to points towards efficient "pebbling" and "hash chain traversal" algorithms which minimize the local state required in quite a fascinating way (e.g. see https://www.win.tue.nl/~berry/pebbling/).
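As a toy illustration of the state/recomputation trade-off (not the pebbling algorithm itself), here's a bare hash chain where the verifier stores only the tip and the signer reveals earlier links one at a time:

    import hashlib

    H = lambda data: hashlib.sha256(data).digest()

    def make_chain(seed, n):
        # x_0 = seed, x_i = H(x_{i-1}); the tip x_n is the public commitment
        chain = [seed]
        for _ in range(n):
            chain.append(H(chain[-1]))
        return chain

    seed, n = b"\x00" * 32, 1000
    chain = make_chain(seed, n)        # naive signer: O(n) state
    tip = chain[-1]                    # verifier stores only this

    for link in reversed(chain[:-1]):  # reveal links from the tip backwards
        assert H(link) == tip          # one hash to verify each reveal
        tip = link                     # verifier's stored state stays constant

    # Pebbling/traversal algorithms let the signer keep only O(log n) points
    # of the chain and recompute the rest on demand, instead of storing it all.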
> tracking state across components and through distribution channels
Assuming you have reliable ordering in those channels I don't see how the stateful nature of such schemes makes it hugely more complex than the essential hard problem of key distribution.
Also, we are talking about mitigating a large, tangible downside risk of a sudden breakthrough in the space: all the secrets stop being secret. “Reasonable” timeline estimates for how far away we are matter for things like if/how much we invest in the tech, but optimistic timelines become the pessimistic ones when defending against downsides, and we should be pessimistic when preparing regulations and mitigations.
> … or was there a quantum computer somewhere and it was just kept hush hush, hence the push for PQ?
If there were a quantum computer somewhere, or close to one, it would be reasonably likely for it to be secret.
I look at the history of crypto in the mid-to-late 20th century, for example. Small groups among the Allies, the NSA, etc. certainly had more knowledge than was public, by a wide margin of years to decades.
That's not quite correct. The first (public) brute-forcing of DES was done in 1997 by the DESCHALL project, distributing the search across tens of thousands of volunteers' computers for weeks [1]. The EFF then spent $250,000 to build a dedicated DES cracker (“Deep Crack”), which required an average of four days per key found [2].