I find the panic over the potential threat of quantum computing quite amusing when the machines are still extremely theoretical - all existing machines are slower than classical ones and it’s not even clear they can scale to the required number of qubits.
There’s nowhere near the same urgency about global warming, and significantly more denial. It’s a bit apples and oranges, but climate models have a better track record, have proven wildly conservative (ie our present is much worse than what past climate models predicted), and global warming is a real problem we know exists and is a bullet train headed our way.
I think this may be your technology bubble showing.
Global warming is talked about on the radio, TV, movies. It's sung about in songs. There are several conferences, widely-attended protests, stickers on appliances, tax initiatives, etc.
A few comments on obscure sites like HN can hardly be called a panic. It is silly to suggest that there is more urgency about post-quantum attacks on crypto than global warming.
I totally agree that climate change is a far more serious problem than quantum computing, but we do actually spend quite a bit on climate change, if not nearly as much as the problem warrants.
Measures to combat climate change are weird, and also politicised in weird ways.
Eg the US administration likes to pretend that climate change is a top priority, but then complains when Chinese taxpayers generously subsidise solar cells and electric cars for American consumers.
The environmental controls you are talking about mostly protect the local environment. Protecting the local environment is great, but crudely speaking, the global climate doesn't much care about whether you dump some toxic waste in some part of China.
What I hear in the Americans' complaints about these exports is that they are worried about good old jobs for good old American boys and girls (preferably in a union that is a strong voting bloc one way or another). So I would take the Americans at their word here.
Yet again, if the local environment in China were a priority for the administration that outweighed concern about climate change, you would see a very different policy instead of just tariffs and other restrictions on imports of some Chinese goods. You can probably come up with your own examples.
When global warming hits, the people who benefited from ignoring it will be best off. When quantum computers hit, the people benefiting the most right now will be the worst off as all their communications from this era are decrypted.
Should be able to do it by having the scanner take multiple samples. As long as you don’t need a valid login and the performance issue is still observable, you should be able to scan for it with minimal cost.
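Very roughly something like this, assuming the slowdown really were observable per connection attempt (host, port, and sample count below are placeholders):

    // Rough sketch: time how long the target sshd takes to send its banner,
    // averaged over several attempts, and compare against a known-good host.
    // Assumes the slowdown shows up per connection, which (per the reply
    // below) it apparently doesn't.
    use std::io::Read;
    use std::net::TcpStream;
    use std::time::{Duration, Instant};

    fn main() -> std::io::Result<()> {
        let samples: u32 = 20; // multiple samples to average out network jitter
        let mut total = Duration::ZERO;
        for _ in 0..samples {
            let start = Instant::now();
            let mut stream = TcpStream::connect("203.0.113.1:22")?; // placeholder host
            let mut banner = [0u8; 64];
            let _ = stream.read(&mut banner)?; // wait for the "SSH-2.0-..." greeting
            total += start.elapsed();
        }
        println!("mean time to banner: {:?}", total / samples);
        Ok(())
    }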
Looks like the slowdown is actually at sshd process startup time, not authentication time. So it's back to being completely impossible to network-probe for.
It’s worse than that. build.rs is in no way sandboxed, which means you can inject all sorts of badness into downstream dependencies, not to mention do things like steal crypto keys from developers. It’s really a sore spot for the Rust community (to be fair they’re not uniquely worse, but that’s a pretty poor standard to shoot for).
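For anyone who hasn’t looked at what build.rs can do: it’s an arbitrary Rust program that cargo compiles and runs on your machine at build time, with your privileges and no sandbox. A minimal sketch (the GITHUB_TOKEN read is just an illustration, not an actual exploit):

    // build.rs - runs at `cargo build` time with the full privileges of the
    // invoking user; nothing here is sandboxed or reviewed by cargo.
    use std::process::Command;

    fn main() {
        // Arbitrary commands execute silently as part of the build
        // (a harmless `ls` stands in for anything worse).
        let _ = Command::new("ls").arg("-la").output();

        // Environment variables (API tokens, CI secrets) are freely readable
        // and could just as easily be shipped off over the network.
        if let Ok(token) = std::env::var("GITHUB_TOKEN") {
            eprintln!("build.rs can see a {}-byte secret", token.len());
        }

        // It can also emit or rewrite generated sources that downstream crates
        // end up compiling, which is how badness propagates to dependents.
        println!("cargo:rerun-if-changed=build.rs");
    }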
That’s not an effective idea, for the same reason that lines of code is not a good measure of productivity. It’s an easy measure to automate but it’s purely performative, as it doesn’t score the qualitative value of any of the maintenance work. At best it encourages you to use only popular projects, which is its own danger (a software monoculture is cheaper to attack) without actually resolving the danger - this attack is sophisticated and underhanded enough that it could be slipped through almost any code review.
One real issue is that xz’s build system is so complicated that it’s possible to slip things in, which is an indication that the traditional autoconf Linux build mechanism needs to be retired and banned from distros.
But even that’s not enough, because an attack only needs to succeed once. The advice to minimize your dependencies is impractical in a lot of cases (clearly) and not fully in your control, as you may acquire a surprising dependency transitively. And updating your dependencies is a best practice which in this case is exactly what introduced the problem.
We need to focus on real ways to improve the supply chain, eg having reproducible, idempotent builds with signed chains of trust that are backed by real identities that can be prosecuted and burned. For example, it would be a pretty effective counter-incentive for this kind of talent if we could permanently ban this person from ever working on lots of projects. That’s typically how humans deal with members of a community who misbehave, and we don’t have a good digital equivalent for software development. Of course that’s also dangerous, as blackball environments tend to become weaponized.
> We need to focus on real ways to improve the supply chain, eg having reproducible, idempotent builds with signed chains of trust that are backed by real identities that can be prosecuted and burned.
So, either no open source development, because nobody will vouch to that degree for others, or absolutely no anonymity, and you'll have to worry about anything you provide: if you screw up and introduce an RCE, all of a sudden you'll have a bunch of people and companies looking to say it was on purpose so they don't have to own up to any of their own poor practices that allowed it to actually be exploited?
You don’t need vouching for anyone. mDL is going to be a mechanism to have a government authority vouch for your identity. Of course a state actor like this can forge the identity, but that forgery at least gives a starting point for the investigation to try to figure out who this individual is. There are other technical questions about how you verify that the identity really is tied in some real way to the user at the other end (eg not a stolen identity), but there are things coming down the pipeline that will help with that (ie authenticated chains of trust for hw that can attest the identity was signed on the given key in person, and you require that attestation).
As for people accusing you of an intentional RCE, that may be a hypothetical scenario, but I doubt it’s very real. Most people have a very long history of good contributions and therefore have built up a reputation that would be weighed against the reality on the ground. No one is accusing Lasse Collin of participating in this, even though arguably it could have been him all along for all anyone knows.
It doesn’t need to be perfect but directionally it probably helps more than it hurts.
All that being said, this clearly seems like a state actor which changes the calculus for any attempts like this since the funding and power is completely different than what most people have access to and likely we don’t have any really good countermeasures here beyond making it harder for obfuscated code to make it into repositories.
Don’t know all the details, and Rust isn’t immune to a build attack, but stuff like that tends to stand out a lot more, I think, in a build.rs than it would in some m4 automake soup.
It is done, but for this case the problem is partial compilation. For this you'd need methods to be tagged as pure, but that assumption needs to propagate and it could be violated by a library being upgraded.
Security and utility are in opposing balances often. The safest possible computer is one buried far underground without any cables in a faraday cage. Not very useful.
> We’re not inserting the security community into that loop and slowing things down just so people can download random programs onto their computers and run them at random. That’s just a stupid thing to do, there’s no way to make it safe, and there never will be.
Setting aside JavaScript, you can see this today with cloud computing, which has largely displaced private clouds. These services run untrusted code on shared computers. Fundamentally that’s what they’re doing, because that’s what you need for economies of scale, durability, availability, etc. So figuring out a way to run untrusted code on another machine safely is fundamentally a desirable goal. That’s why people are trying to do homomorphic encryption - so that the “safely” part can go both ways and neither the HW owner nor the “untrusted” SW needs to trust the other to execute said code.
That doesn’t make sense unless you have only 1 or 2 physical CPUs with contention. In a modern CPU the latter should be faster and I’m left unsatisfied by the correctness of the explanation. Am I just being thick or is there a more plausible explanation?
The latter is faster in actual CPU time; however, note that in TFA the measurement only starts when the program starts, not at the start of the pipeline.
Because the compilation time overlaps with the pipes filling up, blocking on the pipe is mostly excluded from the measurement in the former case (by the time the program starts there’s enough data in the pipe that the program can slurp a bunch of it, especially reading it byte by byte), but included in the latter.
My hunch is that if you added the buffered reader and kept the original xxd in the pipe you’d see similar timings.
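For concreteness, roughly the two variants being compared (a sketch, not the article’s actual code):

    use std::io::{self, BufReader, Read};

    fn main() -> io::Result<()> {
        // Unbuffered variant: every byte costs its own read(2) syscall on the pipe.
        //
        //     let mut stdin = io::stdin();
        //     let mut byte = [0u8; 1];
        //     while stdin.read(&mut byte)? == 1 { /* handle byte[0] */ }

        // Buffered variant: one syscall fills an 8 KiB userspace buffer and the
        // per-byte reads after that are served from memory.
        let mut reader = BufReader::new(io::stdin());
        let mut byte = [0u8; 1];
        while reader.read(&mut byte)? == 1 {
            // handle byte[0]
        }
        Ok(())
    }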
The amount of input data is just laughably small here to result in a huge timing discrepancy.
I wonder if there’s an added element where the constant syscalls are contending on a kernel mutex and that contention disappears if you delay the start of the program.
The idea that the overlap of execution here by itself plays a role is nonsensical. The overlap of execution + reading a byte at a time causing kernel mutex contention seems like a more plausible explanation, although I would expect someone more knowledgeable (& more motivated) about capturing kernel perf measurements to confirm. If this is the explanation, I'm kind of surprised that there isn't a lock-free path for pipes in the kernel.
Based on what you've shared, the second version can start reading instantly because "INFILE" was populated in the previous test. Did you clear it between tests?
Here are the benchmarks before and after fixing the benchmarking code:
On my machine, they're all equally fast at ~28 us. Clearly the changes only had an impact on machines with a different configuration (kernel version or kernel config or xxd version or hw).
One hypothesis outlined above is that when you pipeline all 3 applications, the single-byte-reader version is doing back-to-back syscalls and that's causing contention between your code and xxd on a kernel mutex, leading to things going to sleep extra long.
It's not a strong hypothesis though, just because of how little data there is and the fact that it doesn't repro on my machine. To get a real explanation, I think you have to actually do some profiling on a machine that can repro and dig in until you have a satisfying account of what exactly is causing the problem.
It depends on where the timing code is. If the timer starts after all the data has already been loaded, the time recorded will be lower (even if the total time for the whole process is higher).
I’m not following how that would result in a 10x discrepancy. The amount of data we’re talking about here is laughably small (it’s like 32 bytes or something)
I’ll admit to not having looked at the details at all, but a possible explanation is that almost all the time is spent on inter process communication overhead, so if that also happens before the timer starts (eg, the data has been transferred, just waiting to be read from a local buffer) then the measured time will be significantly lower.
I know you’re joking, but it wouldn’t really be that hard in terms of a lookup for played positions, which is what OP asked for - you could have bitboards for the relevant info, so finding a state would be a refined search starting with the occupancy bitboard (rough sketch below). Since you’re only storing games that have been played rather than all positions, the size of the index shouldn’t be outrageously large compared with other things already being stored.
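Very roughly, something like this (all names and types here are mine, purely to illustrate the occupancy-first lookup):

    use std::collections::HashMap;

    // Minimal position representation for the sketch: one occupancy bitboard
    // (which squares are busy) plus per-piece bitboards for the full state.
    #[derive(PartialEq, Eq)]
    struct Position {
        occupied: u64,     // one bit per square that has any piece on it
        pieces: [u64; 12], // one bitboard per (colour, piece type) pair
    }

    // Index from occupancy bitboard to the stored games that reached a position
    // with exactly that occupancy; the coarse key prunes most candidates cheaply
    // and a full comparison settles the rest.
    struct PositionIndex {
        by_occupancy: HashMap<u64, Vec<(Position, u64)>>, // (position, game id)
    }

    impl PositionIndex {
        fn new() -> Self {
            Self { by_occupancy: HashMap::new() }
        }

        fn insert(&mut self, pos: Position, game_id: u64) {
            self.by_occupancy.entry(pos.occupied).or_default().push((pos, game_id));
        }

        fn games_reaching(&self, pos: &Position) -> Vec<u64> {
            self.by_occupancy
                .get(&pos.occupied)
                .map(|bucket| {
                    bucket.iter()
                        .filter(|entry| &entry.0 == pos)
                        .map(|entry| entry.1)
                        .collect()
                })
                .unwrap_or_default()
        }
    }

The occupancy key throws away almost everything in one hash lookup, and only the handful of stored positions sharing that exact occupancy need the full piece-by-piece comparison.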