On Linux, I think the defaults are left up to the distros, so there is a chance of a privacy footgun there. Hopefully most distros follow the example set by Apple and Microsoft (a sentence I never thought I would write...)
All desktop/mobile OSes today use "stable privacy addresses" for inbound traffic (only relevant if you are hosting something long-term) and "temporary addresses" for outbound traffic and P2P (video/voice calls, multiplayer games...). The temporary ones change quickly; old ones stay assigned so long-lived connections don't break, but they aren't used for new connections.
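If you want to see which behavior a given distro picked, the relevant knob on Linux is the use_tempaddr sysctl (0 = no temporary addresses, 1 = generated but the stable address preferred for new outbound connections, 2 = generated and preferred). A minimal sketch that just reads it, assuming procfs is mounted:

    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        // Per-interface files under /proc/sys/net/ipv6/conf/<iface>/ can
        // override this "default" value, so check those too if in doubt.
        const std::string path = "/proc/sys/net/ipv6/conf/default/use_tempaddr";
        std::ifstream f(path);
        int value = -1;
        if (!(f >> value)) {
            std::cerr << "could not read " << path << "\n";
            return 1;
        }
        std::cout << "use_tempaddr = " << value
                  << (value >= 2 ? " (temporary addresses preferred)\n"
                      : value == 1 ? " (generated, but stable preferred)\n"
                                   : " (temporary addresses disabled)\n");
        return 0;
    }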
NAT only matters insofar as you don't technically need a firewall to block incoming traffic: if a packet fails the NAT lookup, you know to drop it.
But from a security standpoint you can do the same connection tracking without NAT and get the same result. At that point it is technically just a stateful firewall.
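Rough sketch of what I mean by "the same tracking" (the packet struct and flow key here are simplified and hypothetical; a real conntrack uses the full 5-tuple plus timeouts and TCP state):

    // A stateful filter that, like a NAT's translation table, only admits
    // inbound packets belonging to a flow we have already seen outbound.
    #include <cstdint>
    #include <set>
    #include <string>
    #include <tuple>

    // (protocol, remote address, remote port, local port) identifies a flow
    using FlowKey = std::tuple<uint8_t, std::string, uint16_t, uint16_t>;

    struct Packet {                // hypothetical decoded packet
        uint8_t     proto;
        std::string remote_addr;   // the non-local endpoint
        uint16_t    remote_port;
        uint16_t    local_port;
        bool        outbound;
    };

    class StatefulFilter {
        std::set<FlowKey> flows_;
    public:
        // Returns true if the packet should be forwarded, false if dropped.
        bool admit(const Packet& p) {
            FlowKey key{p.proto, p.remote_addr, p.remote_port, p.local_port};
            if (p.outbound) {
                flows_.insert(key);   // remember the flow, exactly like a NAT entry
                return true;
            }
            // Inbound: allowed only if it matches an existing flow -- the same
            // lookup a NAT does, minus the address rewriting.
            return flows_.count(key) > 0;
        }
    };

    int main() {
        StatefulFilter fw;
        fw.admit({6, "2001:db8::1", 443, 50000, true});             // outbound: remembered
        bool ok  = fw.admit({6, "2001:db8::1", 443, 50000, false}); // reply: admitted
        bool bad = fw.admit({6, "2001:db8::2", 443, 50000, false}); // unsolicited: dropped
        return (ok && !bad) ? 0 : 1;
    }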
Half-serious reason: because with each C++ version, we seem to get less and less of what we want and more and more inefficiency, both in language design and compiler implementation. Are we even at feature-completeness for C++20 on major compilers yet? (In an actually usable, bug-free way, not an on-paper "completion".)
Feature-complete is a pretty hard goal to reach. It sounds like "added all the features" but is closer to "bug-compatible across compilers" (not saying there are bugs, just that recent versions of the standard have removed a lot of wiggle room for implementations).
Also, modules were a huge undertaking and are kind of the reason it took so long. They are wonderful and I want them, but proper implementations (even with many details being implementation-defined) required a lot of work to figure out.
Most of the time the compilers get ahead of the actual release, but in this case there were so many uncertainties that only rough implementations were available beforehand. Then, post-release, they effectively had to change how they handle incremental compilation in user-facing ways.
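For reference, the language-level surface of modules is tiny; the pain is almost entirely in the toolchain. Roughly something like this (file names, extensions, and build flags are all compiler-specific):

    // math.cppm -- a module interface unit (the extension itself is a
    // compiler/build-system convention, not part of the language)
    export module math;

    export int add(int a, int b) { return a + b; }

    // main.cpp -- a normal translation unit that imports it. The compiler
    // needs the compiled interface of 'math' before it can build this file,
    // which is why dependency scanning and incremental builds got hard.
    #include <iostream>
    import math;

    int main() { std::cout << add(2, 3) << '\n'; }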
The compiler design is definitely becoming more complicated, but the language design has become progressively more efficient and nicer to use. I’ve been using C++20 for a long time in production; it has been problem-free for years at this point. It is not strictly complete, e.g. modules still aren’t usable, but you don’t need to wait for that to use it.
Even C++23 is largely usable at this point, though there are still gaps for some features.
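To give a concrete idea of what "usable C++20" means in practice, core features like concepts and ranges have compiled cleanly on recent GCC/Clang/MSVC for a good while now. A small sketch:

    #include <concepts>
    #include <iostream>
    #include <ranges>
    #include <vector>

    // A constrained template: the requirement is part of the signature,
    // so misuse fails at the call site with a readable error.
    template <std::integral T>
    T sum_of_even(const std::vector<T>& v) {
        T total{};
        for (T x : v | std::views::filter([](T n) { return n % 2 == 0; }))
            total += x;
        return total;
    }

    int main() {
        std::vector<int> v{1, 2, 3, 4, 5, 6};
        std::cout << sum_of_even(v) << '\n';  // prints 12
    }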
Funny how gcc seems to be the top dog now; what happened to clang? I thought their codebase was supposed to be easier and more pleasant to work with. Or maybe the more hardcore compiler devs just work on gcc?
If you assume AGI that is better than humans and effectively free, of course it seems better.
But your assumptions are based on an idealized thing, unrelated to anything that has actually been shown.
No one is going to pay your wage for an AI, full stop; you transition for cost savings, not because you "might as well". Also, given that most AI cost is in training, you likely still wouldn't transition, since the capital investment is painful.
Robotics isn't new, but it hasn't destroyed blue-collar work yet (the US mostly lost blue-collar jobs for other reasons, not because of robotics), especially since robotics is very inflexible, which leads to impedance problems whenever you have to adapt.
Mostly, though, I would say the problem with your argument is that it basically boils down to nihilism. If a supposed inevitability that you have no control over has a chance of happening, you should generally not worry about it. It isn't like there are meaningful actions to take in your hypothetical, so it isn't important.
Ah yes, people were making emulators because emulators weren't a solved problem...
That isn't why people made emulators. It's because an emulator is an easy-to-solve problem that is tricky to get right, and it provides as much testable space as you are willing to spend time working on.
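To make that concrete: the core loop of even a toy emulator is a few lines, but every opcode brings its own edge cases (wrapping, flags, timing) that you can keep writing tests against for as long as you care to. A sketch with a made-up ISA, nothing real:

    #include <algorithm>
    #include <array>
    #include <cstdint>
    #include <cstdio>

    struct CPU {
        std::array<uint8_t, 256> mem{};
        uint8_t a  = 0;   // accumulator
        uint8_t pc = 0;   // program counter

        // Fetch-decode-execute one instruction; returns false on halt.
        bool step() {
            uint8_t op = mem[pc++];
            switch (op) {
                case 0x01: a = mem[pc++]; return true;              // LOAD imm
                case 0x02: a = uint8_t(a + mem[pc++]); return true; // ADD imm (8-bit wraparound: easy to get wrong)
                case 0x03: mem[mem[pc++]] = a; return true;         // STORE addr
                default:   return false;                            // HALT / unknown opcode
            }
        }
    };

    int main() {
        CPU cpu;
        // LOAD 200; ADD 100 (wraps to 44); STORE to address 0x10; HALT
        const uint8_t program[] = {0x01, 200, 0x02, 100, 0x03, 0x10, 0x00};
        std::copy(std::begin(program), std::end(program), cpu.mem.begin());
        while (cpu.step()) {}
        std::printf("mem[0x10] = %d\n", cpu.mem[0x10]);  // 44
    }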
The only difference is that most ISPs rotate IPv4 addresses but not IPv6 prefixes.
Heck, IPv6 allows more rotation of addresses since the address space is so much larger: a single /64 gives a host 2^64 interface identifiers to rotate through, which is exactly what temporary addresses do.