Maybe I am just out of my depth, but I don't understand what problem quantum Darwinism is solving. The Schrödinger equation already explains why observers seem to agree: the ones that don't are separated from each other.
This article is making some pilot-wave-like claim on top of quantum Darwinism that while the Schrödinger equation is real, all the 'real realness' exists in some pointer to a specific location inside it. Why does it do this? Where does this claim come from? At least collapse theories allow that the thing the Schrödinger equation is modelling is actually real up until the part God gets out his frustum culler.
I think the claim is this: the wave function never collapses. However, the effect of the wave function on the environment quickly converges to only one of the two states. We could not know the difference because we cannot directly observe the wave function; we only see the result as it is magnified onto a macro scale by our observation equipment (or, lacking that, our eyes, which themselves turn a tiny microscopic phenomenon into macro signals). Once that particular outcome has been 'selected' for, the probability of the other outcome becomes vanishingly small very fast. Thus, all future observations show that outcome, even though the underlying reality is still the fully entangled state.
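To make that "vanishingly small very fast" concrete, here's a toy decoherence sketch (my own construction, not from the article): if each of N environment qubits picks up a small state-dependent rotation, the overlap between the two branch-environments falls off exponentially in N, so the interference terms die without anything ever collapsing.

```python
# Toy model (my construction): per-qubit branch overlap c slightly
# below 1, so the total overlap |<E0|E1>| = c**N. The off-diagonal
# (interference) terms vanish exponentially; no collapse is needed.
import math

c = math.cos(0.1)  # assumed per-qubit overlap for a weak interaction
for n in (1, 10, 100, 1_000, 10_000):
    print(f"N = {n:>6}: interference visibility ~ {c**n:.3e}")
```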
Photons (and other objects that seem to behave 'quantumly') do not seem subject to this (and thus we can use them to understand quantum behavior) because they have particular properties that make their behavior far less affected by these macroscopic drop-offs.
My confusion is that this is just Many Worlds / the Schrödinger equation, and Quantum Darwinism doesn't seem to add anything that wasn't already obvious by inspection. But after reading more, I think that's kind of the point? It's ultimately just an argument for why the Schrödinger equation produces these locally classical regions, plus a bunch of overly flowery prose and dressing up in invented jargon that can mostly be ignored. I think the article failed to ignore that second part and ended up confused.
Many worlds is just the claim that the Schrödinger equation holds in actuality.
I don't think QD makes decisions 'uniquely'. Take this quote,
> The step from the epistemic (“I have evidence of |π_17〉.”) to ontic (“The system is in the state |π_17〉.”) is then an extrapolation justified by the nature of ρ_Sℰ: Observers who detected evidence consistent with |π_17〉 will continue to detect data consistent with |π_17〉 when they intercept additional fragments of ℰ. So, while the other branches may be in principle present, observers will perceive only data consistent with the branch to which they got attached by the very first measurement. Other observers that have independently “looked at” S will agree.
Emphasis on "the other branches may be in principle present" — the claim at least in this paper can't be that all branches agree, just that they agree locally.
Without defining what 'actuality' is, there's no meaning to 'the Schrödinger equation holds in actuality'. In their own way, all interpretations of quantum mechanics claim the Schrödinger equation holds in 'actuality'. Some treat probability and potentiality as themselves claims about 'actuality'. Others dismiss that, view probability with skepticism, and conclude the wave function itself must therefore be literally real. This is an ontological argument, not a scientific one.
If you don't like the word 'actuality', I can rephrase. Many worlds is just the claim that physical reality materially evolves in correspondence with the Schrödinger equation.
If you want to quibble over what it means for something to be material, go ahead, but unless you can tie it to some specific claim being made about QD I don't really know what the exercise gets you.
This is missing the primary reasons insider trading is bad: it creates an incentive for information theft from employers and, worse, an incentive for sabotage.
Yes, a strange comment. Opus 4.5 is significantly better than what came before, and Opus 4.6 is better still. Same with the 5.2 and 5.3 Codex models.
If anything, the pace has increased.
This may be one of the most important graphs to keep an eye on: https://metr.org/. It tracks well with my anecdotal experience.
You can see the industry did hit a bit of a wall in 2024 where the improvements drop below the log trend. However, in 2025 the industry is significantly _above_ the trend line.
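To put rough numbers on what "the trend line" implies, here's a toy extrapolation, assuming the roughly seven-month doubling time in task horizon that METR reports. The starting horizon below is a made-up illustrative value, not METR's data.

```python
# Toy extrapolation of a METR-style exponential trend. The ~7-month
# doubling time is METR's headline figure; the starting horizon is an
# assumed illustrative value, not their data.
doubling_months = 7.0
horizon_now_hours = 1.0  # assumed 50%-success task horizon today

for months_ahead in (0, 12, 24, 36):
    horizon = horizon_now_hours * 2 ** (months_ahead / doubling_months)
    print(f"+{months_ahead:2d} months: ~{horizon:.0f} hour tasks at 50% success")
```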
Are you seeing any meaningful improvements to anything you use, though? Have self-driving cars become really cheap and commonplace? Has medicine improved? Is Netflix giving us an abundance of cheap, really good content to watch? How is your AI doctor?
The geeks are telling us the LLMs are great, but that's about it.
I'm seeing way more AI-generated YouTube thumbnails... I know you'll say "give it time", but I'm pretty convinced the problems AI solves are not the hard problems required to boost an economy.
I see these claims in a lot of anti-LLM content, but I’m equally puzzled. The pace of progress feels very fast right now.
There is some desire to downplay or dismiss it all, as if the naysayers are going to get their “told you so” moment and it’s just around the corner. Yet the goalposts for that moment just keep moving with each new release.
It’s sad that this has turned into a culture war where you’re supposed to pick a side and then blind yourself to any evidence that doesn’t support your chosen side. The vibecoding maximalists do the same thing on the other side of this war, but it’s getting old on both sides.
Yeah, I feel that too. It'd be great if people acknowledged the progress without turning it into polarized movements and numerous discussions about how we all lag behind...
I mean, if you compare now to a year ago, then a year ago to two years ago, and then two years ago to three years ago, would you see a plateau in effectiveness or not?
I still have several projects I developed in mid-2024 where I felt the AI was really close but not quite good enough for production, and almost two years on it hasn't gotten appreciably better: I'm still not at the point where I could release an actual application.
Have you been around a Waymo as a pedestrian? Used one recently? I have never felt as safe around any car as I do around Waymos.
It can feel principled to take the critical stance, but ultimately the authorities are going to have complete video of the event, and penalizing Waymo over this out of proportion to the harm done is just going to make the streets less safe. A 6mph crash is best avoided, but it's a scrap, it's one child running into another and knocking them over, it's not _face jail time_.
> the companies owning and operating those AIs would go out of business as no one would be able to afford the products made by the AIs
What do you think money is...?
Money is a way to indirectly trade labour and goods. If a job is automated, that labour doesn't disappear into the aether; it's still in the tradable pot of total goods and services. You cannot empty a pot by filling it. A world where a company, through automation, has left itself nobody else to productively sell to is a world where _by definition_ it owns all the output it could otherwise have traded for.
> The volume of space from the ground to 50,000 feet is about 200x smaller than the volume from the Karman line to the top of LEO alone (~2,000 km).
Volume is the natural way to assume space scales, but it's incorrect. Two planes can fly parallel, side by side. Two satellites cannot orbit side by side.
In the limit, if Earth had a solid ring of infinitesimal width, it would occupy zero volume but use up all the orbits at that altitude.
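(For what it's worth, the arithmetic in the quote is roughly right, which is exactly why the choice of metric matters more than the number. A quick shell-volume check, a sketch assuming a mean Earth radius of 6371 km, gives about 170x:)

```python
# Back-of-envelope check of the quoted ratio (my numbers, assuming a
# mean Earth radius of 6371 km): spherical-shell volumes in km^3.
import math

R = 6371.0

def shell(lo_km: float, hi_km: float) -> float:
    """Volume of the spherical shell between two altitudes."""
    return (4 / 3) * math.pi * ((R + hi_km) ** 3 - (R + lo_km) ** 3)

airspace = shell(0.0, 15.24)   # ground to 50,000 ft (~15.24 km)
leo = shell(100.0, 2000.0)     # Karman line to ~top of LEO

print(f"airspace: {airspace:.3e} km^3")
print(f"LEO:      {leo:.3e} km^3")
print(f"ratio:    {leo / airspace:.0f}x")  # ~170x, same ballpark as ~200x
```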
The paper says we are 2.8 days away from a collision. It doesn't say we're '2 days away from Kessler'. In fact, the paper explicitly warns against your interpretation.
> We emphasize that the CRASH Clock does not measure the onset of KCPS, nor should it be interpreted as indicating a runaway condition.
The stock market learns from experience, because it's made of people who learn from experience.
Imagine an investor's experience with TSLA. From the beginning, they're flooded with news reports about 'fundamentals' this, 'fundamentals' that, about how Tesla will imminently collapse, how it's a scam, yada yada. Said investors _constantly_ see themselves proven right and those skeptics wrong. Tesla is in fact disrupting an industry. They really are just continuing to scale. Marginal profitability keeps going up. Their cars keep getting better. FSD keeps getting better. The competition that people kept pointing at kept failing to materialize. None of this seems to change the skeptics' tune.
Tesla is actually in a materially worse position than it was a few years ago, by many metrics, but the stock price isn't set by 'fundamentals', it's set by the people setting demand for the stock. With TSLA, this is disproportionately going to be people who have learned to and gotten rich from ignoring the people loudly telling them why investing in Tesla is a bad idea.
A market will correct eventually, but corrections either require people to change their minds or run out of capital. Neither has happened yet, so the market can't correct.
Indeed, Bayesian approaches need effort to correct bad priors, and indeed the original hypothesis was bad.
That said. First, in defense of the prior: it is infinitely more likely that the probability is exactly 0.5 than that it is any individual uniformly chosen number to either side, and there are causal mechanisms that can explain exactly even splits. I agree that it's much safer to use broader priors that can at least approximate any precise simple hypothesis and will learn any 'close enough' match, but putting some privileged probability mass on 0.5 is not crazy, and can even be nice as a reference to help you check the power of your data.
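As a minimal sketch of what that privileged mass buys you (my construction, using a standard conjugate identity): compare a point mass at p = 0.5 against a uniform prior via the Bayes factor for k successes in n trials. The marginal likelihood under the uniform prior is exactly 1/(n+1).

```python
# Sketch (my construction): Bayes factor for H0: p = 0.5 exactly,
# versus H1: p ~ Uniform(0, 1). Under H1 the marginal likelihood of
# k successes in n trials is exactly 1/(n+1); under H0 it is
# C(n, k) * 0.5**n.
import math

def log10_bayes_factor(k: int, n: int) -> float:
    """log10 of P(data | H0) / P(data | H1); positive favours p = 0.5."""
    log_h0 = math.log10(math.comb(n, k)) + n * math.log10(0.5)
    log_h1 = -math.log10(n + 1)
    return log_h0 - log_h1

print(log10_bayes_factor(500, 1000))  # ~ +1.4: even split favours the point mass
print(log10_bayes_factor(600, 1000))  # ~ -7:   lopsided data favours the continuum
```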
One really should separate out the update part of Bayes from the prior part of Bayes. The data fits differently under a lot of hypotheses. Like, it's good to check expected log odds against actual log odds, but Bayes updates are almost never going to tell you that a hypothesis is "true", because whether your log loss is good is relative to the baselines you're comparing it against. Someone might come up with a prior on the basis that particular ratios are evolutionarily selected for. Someone might come up with a model that predicts births sequentially using a genomics-over-time model and get a loss far better than any of the independent random variable hypotheses. The important part is the log-odds of hypotheses under observations, not the posterior.
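A toy version of that scoring discipline (my example, with a made-up 'true' rate): accumulate log-likelihoods for competing hypotheses over a sequence and compare them pairwise. The scores only rank the pool you brought; none of them certifies a hypothesis true.

```python
# Toy example (my construction): score competing birth-ratio hypotheses
# by cumulative log-likelihood on a simulated sequence. The result is a
# log-odds ranking within the pool, not a verdict that any entry is true.
import math
import random

random.seed(0)
true_p = 0.512  # assumed "true" boy rate for the simulation
data = [random.random() < true_p for _ in range(100_000)]

hypotheses = {"p=0.500": 0.500, "p=0.512": 0.512, "p=0.550": 0.550}
scores = {name: 0.0 for name in hypotheses}
for boy in data:
    for name, p in hypotheses.items():
        scores[name] += math.log(p if boy else 1.0 - p)

best = max(scores, key=scores.get)
for name, score in scores.items():
    # log-odds relative to the best hypothesis in the pool
    print(f"{name}: {score - scores[best]:+8.1f} nats vs {best}")
```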