
"But since the fundamental level of reality is based on non-algorithmic understanding, the universe cannot be, and could never be, a simulation."

The same may apply to "intelligence" --- aka AGI.

As far as I know, there is no proof that AGI can be produced or simulated by a binary logic algorithm running on a finite computer.

Hence, some people support the idea of "emergence" --- aka alchemy, aka PFM --- Pure Friggin Magic.





I'm pretty sure there is an existence proof that intelligence can be produced by quaternary logic [1] running on a finite machine.

On the other hand, looking at the state of the world, some may have their doubts.

[1] (A,T,G,C) https://en.wikipedia.org/wiki/Genetic_code ; https://en.wikipedia.org/wiki/DNA_and_RNA_codon_tables#Stand...
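
To make the four-letter point concrete, here's a toy sketch in Python using a handful of entries from the standard codon table linked above (the translate helper and its behaviour are purely illustrative, not any real bioinformatics API):

    # A few entries from the standard genetic code: 4 letters, read 3 at a time,
    # giving 4^3 = 64 codons that map onto ~20 amino acids plus start/stop signals.
    CODONS = {
        "ATG": "Met (start)",
        "TTT": "Phe",
        "GGC": "Gly",
        "TAA": "stop",
    }

    def translate(dna):
        """Read a DNA string three letters at a time until an unknown or stop codon."""
        out = []
        for i in range(0, len(dna) - 2, 3):
            aa = CODONS.get(dna[i:i + 3])
            if aa is None or aa == "stop":
                break
            out.append(aa)
        return out

    print(translate("ATGTTTGGCTAA"))   # ['Met (start)', 'Phe', 'Gly']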


Emergence and Reduction are not all that mysterious

Say you want to see what a car is made of. You can take it apart (reduce it) into parts on the workshop floor. Now you know what it's made of.

But you have to put it back together again before you have something you can start and drive away (emergent properties).

At no point does anything magical happen:

parts x organization_of_the_parts <-> the working car

reduction <-> emergence


At no point does anything magical happen

Correct.

It is still just a collection of inanimate parts. At no point does it suddenly come to possess any properties that cannot be explained as such.

Now, apply the same logic to a computer and explain how AGI will suddenly just "emerge" from a box of inanimate binary switches --- aka a "computer" as we know it.

Regardless of the number of binary switches, how fast they operate, or how much power is consumed in its operation, this inanimate box we call a "computer" will never be anything more than what it was designed to be --- a binary logic playback device.

Thinking otherwise is not based on logic or physics.


I think we might be using "emergence" differently, possibly due to different philosophical traditions muddying the waters.

I'm going to stick purely to a workable definition of emergence for now.

Also, let me try a purely empirical approach:

You said the car "never possesses any properties that can't be explained as a collection of parts." But consider: can that pile of parts on the workshop floor transport me over the Autobahn to Munich at 200 km/h?

We can try sitting on top of the pile while holding the loose steering wheel up in the air, making "vroom vroom, beep beep" noises, but I don't think we'll get very far.

On the other hand, once it's put (back) together, the assembled car most certainly can! That's a measurable, testable difference.

That (the ability of the assembled car to move and go places) is what I call an emergent property. Not because it's inexplicable or magical, but simply because it exists at one level of organization and not another. The capability is fully reducible to physics, yet it's not present in the pile.

parts × organization → new properties

That's all I mean by emergence. No magic, no strong metaphysical claims. Just the observation that organization matters and creates new causal powers.
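
For what it's worth, here's a toy sketch of that distinction in Python (the AssembledCar class and its drive method are made up purely for illustration, not anyone's real code):

    # A pile of parts: you can inspect it, weigh it, count it -- but it has no drive().
    parts = ["engine", "wheel", "wheel", "wheel", "wheel", "steering wheel", "chassis"]

    class AssembledCar:
        """The same parts plus an organization: a new capability now exists."""
        def __init__(self, parts):
            self.parts = parts

        def drive(self, km):
            return f"drove {km} km toward Munich"

    car = AssembledCar(parts)
    print(car.drive(200))        # works: the capability lives at this level of organization
    # parts.drive(200)           # AttributeError: a mere list of parts has no such capability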

Or, here's another way to see it: compare differentiation and integration. When you differentiate a formula, constant terms drop off the right-hand side. Integration acknowledges that loss in the form of an integration constant. No one considers integration constants to be magical; they simply stand in for information that was lost when we differentiated.
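
Here's that analogy worked through as a quick sketch (using sympy, assuming it's available; the formula x**2 + 5 is just an arbitrary example):

    import sympy as sp

    x = sp.symbols("x")
    f = x**2 + 5                 # the original formula
    df = sp.diff(f, x)           # 2*x -- the constant 5 has been lost
    F = sp.integrate(df, x)      # x**2 -- sympy omits the constant of integration
    print(f, df, F)              # the missing "+ C" is where the lost information would go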


You can achieve human level intelligence if you can simulate a human brain with sufficient fidelity

OK, now do that in reality, not just in theory.

86 billion neurons, 100 trillion connections, and each connection modulated by dozens of different neurotransmitters and action potential levels and uncounted timing sequences (and that's just what I remember off the top of my head from undergrad neuroscience courses decades ago).

It hasn't even been done for a single pair of neurons, because not all of the variables are yet understood. All the neural nets use only the most oversimplified version of what a neuron does: essentially a weighted sum pushed through an activation function, with training-adjusted weights.
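
For comparison, this is roughly everything a single unit in a standard artificial neural net computes (a minimal sketch, not any particular library's implementation):

    import math

    def artificial_neuron(inputs, weights, bias):
        # The entire "neuron": a weighted sum squashed through an activation function.
        # No neurotransmitters, no ion channels, no spike timing -- just arithmetic.
        z = sum(i * w for i, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

    print(artificial_neuron([0.5, 0.2], [0.8, -1.5], 0.1))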

Even assuming all the neurotransmitters, action potentials, timing sequences, and internal biochemistry of each neuron type (and all the neuron-supporting cells) were understood and simulatable, and even using every one of the roughly 250 million GPUs shipped in 2024 [0] to each simulate a single neuron with all its connections, neurotransmitters, and timings, it would take 344 years of shipments to accumulate the 86 billion GPUs needed to simulate one brain.

Even if the average connection between neurons is one foot long, simulating 100 trillion connections implies about 18 billion miles of wire. Even if the average connection is only 0.3 mm, that's still about 18 million miles of wire.
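
As a sanity check on the GPU-count and wire-length figures, a quick back-of-the-envelope in plain Python (using the numbers above; the ~250 million GPUs/year figure is from [0]):

    NEURONS = 86e9
    CONNECTIONS = 100e12
    GPUS_PER_YEAR = 250e6                      # approximate 2024 shipment figure from [0]

    print(NEURONS / GPUS_PER_YEAR)             # ~344 years of shipments at one GPU per neuron
    print(CONNECTIONS * 1 / 5280)              # ~1.9e10 miles of "wire" at one foot per connection
    print(CONNECTIONS * 0.3e-3 / 1609.34)      # ~1.9e7 miles at 0.3 mm per connection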

I'm not even going to bother back-of-the-envelope calculating the power to run all that.

The point is it is not even close to happening until we achieve many orders of magnitude greater computation density.

Will many useful things be achieved before that level of integration? Absolutely; even these oversimplified neural nets are already producing useful things.

But just as we can conceptually imagine faster-than-light travel, imagining full-fidelity human brain simulation (which is not the same as good-enough-to-be-useful or good-enough-to-fool-many-people) is only maybe a bit closer to reality.

[0] https://www.tomshardware.com/tech-industry/more-than-251-mil...


“We’ve already agreed what you are, now we’re just haggling about the price.”

As with angels on the head of a pin, the interesting argument is whether the amount of compute is finite or not, not how finite it is.


Well, the amount of compute is certainly finite in this era. 250 million GPUs in a year is a big number, but clearly insufficient even for current demand from LLM companies, who are also buying up available memory chips and rapidly driving up prices in general. So the current supply is definitely finite, and limited in very practical ways.

And, considering the visible universe is also finite, with finite amounts of matter and energy, it would follow that the ultimate quantity of compute is also finite, unless there is an argument for compute without energy or matter, and/or for unlimited compute being made available from outside the visible universe or our light cone. I don't know of any such valid arguments, but perhaps you can point to some?


This is exactly what some people (Musk for example) thought about the universe --- it could be a computer simulation.

These physicists say they have *mathematical* proof that this is not possible.


They have mathematical proof that we cannot simulate our own universe.

We don't even know if this is true. See Peter Hacker's mereological fallacy.

Do you have anything supporting this?

What level of fidelity (granularity) are you referring to?


If something like Orch OR is correct, then maybe not.

That's the theory, yep.

There's also no proof that intelligence cannot be produced by an algorithm. Given the evidence so far (LLMs seem able to beat average humans at most tests and exams), it seems quite likely that it can.


