Hacker News | shinycode's comments

Even if LLMs are one day updated autonomously, they started from us, from our knowledge. The human brain « is smart »: it's wired to adapt to any culture or body of knowledge. We grow smarter from experience, but an LLM can't do that. I can't teach Claude something today that it will use with you tomorrow; it has to be retrained, with a knowledge cutoff at some point. Even if the technology catches up and the machine becomes more autonomous, what says such a machine would ever want to integrate into our society or share anything with us? It has eternity, as long as there is electricity. Why would it want anything to do with humans if you go that way? And if it's really conscious, should we then consider it a slave? Why couldn't « it » have fundamental rights and the freedom to do whatever it wants?


Humans have a mechanism for making live changes to their neural network and cleaning up messes while sleeping. I see no reason LLMs couldn't do this too, other than that it is resource intensive (a cost that will continue to come down).
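The "live update" idea can at least be sketched as online learning: one small gradient step per new example instead of a monolithic retraining run. This is a toy linear model, not a transformer, and every number in it is an illustrative assumption.

```python
# Toy sketch of continual "live" weight updates: the model absorbs each
# new experience with a single SGD step, no full retraining. Illustrative
# only -- real LLM weight updates are far more involved than this.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)   # toy model: prediction = w . x
lr = 0.1

def online_step(w, x, y):
    """One SGD step on squared error for a single (x, y) example."""
    pred = w @ x
    grad = 2 * (pred - y) * x   # gradient of (pred - y)^2 w.r.t. w
    return w - lr * grad

target = np.array([1.0, -2.0, 0.5])  # "true" weights to be learned

# Stream of experiences: the model updates live after each one.
for _ in range(500):
    x = rng.normal(size=3)
    w = online_step(w, x, x @ target)

print(np.round(w, 2))  # approaches [1.0, -2.0, 0.5]
```

The point of the sketch is only that incremental updates are mechanically cheap per example; the open question in the thread is whether anything like sleep-style consolidation can be layered on top.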


The analogy holds technically, but there’s a missing piece: the brain doesn’t just update weights, it does so guided by experience that matters to a situated, embodied agent with drives and stakes. Sleep consolidation isn’t random cleanup, it’s selective based on salience and emotion. An LLM updating more efficiently is progress, but it’s still optimizing a loss function. Whether that ever approximates what the brain does during sleep depends entirely on whether you think the what (weight updates) is sufficient, or whether the why (relevance to a lived experience) is what makes it meaningful. So yes, the resource argument will weaken over time. But the architectural gap may be deeper than just compute.


Physicalists say consciousness emerges from matter. The other camp says matter comes from consciousness. Federico Faggin, co-inventor of the microprocessor, argues that consciousness cannot emerge from matter because matter is inert and not self-conscious, and so cannot produce it. Who's right and who's wrong? Time will tell. But it is equally wrong to claim that consciousness emerges from matter until it is proven (a.k.a. the “hard problem of consciousness”).


> But it is also wrong to claim that consciousness emerges from matter until it is proven

How would you prove if it did? What kind of proof would you accept?


The same kind of proof we accept for any scientific claim: converging, reproducible evidence that rules out competing explanations.

Concretely: we already have indirect evidence, in that conscious states vary predictably with brain states. Damage specific regions, lose specific functions; alter chemistry, alter experience. This is not proof, but it is systematic dependence, which is exactly what emergence predicts. Stronger evidence would look like precise, bidirectional mappings between neural activity and reported experience, to the point where you could reliably read subjective states from brain data or induce specific experiences through targeted stimulation. We're already moving in that direction.

The hardest bar would be building a system from physical components, having it report coherent subjective experience, and being able to explain why that configuration produces experience while others don’t. That’s the hard problem: and no, we’re not there yet. And it’s worth being honest: we’ve been assuming physicalism will eventually solve it, but there’s no guarantee that’s true rather than hopeful. The fact that brain states correlate with conscious states doesn’t explain why there is something it is like to have those states. Correlation is not mechanism.

But here’s the key point: you’re implicitly holding emergence to a standard of certainty that no scientific theory meets. We don’t have that standard of proof for evolution, gravity, or quantum mechanics either. We have overwhelming evidence that makes alternatives implausible.

So the question isn’t “can you prove it beyond all doubt?” It’s “does the evidence favor it over alternatives?” Right now, it does — but that’s a pragmatic verdict, not a metaphysical one. Idealist frameworks like Kastrup’s or Faggin’s remain serious contenders. The debate is more open than mainstream science often admits.


> The hardest bar would be building a system from physical components, having it report coherent subjective experience

So if I fine-tune an LLM in a loop to tell you that it is feeling a coherent subjective experience, would you accept that?

Does that mean that no dog has ever been conscious, because they cannot report a coherent subjective experience? (Because they can’t report anything at all. Being non-verbal.)

> you’re implicitly holding emergence to a standard of certainty that no scientific theory meets.

Wtf? I asked what kind of proof would you accept. How is that holding anyone to any kind of standard? Let alone one which is too high.


Yeah you’re raising three good points and they all land. On the finetuned LLM: you’re right, that criterion was flawed. A system trained to report experience proves nothing about whether experience is present, which is actually the core of the hard problem. No behavioral output alone can confirm inner experience. That applies to LLMs, and technically to other humans too. On dogs, also a fair correction. We don’t actually require verbal report to attribute consciousness to animals, we use behavioral and physiological evidence. So "coherent verbal report" was too narrow.

Better criterion: a system whose overall architecture and behavior is consistent with experience, not just one that says the right words.

On the standard of proof: that was a rhetorical deflection and you’re right to call it out. You asked a genuine question and got it turned back on you. And you’re pointing at something real: in science, strong correlation is not accepted as proof when stricter evidence is achievable. The reason we settle for correlation here isn’t because it’s sufficient, it’s because subjective experience may make stronger proof structurally inaccessible. But it’s also worth noting that scientific consensus has a poor track record of admitting this honestly. Dominant paradigms tend to defend themselves long past the point where the cracks are visible, physicalism on consciousness is no exception. The confidence with which emergence is presented often reflects institutional momentum as much as evidence.


So some kind of ether conscious energy animated cells to fight entropy?


Not necessarily ether, but the serious version of the argument is that life consistently acts against local entropy in purposeful ways, and pure physics doesn't obviously explain why matter would “want” to do that. Consciousness as an organizing principle is one answer. It's speculative, but it's not obviously wrong.


What is self-consciousness? I'm waiting for Federico's definition.


I mean, the nature of subjectivity prevents you from knowing anything but your own experience. There is not any objective evidence that could truly distinguish solipsism from panpsychism, so philosophically you need to ask a different question to hope to get a useful answer.


That’s a genuinely strong point. You can only verify consciousness from the inside, your own. Everything else is inference. No objective measurement can definitively distinguish “other minds exist” from solipsism. That’s not a bug in the argument, it’s a fundamental epistemic limit. Which is exactly why this question may never be fully resolved empirically.


I don’t know how much of an anecdote this is, but all the non-tech people I talk to about AI only know ChatGPT. To them, the competition is either nonexistent or the same thing. Among those people, no one wants to pay for the service; they just stop using it when the limits are reached. I can’t say which users could turn the market around, but ChatGPT is indeed burned into the minds of many, and because they don’t care about tech and aren’t interested in it, it seems they won’t look for any other service. Even after many discussions, they don’t remember the names of the other AIs I told them about.


I would bet 100% of those people have either an Apple or an Android phone in their pocket. Android users already have easy access to Gemini, and Apple's Siri is going LLM soon enough as well.

Google and Apple just need to push their AI assistants hard enough, and most of the moat OpenAI has will be gone.


Apple licensed Gemini, so both Android and iPhone will point to Google's AI.

https://www.bloomberg.com/news/articles/2025-11-05/apple-pla...


The only two models I ever hear non technical people mention are ChatGPT and occasionally Gemini


There are complaints that some Volvo cars' lidar has damaged iPhone cameras. It’s not even clear whether Apple covers those under warranty. We’ve seen car-review YouTubers get their iPhone camera sensors damaged on camera (captured by a second camera) while reviewing.


One such review where Marques shows how it happened to his phone

https://youtube.com/shorts/oeHtfMFdzIY?si=cANJDT5BLfdd9ZUT


One highlight from the video: he says most cameras are fine; it's just iPhones that don't have a very good IR filter. That sounds right; in my experience most cameras have pretty substantial IR filters that have to be removed if you want to photograph IR.

I also wonder if the smaller sensor size on phones contributes, since the energy is being focused onto a smaller spot.

Either way, for that to happen he was filming the LIDAR while active, for a decent amount of time, from right next to the car. I assume under normal conditions it wouldn't be running constantly while the vehicle is stationary?
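On the spot-size point, a quick back-of-envelope helps: for the same optical power reaching the focal plane, irradiance scales with the inverse square of the focused spot diameter. All numbers below are hypothetical illustrations, not measured values for any phone or lidar.

```python
# Irradiance (W/m^2) for the same beam power focused onto different
# spot sizes. Halving the spot diameter quadruples the irradiance.
import math

def irradiance_w_per_m2(power_w: float, spot_diameter_m: float) -> float:
    """Power spread uniformly over a circular spot of the given diameter."""
    area = math.pi * (spot_diameter_m / 2) ** 2
    return power_w / area

power = 1e-3  # assume 1 mW of lidar light reaching the focal plane

# Hypothetical spot sizes: a larger sensor/lens spreading the beam over
# ~50 um vs. a small phone sensor concentrating it into ~10 um.
big = irradiance_w_per_m2(power, 50e-6)
small = irradiance_w_per_m2(power, 10e-6)

print(f"{small / big:.0f}x")  # 5x smaller spot -> 25x the irradiance
```

So even if the phone's optics collect no more total power, concentrating it into a smaller spot could plausibly push local intensity past a sensor's damage threshold.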


Is it possible that the iPhone filters are weaker due to FaceID requirements? I seem to recall that FaceID (and similar systems, like Windows Hello) depend on IR to get a more 3D map of the face, so it'd make sense that they want to be more sensitive in that range.

Laptops aren't generally being used in the same areas as cars though, so you wouldn't expect to see as many cases involving Windows Hello compatible laptops/cameras.


That wouldn't make sense on the back of the phone.


Possibly. Some iPhone models use LIDAR for AR tooling, such as the Measure app.


It’s very hard to capture everything from such an era. Maybe they made other choices that aligned with the fiction they were writing; it’s not a documentary, and TV shows can’t capture as much as books. The show successfully gives enough to people who haven’t lived through that era. It’s an amazing show.


I view any historically based show as an alternate history. Nothing good comes from expecting too much consistency with our reality.

After all, if we could rewind those years, all that chaos would have played out very differently. We canonize our own particular history too easily. Manifest destiny is not a real thing.


Exactly. Chernobyl is an amazing show too, even if Ulana Khomyuk is a composite character instead of a real historical person.


Actually, it’s easy to generate « fake discussions »: just throw text around and wait for the other side to do the same. Oh wait, LLMs are built around that premise. I don’t see the goal here, other than finding new outcomes to solve our problems, which humanity hasn’t found yet because we are polarized. Or maybe the machines will tend to agree, in which case it will be machines against humans, which is great for our unity and poor for our outcome. We’ve seen that scenario before.


It’s maybe an ethical and identity problem for most people. The idea that something not grounded in biology has somewhat the same « quality of intelligence » as us is disturbing. It raises so many uncomfortable questions: should we accept being dominated and governed by a higher intelligence? Should we keep it a « slave », or give it « deserved freedom »? Are those questions grounded in reality, or is intelligence simply decoupled from the realm of biology, so that we don’t have to consider them at all? Should only biological « beings » with emotions/qualia be considered relevant, with intelligence not mattering on its own but only when it embodies qualia? It’s very new and such a total shift in the paradigm of life that it’s hard to ask people to argue in good faith here.


But you don't and cannot know whether qualia exist in a system, so how can that ever be a criterion for any kind of qualification?


That’s the main problem, isn’t it? Because it does matter, and there are consequences: should you « unplug » an AI from the grid? Should we erase an AI's memories? We eat animals and forbid eating humans; why? Could we let AI « eat » some of us, like in The Matrix?

Should we consider it our equal, or superior to us? Should we hand it the reins of politics if it’s superior at decision making? Or maybe the premise is « given all the knowledge that exists, coupled with a good algorithm, you look/are/have intelligence »? In which case intelligence is worthless, in a way: it’s just a characteristic, not a quality. Which makes AIs fantastic tools and never our equals?


Because companies built their models from (or stole) other people’s work, and this has massive layoff consequences, the paradigm is shifting, layoffs are massive, and lawmakers are too slow. Shouldn’t we shift the whole capitalist paradigm and just ask the companies to give all their LLM work to the world for free as well? It’s just a circle: AI is built from human knowledge and should be given back to all people for free. No company should have all this power. If nobody learns how to code because all code is generated, what would stop the gatekeepers of AI from raising prices x1000 and locking everyone out of building anything at all, because it’s too expensive and too slow to do by hand? It should all be made freely accessible to all humans, so that humans are forever able to build things from it.


Does that also mean the US government has x1000000 more power than the one in 1950?


Speaking strictly from an energy standpoint (power grid, megatons of warheads, etc.), it's probably close to that number.


I agree. It’s much easier now to build low-impact tools for personal use. I’ve managed to produce tools I would never have had time to build, and I use them every day. But I will never sell them, because they’re tailored to my needs, and it makes no sense to open source anything nowadays. For work it’s different: product teams still need to decide what to build and what is helpful to the clients. Our bugs are not self-fixed by AI yet. I think Anthropic saying 100% of their code is AI generated is a marketing stunt; they have every reason to say that to sell their code-generating tool. It sends a strong signal to the industry that if they can do it, it could be easier for smaller companies. We’re not yet at the point where a client asks for a feature and it ships to prod two days later without human interaction.

