
You are mistaking consciousness for free will.


All of these terms are vague.

You are possibly mistaking free will for being an agent that has a (possibly deterministic) method of updating priors in response to new (unknown to the agent, and possibly deterministic) input.


I apologize for not being clearer. I find it very challenging to separate details about LLMs from "consciousness". My key point is that "human consciousness" is very different from ChatGPT. ChatGPT statistically processes content that was already created and merely "appears" to be conscious. Human beings have characteristics that ChatGPT does not share (sentience and context). We often mistake what is needed to generate content (human consciousness) for what is capable of processing that content in very interesting ways (ChatGPT).

I do not believe that I am confusing free will and consciousness. See my comment above. Determinism versus free will is independent of the knowledge available. Consider a paralyzed person incapable of any action. That person, if the senses are all working, still has awareness and context. A statistical engine only appears to. An LLM bases all its actions on a complex matrix of thresholds. It is surprising and amazing how well that works. Given a stimulus that takes advantage of minute differences in those thresholds, a wrong response will be returned. Humans are not fooled in this way; minute differences are typically missed or even skipped. Human beings can be fooled by optical illusions and by contradicting context (for example, a statement like pick the "red" circle written in green ink, where the person mistakenly picks the "green" circle). LLMs do not make these types of mistakes.


Sentience and knowledge are what I am talking about. Free will is what you do with that knowledge.



