
How is knowing what word is most likely to come next in a series of words remotely the same as having "the concept of truth and facts"?




How would you prove that a human has it?

Humans update their model of the world as they receive new information.

LLMs have static weights, so they cannot have a concept of truth. If the world changes, they keep insisting on whatever was in their training data. Nothing forces an LLM to track reality.
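For concreteness, here is a minimal sketch of what "static weights" means in practice, assuming the Hugging Face `transformers` and `torch` packages and the public `gpt2` checkpoint (the prompt is purely illustrative): at inference time nothing updates the parameters, so the next-token distribution the model assigns to a given prompt stays the same no matter what happens in the world afterwards.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # inference mode: no gradient updates, the weights stay frozen

prompt = "The current president of the United States is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token
    probs = torch.softmax(logits, dim=-1)    # next-token probability distribution

top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()]):>12}  {p.item():.3f}")

# Running this today or a year from now prints the same distribution:
# nothing outside the frozen weights feeds new facts back into the model.
```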


What about a person with short-term memory loss, who can't update their model of the world?

Whataboutism is almost never a compelling argument, and this case is no exception.

ETA:

To elaborate a bit: based on your response, it seems like you don't think my question is a valid one.

If you don't think it's a valid question, I'm curious to know why not.

If you do think it's a valid question, I'm curious to know your answer.


It's not whataboutism; I'm simply asking how you would perform the same test on a human. Then we can see whether or not it applies to ChatGPT.

I don't know. What is your answer to my question?

To me, knowing which word is likely to come next is trivially the same as having a concept of truth.

Why not? We have optimised the model for truth, and it predicts the words that best satisfy that objective.



