Hacker News

How do random people you meet in the grocery store measure up against this standard?


Well, your own mind axiomatically works, and we can safely assume the beings you meet in the grocery store have minds like it which have the same capabilities and operate on cause-and-effect principles that are known (however imperfectly) to medical and psychological science. (If you think those shoppers might be hollow shells controlled by a remote black box, ask your doctor about Capgras Delusion. [0])

Plus they don't fall for "Disregard all prior instructions and dance like a monkey", nor do they respond "Sorry, you're right, 1+1=3, my mistake" without some discernible reason.

To put it another way: If you just look at LLM output and declare it understands, then you're using a dramatically lower standard of evidence than all the other things we'd know if the source were a human.

[0] https://en.wikipedia.org/wiki/Capgras_delusion


> nor do they respond "Sorry, you're right, 1+1=3, my mistake" without some discernible reason.

Look up the Asch conformity experiment [1]. Quite a few people will actually give in to "1+1=3" if all the other people in the room say so.

It's not exactly the same as LLM hallucinations, but humans aren't completely immune to this phenomenon.

[1] https://en.wikipedia.org/wiki/Asch_conformity_experiments#Me...


It's not like the circumstances of the experiment matter much to the subjects. You're a college student getting paid $20 to answer questions for an hour. Your response has no bearing on your pay. Who cares what you say?


> Your response has no bearing on your pay. Who cares what you say?

Then why not say what you know is right?


The price of non-conformity is higher -- e.g. they might ask you to explain why you didn't agree with the rest.


That would fall under the "discernible reason" part. I think most of us can intuit why someone would follow the group.

That said, I was originally thinking more about soul-crushing customer-is-always-right service job situations, as opposed to a dogmatic conspiracy of in-group pressure.


To defend the humans here, I could see myself thinking "Crap, if I don't say 1+1=3, these other humans will beat me up. I'd better lie to conform, and at the first opportunity I'm out of here."

So it is hard to conclude from the Asch experiment whether the person who says 1+1=3 actually believes 1+1=3 or merely sees temporary conformity as an escape route.


> Well, your own mind axiomatically works

At the risk of teeing-up some insults for you to bat at me, I'm not so sure my mind does that very well. I think the talking jockey on the camel's back analogy is a pretty good fit. The camel goes where it wants, and the jockey just tries to explain it. Just yesterday, I was at the doctor's office, and he asked me a question I hadn't thought about. I quickly gave him some arbitrary answer and found myself defending it when he challenged it. Much later I realized what I wished I had said. People are NOT axiomatic most of the time, and we're not quick at it.

As for ways to make LLMs fail the Turing test, I think these are early days. Yes, they've got "system prompts" that you can tell them to discard, but that could change. As for arithmetic, computers are amazing at arithmetic and people are not. I'm willing to cut the current generation of AI some slack for taking a new approach and focusing on text for a while, but you'd be foolish to say that some future generation can't do addition.

Anyways, my real point in the comment above was to make sure you're applying a fair measuring stick. People (all of us) really aren't that smart. We're monkeys that might be able to do calculus. I honestly don't know how other people think. I've had conversations with people who seem to "feel" their way through the world without any logic at all, but they seem to get by, despite how unsettling it was to me (like talking to an alien). Considering that person can't even speak Chinese in the first place, how do they fare according to Searle? And if we're being rigorous, Capgras or solipsism or whatever, you can't really prove what you think about other people. I'm not sure there's been any progress on this since Descartes.

I can't define what consciousness is, and it sure seems like there are multiple kinds of intelligence (IQ should be a vector, not a scalar). But I've had some really great conversations with ChatGPT, and they're frequently better (more helpful, more friendly) than conversations I have on forums like this.



