
I like the fact that ChatGPT sometimes gives wrong answers. So do humans. Makes it human-like.


Now, if you could explain to the AI why it is wrong, and it could learn from that, it would be wild and even more human-like!


I was quite impressed by its capability to correct itself.

My test went like this:

Q: Do not use the imperial system

Q: [some question which involves distances]

A: blah blah 1500 kilometers, which is 1000 miles blah blah

Q: I told you not to use the imperial system

It apologized and repeated its previous answer, this time correctly omitting the miles.

If you asked me to write a program that does that (without using ML), I'd have no idea where to start.
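For what it's worth, the "correction" doesn't require the model to learn anything: chat models are stateless, and the whole transcript, including your complaint, is resent on every turn, so the model can condition on it. A minimal sketch using the OpenAI Python SDK (the model name and message contents here are illustrative, not what ChatGPT does internally):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # The full conversation so far is passed on every request. The model
    # "remembers" the instruction only because it is in the context window.
    messages = [
        {"role": "user", "content": "Do not use the imperial system."},
        {"role": "user", "content": "How far apart are the two cities?"},
        {"role": "assistant",
         "content": "They are about 1500 kilometers, which is 1000 miles, apart."},
        {"role": "user", "content": "I told you not to use the imperial system."},
    ]

    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=messages,
    )
    # Typically apologizes and restates the distance in kilometers only.
    print(reply.choices[0].message.content)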


The US uses miles; they're not just imperial. Maybe you needed to tell it not to use imperial units or US units.



