
> You can just as well think of everything a model outputs as a hallucination

Exactly. Don't forget that an important factor in the success of GPT-3's successors (InstructGPT and ChatGPT) was RLHF, which essentially trains the model to produce "hallucinations" that human raters find more acceptable on average.
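
For concreteness, here's a minimal PyTorch sketch of the reward-modeling step at the core of RLHF, using a toy RewardModel over fake embeddings (my own illustrative names, not OpenAI's actual pipeline). Human trainers rank pairs of model outputs, and a Bradley-Terry pairwise loss teaches the reward model to score the preferred output higher; that learned reward then steers the policy's "hallucinations" toward ones people like:

    # Toy sketch of RLHF's reward-modeling step; names are hypothetical.
    import torch
    import torch.nn as nn

    class RewardModel(nn.Module):
        """Stand-in: maps a fixed-size text embedding to a scalar reward."""
        def __init__(self, dim: int = 16):
            super().__init__()
            self.score = nn.Linear(dim, 1)

        def forward(self, emb: torch.Tensor) -> torch.Tensor:
            return self.score(emb).squeeze(-1)

    def preference_loss(rm, chosen_emb, rejected_emb):
        # Bradley-Terry pairwise loss: push the reward of the output the
        # human trainer preferred above the reward of the rejected one.
        return -nn.functional.logsigmoid(rm(chosen_emb) - rm(rejected_emb)).mean()

    # Random "embeddings" stand in for two ranked model outputs; one
    # gradient step on this loss is what teaches the reward model which
    # outputs people find acceptable.
    rm = RewardModel()
    opt = torch.optim.Adam(rm.parameters(), lr=1e-3)
    chosen, rejected = torch.randn(8, 16), torch.randn(8, 16)
    loss = preference_loss(rm, chosen, rejected)
    loss.backward()
    opt.step()

The trained reward model is then used as the objective for a second, policy-gradient phase (PPO in the InstructGPT paper), which is what actually shifts what the language model generates.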



