
The issue isn't that the model isn't truthful, it's that it is effective at writing language that appears factual and looks truthful to the untrained eye. Sure, it is going to give you what you're asking for, but the issue comes when you take that output and pass it along, without any warning about its origins, to people who can't be expected to fact-check a scientific article.


You don't need AI to fool people who can't understand a scientific article yet will trust its conclusions.


True, but AI significantly increases the speed and ease with which it can be done.



