
This would work on people too; you can see fake info/text/videos daily, and many people believe them.

LLMs do not think. Why is this still hard to understand? They just spit out whatever data they analysed and were trained on.

I feel this kind of article is aimed at people who hate AI and just want to be comfortable within their own bias.



The journals the scientist submitted to had a fake university, explicitly fake people, references to The Simpsons and Star Trek, etc.

Most doctors would not believe that, and would also treat any new eye disease they'd never seen in real life with scepticism.


LLMs will need to develop a notion of trustworthiness. It's interesting that learning isn't just absorbing information, but also learning what to learn and how much weight to give data that crosses your path.


To me, the problem is the blast radius.

All of us are slightly wrong about things, but not all of us are treated as oracles of correct information the way Opus, ChatGPT, etc. are.


you're confusing LLMs with humans


Not massively sure I am


Journals? The article says the paper was uploaded to two preprint servers.


Sorry, even worse then

I got confused because a journal referenced them:

> The experiment's reach has now spread into the published medical literature. The bixonimania research has been cited by a handful of researchers, including a study that appeared in Cureus, a journal published by Springer Nature, the publisher of Nature, by researchers at the Maharishi Markandeshwar Institute of Medical Sciences and Research in Mullana, India (S. Banchhor et al. Cureus 16, e74625 (2024); retraction 18, r223 (2026)). (Nature's news team is editorially independent of its publisher.)





