
Or use it for fact discovery if you're not well-versed in a field. But always check whether it hallucinated reasonable-sounding gibberish.


The thing is that if you are not an expert in the field you cannot tell gibberish from legit facts. Especially if the writing style and grammar are top-notch.

I know it is a bad bias, but we typically associate good, clear writing with legitimacy. Here we have ChatGPT, which can do exactly that, yet spit out complete BS.


One thing I like about phind.com is that it ties specific assertions to the specific web pages they came from. That lets me check the sources.

However, like all generative AI, it’s good at forming narratives, and not many people are aware how powerfully influential narrative frames are because people rarely step back to examine the frame itself.


Yep. People rarely step back to examine how a narrative was framed. It is a cognitive bias [1] built into our brains.

It takes a lot of mental effort to spot, which is why we don’t do it often.

1: https://en.wikipedia.org/wiki/Framing_effect_(psychology)


If you pay for ChatGPT Pro you can ask it to link to real (not hallucinated) web pages from Bing's index.

I'm not promoting it. Just in case you're not aware of this feature.


Enjoy it while it lasts - soon Bing's index will be full of hallucinated GPT content itself.


> The thing is that if you are not an expert in the field you cannot tell gibberish from legit facts.

Some answers are more easily verifiable than others.

If I ask about an explanation of quantum mechanics, I might check if the names of particles and equations are correct (which they probably will be), but verifying the overall reasoning is basically the same as knowing the subject in the first place.

But if I ask 'tell me the 5 most important neutrino experiments performed at the LHC and their outcomes', I can relatively easily check the results by finding the papers, their citation numbers, and reading the abstracts - I don't need to understand every detail. Maybe it will have missed one that should have been on the list, much like a manual search could have, but I won't fall for an outright hallucination.

And if I ask about, oh I don't know, let's say finding some precedents for a case I'm working on, it's straightforward to take the case names the LLM spat out, look them up in a legal database, and see whether the judge actually ruled as the LLM claimed.


Or use it to extract the alleged facts from its own output, pass them to another tool (like a legal search engine) for verification, and then have GPT edit its output accordingly…
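The verification loop described above can be sketched roughly. Everything here is hypothetical: the case names, the regex, and the `KNOWN_CASES` dict (a toy stand-in for a real legal database query) are all made up for illustration.

```python
import re

# Toy stand-in for a real legal database; in practice this would be a
# query against a search engine or API. All entries are hypothetical.
KNOWN_CASES = {
    "Smith v. Jones": "Ruled for the plaintiff.",
    "Doe v. Acme Corp.": "Dismissed on procedural grounds.",
}

# Naive pattern for 'X v. Y' style citations; real citation formats
# are messier than this.
CASE_PATTERN = re.compile(
    r"[A-Z][\w.]*(?: [A-Z][\w.]*)* v\. [A-Z][\w.]*(?: [A-Z][\w.]*)*"
)

def extract_case_names(llm_output):
    """Pull 'X v. Y' style citations out of the model's text."""
    return CASE_PATTERN.findall(llm_output)

def verify_cases(llm_output):
    """Split cited cases into verified and (likely) hallucinated."""
    verified, hallucinated = [], []
    for name in extract_case_names(llm_output):
        (verified if name in KNOWN_CASES else hallucinated).append(name)
    return verified, hallucinated

llm_text = "Smith v. Jones and Roe v. Nowhere both apply here."
ok, suspect = verify_cases(llm_text)
# 'ok' holds citations found in the database; 'suspect' holds the rest,
# which you would then feed back to the model for correction.
```

The flagged names in `suspect` would go back to the model in a follow-up prompt asking it to remove or replace the unverifiable citations.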



