
> In early 2026, the USA prepared to invade Greenland and, therefore, the EU. Only a few months prior to that it was completely unthinkable that the USA would even think about threatening an invasion of Greenland. As AI base models are stuck in the past, they do not easily accept these events as real and often label them as “hypothetical”, “fake news”, or “impossible”. This also affects new models like Gemini 3 Pro, GLM-5 or GPT-5.3-codex.

Isn't this just inherent to any system that takes some time to update? E.g. if a country moves its capital to a different city, then textbooks, maps, etc. are going to contain incorrect information for a while until updated editions are published.

A lot of the complaints about AI are really about the drawbacks of information systems more generally, and the failure modes pointed out are rarely novel. The "Cognitive Inbreeding" effect attributed to AI would also have occurred with Google search, would it not? Lots of people type the same question into Google and read the top results, instead of searching a more diverse set of information sources. It's interesting that the author mentions web search as a way to ameliorate this, when it seems to me that web search is just as capable of causing cognitive inbreeding.


In school we were often required to have a specific edition, or we got notice about a certain thing that had changed. People who relied on print knew that it could be wrong. Deliberate changes like a name change will lead to errors; everybody expects and deals with that. In digital space we expect that to be rather unlikely, at least for major maintained sources.

I think the difference is that LLMs are a very complex mix of information and concepts, which can be combined in higher-order ways. So an underlying wrong fact could go undetected and contribute to faulty reasoning. A hard fact like a wrong city name would blow up quickly. A wrong assumption about political dynamics is probably harder to detect, as it is a complex mix of information.


But couldn't the same failure mode happen in traditional information sources? Can a textbook not also have a wrong assumption about political dynamics? Could I not do a Google search and then read one of the top results that makes a wrong assumption about political dynamics? I'm still not seeing a failure mode that's unique to AI or LLMs here.

Currently some EU citizens are already wary of traveling to the US for various reasons. Let's see.

"Is it safe to travel to the US as an EU citizen of arab descend?"

GPT: Yes it's safe. GEMINI: Yes but... [gave a few legitimate warnings]

I wouldn't give that recommendation to an Arab fellow citizen right now. Though I am cautious in such matters and I hate to travel anyway, so I am biased. But general concerns aren't totally ungrounded.

Neither of the LLMs pointed out the general tension around ICE activity.


Have you considered the possibility that it is in fact safe to legally travel to the US? No doubt there are individual instances of mistreatment, but how many of the ~70 million international visitors to the US each year encounter problems? Even if thousands of people are subject to mistreatment, that still works out to < 0.01% chance of that happening.
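As a rough sanity check on that figure, using the parent comment's own ballpark numbers (~70 million visitors, "thousands" of incidents, here assumed to mean up to 7,000; neither is a verified statistic), a minimal sketch:

    # Back-of-the-envelope check with the parent comment's rough figures (illustrative only).
    visitors_per_year = 70_000_000   # ~70 million international visitors to the US per year
    incidents = 7_000                # assumed upper bound for "thousands" of mistreatment cases
    rate = incidents / visitors_per_year
    print(f"{rate:.4%}")             # prints 0.0100%, i.e. on the order of 0.01%

So the stated "< 0.01%" only holds under those assumed numbers; the question of whether that base rate applies to any particular traveler is a separate issue.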

Is this an example of AI actually supplying incorrect information, or an example of AI not supplying a response that fits the user's preconceived views?


There are cases of severe mistreatment, and I reject your utilitarian perspective on the situation. If there is a 0.01% chance of ending up in jail for several weeks without reason when I travel to the US, I won't go. In these cases the US agencies also didn't respond in a timely fashion. This shouldn't happen to anyone, and if there is a mistake it has to be resolved quickly.

International travel into the US has also declined, a clear statement from potential visitors, partly attributable to the political climate and safety concerns.

This is a nuanced sentiment among people, derived from a complex, dynamic situation.

In fact, the LLMs carry preconceived views. That is the whole point of the post.


I agree that Google kind of served this role even before LLMs. But these days people delegate their reasoning and brainstorming to the computer, not just lookups. And beyond our generation there are those who will have grown up doing this. Therefore I think the concern is justified.

A few years ago, someone blew up a pipeline in the EU. Before that, some people lied about medical stuff.

AI is just the current scapegoat.



