Yes, it's difficult to detect whether something is enemy propaganda if you only look at the content. During WWII, propagandists would sometimes take an official statement (e.g. the government claiming that food production was sufficient and there were no shortages) and redirect it unchanged to a different audience (e.g. soldiers on a part of the front with strained logistics). Then the official statement and the enemy propaganda would be exactly the same! The propaganda effect came from the selection of content, not its truth or falsity.
But it's very easy to detect whether something is enemy propaganda without looking at the content: if it comes from an enemy source, it's enemy propaganda. If the same content also comes from a friendly source, then at least the enemy isn't lying.
A company that doesn't wish to pick a side can still sidestep the issue of one source publishing a completely made-up story by filtering for information covered by a wide spectrum of sources, at least one of which most of their users trust. That wouldn't completely eliminate falsehoods, but it would make deliberate manipulation more difficult. It might be playing the game, but that's better than letting the game play you.
Of course such a process would in practice be a bit more involved to implement than just feeding the top search results into an LLM and having it generate a summary.
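To make the idea concrete, here's a much simplified sketch of such a corroboration filter. All names and the data model (stories as topic/source pairs) are made up for illustration; a real system would need topic clustering, source independence checks, and so on.

```python
# Hypothetical corroboration filter: keep a topic only if it is
# reported by several distinct sources, at least one of which the
# user already trusts. Data model is illustrative only.

def corroborated(stories, trusted, min_sources=3):
    """stories: iterable of (topic, source) pairs.
    Returns the set of topics covered by at least min_sources
    distinct sources, at least one of them in the trusted set."""
    by_topic = {}
    for topic, source in stories:
        by_topic.setdefault(topic, set()).add(source)
    return {
        topic
        for topic, sources in by_topic.items()
        if len(sources) >= min_sources and sources & trusted
    }

stories = [
    ("shortage", "agency_a"),
    ("shortage", "agency_b"),
    ("shortage", "agency_c"),
    ("fabricated", "lone_outlet"),  # single-source story gets dropped
]
print(corroborated(stories, trusted={"agency_b"}))  # {'shortage'}
```

Note that a widely corroborated story can still be propaganda by selection, as in the WWII example above; the filter only raises the cost of pure fabrication.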
> Then the official statement and the enemy propaganda would be exactly the same! The propaganda effect came from the selection of content, not its truth or falsity.
Exactly. Redistributing information out of context is such a basic technique that children routinely reinvent it when they play one parent off of the other to get what they want.