The uncomfortable truth behind most "the algorithm is biased" takes is that we humans are far more politically biased than the algorithms, and we're probably 90% to blame.
I'm not saying there is no algorithmic bias, and I tend to agree the X algorithm has a slight conservative bias, but for the most part the owners of these sites care more about keeping your attention than about getting you to vote a certain way. Therefore, if you're naturally susceptible to culture-war stuff, and that's what grabs your attention, the algorithm will likely feed it to you.
But this is a far broader problem. These are the same types of people who might have watched politically biased cable news in the past, or read politically biased newspapers before that.
The issue brought up in the article isn't that "the algorithm is biased" but that "the algorithm causes bias". A feed could alternate perfectly between position A and position B, showing no bias at all, yet still select more incendiary content on topic A and drive bias towards or away from it.
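A toy sketch of that distinction (all names, scores, and headlines below are made up for illustration, not anyone's actual ranking code):

```python
import random

# Hypothetical corpus: (topic, incendiary_score, headline).
posts = [
    ("A", 0.9, "outrage take on A"),
    ("A", 0.2, "measured take on A"),
    ("B", 0.5, "spicy take on B"),
    ("B", 0.3, "measured take on B"),
]

def build_feed(posts, length=10):
    """Alternate topics 50/50 (zero topic-level bias), but within each
    topic sample proportionally to predicted engagement, proxied here
    by incendiary_score."""
    feed = []
    for i in range(length):
        topic = "A" if i % 2 == 0 else "B"   # perfectly balanced exposure
        pool = [p for p in posts if p[0] == topic]
        weights = [p[1] for p in pool]       # engagement-maximizing objective
        feed.append(random.choices(pool, weights=weights)[0])
    return feed

for topic, score, headline in build_feed(posts):
    print(topic, headline)
```

Topic exposure comes out perfectly balanced, yet what a reader sees about each topic is still skewed toward whatever the engagement objective rewards.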
I have the same thought; my X algo has become less political than HackerNews. I suppose it depends on how you use it, but my feed is entirely technical blogs, memes, and city planning/construction content.
I've been pretty consistent about telling Bluesky I want to see less of anything political and also disciplined about not following anybody who talks about Trump or gender or how anybody else is causing their problems. I see very little trash.
Love the title. Yeah, agents need to experiment in the real world to build knowledge beyond what humans have acquired. That will slow the bastards down.
Maybe they are just calling the jobs by different names? It seems like names of roles are constantly shifting. "Data scientist" is a term that is going out of fashion.
I love the post but disagree with the first example. "I asked ChatGPT and this is what it said: <...>". That seems totally fine to me. The sender put work into the prompt, and the recipient is free to read the AI output if they choose.
I think in any real conversation, that treats the AI as an authority figure brought in to end the discussion, despite the fact that it could easily be wrong. It would be less rude to extract the logic and defend it on your own.
And what if you let a human expert fact-check the output of an LLM, provided you're transparent about the output (and the prompt(s) that preceded it)?
Because I'd much rather ask an LLM about a topic I don't know much about and have a human expert verify the answer than waste that expert's time explaining the concept to me.
Once it's verified, I add it to my own documentation library so that I can refer to it later on.
Oh, I'm usually trying to gather information in conversations with peers, so for me it's more like, "I don't know, but this is what the LLM says."
But yeah, to a boss or something, that would be rude. They hired you to answer a question.
This is exactly how I feel about both advertising and unnecessary notifications. "The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus."