jmugan's comments | Hacker News

Oddly enough, X is the only platform I've been able to teach not to show me culture war stuff, from either side. It just shows me AI in the "For You" feed.

The uncomfortable truth about most "the algorithm is biased" takes is that we humans are far more politically biased than the algorithms, and we're probably 90% to blame.

I'm not saying there is no algorithmic bias, and I tend to agree the X algorithm has a slight conservative bias, but for the most part the owners of these sites care more about keeping your attention than about getting you to vote a certain way. So if you're naturally susceptible to culture war stuff, and that's what grabs your attention, the algorithm will likely feed it to you.

But this is a far broader problem. These are the types of people who might have watched politically biased cable news in the past, or read politically biased newspapers before that.


The issue brought up in the article isn't that "the algorithm is biased" but that "the algorithm causes bias." A feed could perfectly alternate between position A and position B, showing no topic bias at all, yet still select more incendiary content on topic A and thereby drive attitudes toward or away from it.
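That distinction can be made concrete with a toy simulation. This is purely illustrative (not how any real feed works): posts get a hypothetical "incendiary" score that drives engagement, topic A's posts are assumed to skew higher, and the feed alternates topics perfectly while always picking the most engaging unseen post within each topic.

```python
import random

random.seed(0)

def make_pool(n=100):
    """Generate candidate posts as (topic, incendiary_score) pairs.
    Hypothetical assumption: topic-A posts skew more incendiary."""
    pool = []
    for _ in range(n):
        topic = random.choice("AB")
        score = random.gauss(0.7 if topic == "A" else 0.3, 0.1)
        pool.append((topic, score))
    return pool

def build_feed(pool, length=10):
    """Alternate topics perfectly (no topic bias), but within each
    topic pick the highest-engagement unseen post."""
    feed = []
    for i in range(length):
        topic = "AB"[i % 2]
        candidates = [p for p in pool if p[0] == topic and p not in feed]
        feed.append(max(candidates, key=lambda p: p[1]))
    return feed

feed = build_feed(make_pool())
counts = {t: sum(1 for p in feed if p[0] == t) for t in "AB"}
avg = {t: sum(s for tp, s in feed if tp == t) / counts[t] for t in "AB"}
print(counts)  # topic counts are perfectly balanced...
print(avg)     # ...but topic A's shown posts are far more incendiary
```

The feed is "unbiased" by any per-topic count metric, yet the tone of what you see about each topic differs sharply, which is the effect the article describes.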

I have the same thought: my X algorithm has become less political than Hacker News. I suppose it depends on how you use it, but my feed is entirely technical blogs, memes, and city planning/construction content.

I've been pretty consistent about telling Bluesky I want to see less of anything political and also disciplined about not following anybody who talks about Trump or gender or how anybody else is causing their problems. I see very little trash.

Maybe it has gotten better recently. I tried and tried with Bluesky, but it would not abide.

It was bad the week Trump got elected, it’s gotten better since then.

Love the title. Yeah, agents need to experiment in the real world to build knowledge beyond what humans have acquired. That will slow the bastards down.


Perhaps they will revel in the friends they made along the way.


If only we had a self-learning system that was battle-tested against reality.


I agree but I fear it won't stay that way. They boil us frogs slowly.


I just finished reading House of Leaves by Mark Z. Danielewski, and this could have been one of the chapters.


Ha, funnily enough, I just bought it after hearing about it in that thread about the 90s "strange website". Haven't cracked it open yet; it's quite a tome.


Don't read it unless you're in for psychological trauma. That book is messed up.


He has a new one out too. Also, he's the brother of the singer Poe.


Skip the Johnny Truant bits and it's half the length, with hardly anything of value lost.


Maybe they are just calling the jobs by different names? It seems like names of roles are constantly shifting. "Data scientist" is a term that is going out of fashion.


Pretty large claim to insinuate that Indeed can't even tell when its own users are simply shifting terms around...

This is a company so large that its jobs data was used in lieu of the Fed's jobs data when the government was shut down.


Which side are you talking about?


You don't say?


It's also a lot more fun to lose weight than to maintain weight.


I love the post but disagree with the first example. "I asked ChatGPT and this is what it said: <...>". That seems totally fine to me. The sender put work into the prompt and the user is free to read the AI output if they choose.


I think in any real conversation, that's treating the AI as an authority figure to end the discussion, despite the fact that it could easily be wrong. It would be less rude to extract the logic and defend it on your own feet.


And what if you let a human expert fact-check the output of an LLM, provided you're transparent about the output (and its preceding prompt(s))?

Because I'd much rather ask an LLM about a topic I don't know much about and let a human expert verify its contents than waste the time of a human expert in explaining the concept to me.

Once it's verified, I add it to my own documentation library so that I can refer to it later on.


Oh, I'm usually trying to gather information in conversations with peers, so for me, it's usually more like, "I don't know, but this is what the LLM says."

But yeah, to a boss or something, that would be rude. They hired you to answer a question.


This is exactly how I feel about both advertising and unnecessary notifications. "The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus."

