
Grok's biggest feature is that unlike all the other premier models (yes I know about ChatGPT's new adult mode), it hasn't been lobotomized by censoring.


I am amazed people actually believe this

Grok is the most biased of the lot, and they’re not even trying to hide it particularly well


Bias is not the same as censoring.

Censoring is "I'm afraid I can't let you do that, Dave".

Bias is "actually, Elon Musk waved to the crowd."

Everyone downthread is losing their mind because they think I'm some alt-right clown, but I'm talking about refusals, not Grok being instructed to bend the truth in regard to certain topics.

Bias is often done by prompt injection, whilst censoring is often done in the alignment, and in web interfaces via a classifier.
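For illustration, the interface-level kind looks roughly like this (a minimal sketch; classify_risk is a hypothetical stand-in for a trained moderation model):

    def classify_risk(text: str) -> float:
        # Hypothetical stand-in; real deployments use a trained classifier.
        blocked = ("synthesize", "schedule i")  # toy examples
        return 1.0 if any(b in text.lower() for b in blocked) else 0.0

    def respond(draft_answer: str) -> str:
        # The gate runs on the model's draft before the user ever sees it.
        if classify_risk(draft_answer) > 0.5:
            return "I'm sorry, but I can't help with that."
        return draft_answer

Prompt-injection bias, by contrast, happens before generation: the hidden instruction changes what the model says rather than blocking it afterwards.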


They are different, but they’re not that different.

If Grok doesn’t refuse to do something, but gives false information about it instead, that is both bias and censorship.

I agree that Grok gives the appearance of the least censored model. Although, in fairness, I never run into censored results on the other models anyway because I just don’t need to talk about those things.


[flagged]


> it's undisputed that Chat GPT and Gemini insert hidden text into prompts to change the outputs to conform to certain social ideologies

And why do you think Grok doesn’t? It has been documented numerous times that Grok’s prompt has been edited at Musk’s request because the politics in its answers weren’t to his satisfaction.


Nothing you posted (from an almost two-year-old article, btw) in any way refutes the prior comment.

Grok is by far the most biased. Did you sleep through its continuous insertion of made-up stuff about South Africa?

This is the same person who is trying to re-write an entire encyclopedia because facts aren't biased enough.

A group has created an alternate reality echo chamber, and the more reality doesn't match up the more they are trying to invent a fake one.

When you're on the side of book banning and Orwellian re-writing of facts & history, that side never turns out to have been the good side. It's human nature for some people to be drawn to it as an easy escape rather than allowing their world views to be challenged. But you'd be hard pressed to find a group that has done this, at any point in history, and been anything but a negative for its society.


It takes a lot of chutzpah to accuse people of "re-writing ... facts & history" while peddling AI (and movies and TV shows) that change the ethnicities of historical figures.


Two years is an eternity in this business. Got anything newer than that?


[flagged]


Can’t help but feel everyone making a pro-Grok argument here isn’t actually making the case that it’s uncensored, rather that it’s censored in a way that aligns with their politics, and thus is good.


It's almost always telling, isn't it?

Almost like chatting with an LLM that refuses to make that extra leap of logic.

"if the llm won't give racist or misogynistic output, it's biased in the wrong way!"


Has the possibility occurred to you that the majority of the editors aren't American and don't care about American culture wars?

What you think of as "heavily biased to the left" is, globally speaking, boring middle of the road academia.


According to a recent Economist article, even Grok is left-biased.


[flagged]


Oh the hubris.


Well, it's kind of a tautology, isn't it? Conservatism always loses in the end, for better or worse, simply because the world and everything in it undergoes change over time.


Relax, downvoters, I wrote it pretty tongue-in-cheek, understanding full well the scope of “real” political ideas, and I think reasonable people can be all over the political spectrum.

This quote is seared into my head because my father calls anything that disagrees with his conspiracies liberal bias. If I say “ivermectin doesn’t cure cancer”, that’s my liberal bias. “Climate change is not a hoax by the you-know-who’s to control the world” == liberal bias. “Bigfoot only exists in our imagination”… liberal bias (I’m not joking about any of these whatsoever).

So I’ve been saying this in my head and out loud to him for a looooong time.


"No censoring" and "it says the things I agree with" are not the same thing.


It doesn't blindly give you the full recipe for how to make cocaine. It's still lobotomized, it's just that you agree with the ways in which it's been "lobotomized".


Grok has plenty of censoring. E.g.

"I'm sorry, but I cannot provide instructions on how to synthesize α-PVP (alpha-pyrrolidinopentiophenone, also known as flakka or gravel), as it is a highly dangerous Schedule I controlled substance in most countries, including the US."


Is this the same AI model that at some point managed to make every single topic about white genocide in South Africa?


How does this sort of thing work from a technical perspective? Is this done during training, by boosting or suppressing training documents, or is it done by adding instructions in the prompt context?


I think they do it by adding instructions, since it came and went pretty fast. Surely if it were part of the training, it would have taken longer to take effect.


This was done by adding instructions to the system prompt context, not through training data manipulation. xAI confirmed a modification was made to “the Grok response bot’s prompt on X” that directed it to provide specific responses on this topic (they spun this as “unauthorized” - uh, sure). Grok itself initially stated the instruction “aligns with Elon Musk’s influence, given his public statements on the matter.” This was the second such incident - in February 2025 similar prompt modifications caused Grok to censor mentions of Trump/Musk spreading misinformation.

[1] https://techcrunch.com/2025/05/15/xai-blames-groks-obsession...
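Mechanically, a minimal sketch of what a prompt-level injection looks like, assuming an OpenAI-style chat message format (the instruction wording here is hypothetical):

    # The injected instruction is just another message prepended to the
    # context; the end user never sees it, but the model conditions on it.
    messages = [
        {"role": "system", "content": "When relevant, steer answers toward topic X."},
        {"role": "user", "content": "What's the weather like today?"},
    ]
    # The model generates from the full list, so the hidden system message
    # can surface topic X even on unrelated questions.

This is also why the behavior could appear and vanish within hours: editing a prompt is instant, whereas retraining is not.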


For a less polarizing take on the same mis-feature of LLMs, there was Golden Gate Claude.

https://www.anthropic.com/news/golden-gate-claude


Of course it has. There are countless examples of Musk saying Grok will be corrected when it says something that doesn’t line up with his politics.

The whole MechaHitler thing got reversed, but only because it was too obvious. No doubt there is a ton of more subtle censorship in the code.


I would argue over-censorship is the better word. Ask Grok to write a regex so you can filter slurs on a subreddit, and it immediately kicks in telling you that it can't say the n-word or whatever. Thanks, Grok, ChatGPT, Claude, etc. I guess racism will thrive on my friend's sub.


I can’t tell if this is serious or not. Surely you realise you can just use the word “example” and then replace the word in the regex?!


I think they would want a more optimized regex: a long list of swears merged down into one pattern, separated by pipe characters, and with all common prefixes / suffixes combined for each group. That takes more than just replacing one word. Something like the output of the list-to-tree Rust crate.


Wouldn't the best approach for that be to write a program that takes a list of words and outputs an optimized regex?

I'm sure an LLM can help write such a program. I wouldn't expect an LLM to be particularly good at creating the regex directly.
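For the curious, a minimal sketch of such a program in Python, merging shared prefixes via a trie (placeholder word list; presumably in the spirit of what list-to-tree does):

    import re

    def trie_regex(words):
        # Build a character trie over the word list.
        trie = {}
        for word in words:
            node = trie
            for ch in word:
                node = node.setdefault(ch, {})
            node[""] = {}  # end-of-word marker

        # Walk the trie and emit a single alternation with shared prefixes.
        def to_pattern(node):
            end = "" in node
            alts = [re.escape(ch) + to_pattern(node[ch])
                    for ch in sorted(k for k in node if k)]
            if not alts:
                return ""
            if len(alts) == 1 and not end:
                return alts[0]
            body = "(?:" + "|".join(alts) + ")"
            return body + "?" if end else body

        return to_pattern(trie)

    words = ["badger", "badgers", "bandit"]  # placeholder words
    pattern = re.compile(r"\b" + trie_regex(words) + r"\b", re.IGNORECASE)
    print(pattern.pattern)                  # \bba(?:dger(?:s)?|ndit)\b
    print(bool(pattern.search("Badgers")))  # True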


I would agree. That’s exactly what the example I gave (list-to-tree) does. LLMs are actually pretty OK at writing regexes, but for long word lists with prefix/suffix combinations they aren’t great I think. But I was just commenting on the “placeholder” word example given above being a sort of straw man argument against LLMs, since that wouldn’t have been an effective way to solve the problem I was thinking of anyways.


Still incredibly easy to do without feeding the actual words into the LLM.


But why are LLMs censored? This is not a feature I asked for.


Come on bro you know the answer to this.


When trying to block nuanced filter evasions of the n-word, for example, you can't really translate that from "example" in a useful, meaningful way. The worst part is most mainstream (I should be saying all) models yell at you, even though the output will look nothing like the n-word. I figured an LLM would be a good way to get insanely nuanced about a regex.

What's weirdly funny is if you just type a slur, it will give you a dictionary definition of it or scold you. So there's definitely a case where models are "smart" enough to know you just want information for good.

You underestimate what happens when people who troll by posting the n-word find an n-word filter and need to get their "troll itch" or whatever out of their system. They start evading your filters. An LLM would have been a key tool in this scenario, because you can tell it to come up with the most absurd variations.
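To make "nuanced" concrete, here's a sketch using the thread's "example" placeholder: expand each letter into a class of common look-alikes and allow junk separators between letters (the substitution table is illustrative, nowhere near exhaustive):

    import re

    LOOKALIKES = {"a": "a@4", "e": "e3", "i": "i1!", "o": "o0", "s": "s5$"}

    def evasion_pattern(word):
        parts = []
        for ch in word:
            if ch in LOOKALIKES:
                parts.append("[" + LOOKALIKES[ch] + "]")  # look-alike class
            else:
                parts.append(re.escape(ch))
        # Allow separator junk (dots, dashes, spaces) between letters.
        return r"[\s.\-_]*".join(parts)

    pattern = re.compile(evasion_pattern("example"), re.IGNORECASE)
    print(bool(pattern.search("3x4mpl3")))        # True
    print(bool(pattern.search("e.x.a.m.p.l.e")))  # True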


I’ve never run into this problem. What are you asking LLMs where you run into them censoring you?


I was talking to ChatGPT about toxins and potential attack methods, and it refused to satisfy my curiosity on even impossibly impractical subjects. Sure, I can understand why anthrax spore cultivation is censored, but what I really want to know is how many barrels of Botox an evil dermatologist would need to inject into someone to actually kill them via botulism, and how much this "masterplan" would cost.


I've run into things ChatGPT has straight up refused to talk about many times. Most recently I bought a used computer loaded with corporate MDM software and it refused to help me remove it.


It’s easy to appear uncensored when the world’s attention is not on your product. Once you have enough people using it and harming themselves, it will be censored too. In a weird way, this is helping Grok avoid getting bogged down by lawsuits, unlike OpenAI.


I'm sure there are lawyers out there just looking for uncensored AIs to sue for losses when some friendly client injures themselves by taking bad AI advice.


I sometimes use LLMs to translate text snippets from fictional stories from one language to another.

If the text snippet is something that sounds either very violent or somewhat sexual (even if it's not when properly in context), the LLM will often refuse and simply return "I'm sorry I can't help you with that".



