
Not sure about this project in particular, but many more-popular projects (curl comes to mind) have adopted similar policies, not out of spite but because they'd otherwise be submerged in slop.

Sure, a smart guy with a tool can do so much more, but an idiot with a tool can ruin it for everyone.



Isn't it then more reasonable to have a policy that "people who submit low-quality PRs will be banned"? Target the actual problem rather than an unreliable proxy for the problem.

LLM-generated code can be high quality just as human-generated code can be low quality.

Also, having a "no recourse" policy is a bit hostile to your community. There will no doubt be people who get flagged as using LLMs when they didn't, and denying them even a chance to defend themselves is harsh.


Banning LLMs can result in shorter arguments. "Low quality" is overly subjective and will probably take a lot of time to argue about. And then there's the possible outrage if the dispute gets taken to social media.


> Banning LLMs can result in shorter arguments

Can it really? "You submitted LLM-generated contributions" is also highly subjective. Arguably more so, since you can't ever really be sure whether something is AI-generated, while with quality issues there are concrete things you can point to (e.g. the code simply doesn't work, doesn't meet the contributor guidelines, uses obvious anti-patterns, etc.).


If you rtfa, you will find it's actually the other way around. The linked PR from the AI has "concrete things you can point to" like "the code simply doesn't work".


Here's a post from Daniel Stenberg (curl maintainer) announcing that he just landed 22 LLM-generated commits.

https://mastodon.social/@bagder/115241241075258997

So obviously curl doesn't have a blanket ban.



