
chezmoi is a great tool, and I admire this project for taking a strong stand. However, I can't help but feel that policies like this are essentially unenforceable as stated: there's no way to prove an LLM wasn't used to generate code. In many cases it may be obvious, but not in all.


Some people post vulnerability disclosures or pull requests that are obviously fake and generated by an LLM. One example: https://hackerone.com/reports/2298307

These people are acting in bad faith and basically just wasting the project's time and resources. I think banning them is entirely legitimate and useful. It doesn't matter whether you manage to catch exactly 100% of such cases.


I’m aware of the context, but as someone who frequently uses LLMs productively, I find bans on all usage to be misguided. If in practice the rule is “if we can tell it’s AI-generated, we’ll ban you,” then why not just say that?

Moreover, in the case of high-quality contributions made with the assistance of LLMs, I’d rather know which model was used and what the prompt was.

Nonetheless I still understand and respect rejecting these tools, as I said in my first comment.


I don't think rules like that are meant to be enforced 100% perfectly. It's essentially a policy you can point to when banning somebody, and it shifts the locus of disagreement. If you get banned for alleged AI use, you have to argue that you didn't use AI. It doesn't matter to the project whether you were helpful and kind; the policy is no AI.


[I was wrong and posted a link to an earlier policy/discussion overridden by the OP]


Here it is:

> Any contribution of any LLM-generated content will be rejected and result in an immediate ban for the contributor, without recourse.

What about it changes the parent comment?


What are you talking about? The OP says "If you use ... banned without recourse" and the "more information" link manages to have even less information.



