Some people post vulnerability disclosures or pull requests which are obviously fake and generated by an LLM. One example: https://hackerone.com/reports/2298307
These people are contributing in bad faith and basically just wasting the project's time and resources. I think banning them is entirely legitimate and useful. It doesn't matter whether you manage to "catch" exactly 100% of such cases.
I’m aware of the context, but as someone who frequently uses LLMs productively, I find blanket bans on all usage misguided. If in practice the rule is “if we can tell it’s AI-generated, we’ll ban you,” then why not just say that?
Moreover, in the case of high-quality contributions made with the assistance of LLMs, I’d rather know which model was used and what the prompt was.
Nonetheless, I still understand and respect the decision to reject these tools, as I said in my first comment.