This is not true. I've been building a site with SvelteKit and Appwrite for fun. I know nothing about these technologies. ChatGPT has saved me hours of messing around, and I haven't needed SO once. It doesn't always give me the best answer first, sure. It's an iterative process, but all software development is anyway.
That is mostly correct, but the thing is: if you go on Stack Overflow or ChatGPT to have someone else write the solution for you, it's on you if you end up using the wrong solution.
I've used ChatGPT (as well as SO, like many others), but always with enough knowledge (either from experience or from experimenting before opening a question) to tell whether the answers were just wrong.
To each their own, but I treat ChatGPT as a rubber duck 2.0, and I think that's what it excels at.
Any one tool is bad at detecting AI, but combining human intuition with multiple automated tools can produce a very accurate result. Moderators already only issue suspensions when multiple systems agree.
Not to mention that the best AI output might be hard to distinguish, but most AI content is quite obvious (when reading carefully) because of how bad it is. It can trick a casual reader, but not an experienced moderator who knows the subject matter.