
LLM argumentative essays tend to have this "gish-gallop" energy: they say a bunch of tenuously related and vaguely supported things, leaving the reader wondering whether it was the author who failed to connect the dots, or them.


Yes, so do human ones (just not the ones that filter through to you). The output is like this because the training data is like this.



