
There's a fuzzy line between an agent analyzing the content of a single page I requested and one making many page fetches on my behalf. I think it's fair to treat an agent that clicks an invisible link as a robot/crawler, since that agent generates more traffic than a regular user agent (a browser) would.

Just trying to make the point that an LLM powered user agent fetching a single page at my request isn't a robot.
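One way sites already draw that line is robots.txt: a crawler is expected to check it, while a browser fetching one page at a user's request is not. A minimal sketch of that check using Python's stdlib `urllib.robotparser` (the rules and agent names here are hypothetical, not from this thread):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: one named crawler is blocked from /private/,
# everyone else is unrestricted.
rules = """
User-agent: ExampleCrawler
Disallow: /private/

User-agent: *
Disallow:
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)  # parse the rules without fetching anything

def allowed(agent: str, url: str) -> bool:
    """True if the parsed robots.txt permits `agent` to fetch `url`."""
    return rp.can_fetch(agent, url)
```

An agent acting as a crawler would consult `allowed()` before each fetch; a single-page user agent, under the argument above, arguably need not.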



