This is precisely the problem: users still need to screen and reason about the output of LLMs. I'm not sure what generates this implied permission structure, but it does seem to exist.

(I don't mean to imply that the parent doesn't know this; it just seems worth saying explicitly.)



It's only a problem for people who care about accuracy. If it's right about 80-90% of the time, it's good enough.



