Hacker News

I guess if you were using the LLM to process data from your customers, e.g. to categorise their emails, then this argument would hold: those setups might be more risky.


Access to untrusted data. Access to private data. Ability to communicate with the outside. Pick two. If the LLM has all three, you're cooked.
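That "pick two" rule can be expressed as a simple capability check. This is just an illustrative sketch, not any real framework's API; the `AgentCapabilities` type and `is_safe_configuration` function are invented for the example:

```python
# Hypothetical sketch of the "pick two" rule: refuse to run an agent
# whose tool set combines all three risky capabilities.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentCapabilities:
    reads_untrusted_data: bool   # e.g. web pages, inbound email
    reads_private_data: bool     # e.g. customer records, secrets
    can_exfiltrate: bool         # e.g. HTTP requests, outbound email

def is_safe_configuration(caps: AgentCapabilities) -> bool:
    """Allow at most two of the three risky capabilities."""
    risky = (caps.reads_untrusted_data,
             caps.reads_private_data,
             caps.can_exfiltrate)
    return sum(risky) < 3

# An email-categorising agent that can also send outbound mail fails the
# check; dropping any one of the three capabilities makes it pass.
```

The point is that the check has to happen at configuration time, before the model ever sees attacker-controlled input; once all three capabilities are live, no amount of prompt-level filtering reliably saves you.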


Agreed. Some of the big companies seem to be claiming that by going with ReallyBigCompany's AI you can do this safely, but you can't. Their models are harder to trick, but they simply cannot be made safe.




