Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

There's a way - inject garbage prompts, like in the content meant to be the example - humans might understand that this is in an "example" context, but LLMs are likely to fail as prompt injection is an unsolved problem.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: