
> I'd actually be amazed if you can name a single case where someone has deployed a language model in a safety critical system. That's why your examples are all what-ifs.

AI safety is not a near-term project; it's a long-term project. The what-ifs are exactly the class of problems that need solving. Like it or not, current and next-generation LLMs and similar systems will be used in safety-critical contexts, such as predictive policing, which is already happening.

Edit: and China is already using these systems widely to monitor its citizens, identify them in surveillance footage, and more. I actually find the claim that nobody is using LLMs or other AI systems in at least some limited safety-critical contexts today pretty implausible.


