Your post sort of hints at this, but I'll state it plainly: misalignment is the main threat when it comes to AI (and especially ASI).

A self-preserving AI isn't meaningfully more dangerous than an AI that solves world hunger by killing us all. In fact, it may be less dangerous, if it concludes that starting a war with humans is riskier than letting us live.
