Your post sort of hints at it, I think, but I'll state it clearly: Misalignment is the main threat when it comes to AI (and especially ASI).
A self-preserving AI isn't meaningfully more dangerous than an AI that solves world hunger by killing us all. In fact, it may be less so if it concludes that starting a war with humans is riskier than letting us live.