The "effective altruism" movement (of which 80,000 Hours is a part) has long been preoccupied with preventing a malevolent, superintelligent AI from killing or enslaving humanity (they call this fostering "friendly AI"). Their position is that this is a low-probability but extremely severe risk that few people are working on preventing.
Whether AI is really more dangerous than, say, pandemics or asteroids, is left as an exercise for the reader.
AI safety isn't an EA "preoccupation"; it's just weird enough and noticeable enough that it's easy to mistake its mere existence for prevalence. It's also not even their weirdest position.
The first question on their list is about the 'problem' of wild animal suffering - and I've personally seen EAs argue that, because some animals are carnivorous, nature should be destroyed.
That's not even the weirdest position EAs take. Look up Brian Tomasik, specifically his paper on the possibility that electrons might suffer.
Concern about superhuman AI is one thing; bullet-biting utilitarianism is another entirely.
(This isn't the only place where their philosophical framework is stuck in the British Empire; they also tend to take a teleological view of history and moral development, and believe that their views are the self-evident endpoint of ethical progress that every culture and civilization will eventually reach. They may not be as bad about this now as they used to be - there are questions about China now - but I don't think they've quite come to terms with cultural contingency yet.)
It's a preoccupation because EA is mostly a rationalist thing, and Eliezer Yudkowsky has had tremendous influence on that movement through his involvement with LessWrong. His views on AI have more or less become a mainstream position among them.
80,000 Hours is more a cultural snapshot of the rationalist movement than anything else.