4) Do you believe a "rogue super-intelligence" requires the "artificial" part?
Because from what I've seen, however you define "super intelligence", there are e.g. highly organised groups of people that are far closer to meeting that definition than any known software or technology.
In my view, those current "highly organised groups" have "antagonists" with comparable capabilities, and they also consist of individuals who have to be "aligned", which so far seems sufficient for stability.
> AI wouldn't have antagonists with comparable capabilities? Why?
Not individual/human ones. Relying on other AIs to prevent the AI apocalypse seems very optimistic to me, but it may be viable (?)
> Also, no, individuals are not a problem. Not after Nazis, Red Khmer, and Russians.
Those are examples where the "alignment" of participating individuals was successful enough. But all those examples seem very fragile to me, and they would be even less stable if the main intermediate goal were literally to "end all of humanity".