The writers of the stochastic parrots paper do not seem to be concerned about unaligned “AI” (text generators) posing a threat to humanity in the way that the doomsayers are. It’s in the title: “stochastic parrot” is a rebuke of those calling LLMs AGI.
Yes, that's correct. It is a slight rebuke, a bit tongue in cheek, but that paper is also old. I grouped them because the groups are, from my perspective, hard to separate. There are arguably two camps, "AI safety" and "alignment." Perhaps they should work on that too: form distinct camps and argue their cases with something well defined, instead of shifting between the "alignment" and "AI safety" labels depending on the day. But I could also be totally wrong. Until then, I don't believe either is really operating in reality.
IMO the conflation of the two is a purposeful move on the part of the AGI doomsday crowd: since they otherwise lack scientific rigor, they cite in bad faith. Timnit Gebru talks about it at length here: https://youtu.be/jAHRbFetqII.