> I try to not be fatalistic. As I was trying to argue, it's historically inaccurate and it doesn't actually change the outcome.
This is false. Being fatalistic and 'panicking' can definitely influence and thus change the outcome. Your logic is similar to what is (incorrectly) used to dismiss the Y2K problem, for instance: looking back it seems like there was no need to panic, but that is only because a lot of people recognized the urgency, worked their asses off, and succeeded in preventing things from going horribly wrong.
Your handwaving is doing harm by lulling people into a false sense of security. Your initial comment amounts to "Ah, it'll be fine, don't worry about it. We'll adapt, we always have.", even though you provide absolutely no arguments specific to this enormous force of insanely rapid change in an already incredibly unstable, fragile world. We might adapt, but it will require serious thought rather than handwaving and leaning back; even then it might come with massive societal upheaval and a lot of suffering.
I'm wrong to not be fatalistic?! You lost me here.
A lot of people seem to be wasting a lot of energy insisting it is all going to end in tears because <fill in reasons>. All I'm doing here is pointing out that people like this come out of the woodwork with pretty much every big change in society, and then people adapt and society fails to collapse.
I'm not arguing that there won't be changes or that they won't be disruptive to some people. They will be, and people will need to adjust. But I am arguing that a lot of the dystopian outcomes are as unlikely to happen with this particular change as they have been with previous rounds of change. I just don't see a basis for them. I do see a lot of people who want this to be true mainly because they are afraid of having to adapt.
> already incredibly unstable fragile world
There are a lot of people arguing that things are better than ever by most metrics you might want to apply. The reason you might feel stressed about the news is that dystopian headlines sell better and you are being influenced by them. That's also why Y2K got a lot more attention than it deserved in the media, and a lot of people indeed freaked out over it. Much of that got caught up in people believing, for other reasons, that we were all doomed and the apocalypse was coming, which made for amusing headlines. And then the clock ticked over and society failed to collapse.
You largely ignored what I said and displayed exactly the fallacious behavior I was pointing out. Again, Y2K was not a problem precisely because people 'freaked out' (took the problem seriously). Similarly, AI will only not be a problem because of people who spend time and effort mitigating its issues, not because of people like you pretending that since nothing went seriously wrong in the past, nothing automatically will this time (because you "just don't see the basis for it").
See: https://en.wikipedia.org/wiki/Preparedness_paradox