
In my frank opinion, this optimism is, sorry, simply stupid.

These tools will not stop where they are. They will get better. They will outperform more and more humans at more and more tasks. They will become both more general (as opposed to good at narrow tasks like writing text) and more capable.

They will become "smarter than humans", first a little smarter in some respects, then much smarter in more respects. Eventually they will become superintelligent, in the same way in which we are superintelligent relative to a chimpanzee or an ant.

We will lose our jobs. Human labor will increasingly become worthless. We will be more hindrance than help, similar to how a monkey can't earn even small amounts of money: its limited intelligence is not merely of limited use; it has negative value in any real work environment.

But things will not stop at mass unemployment. People in Europe, for example, will not receive a significant UBI when the taxable AI companies are all US or Chinese companies. As wages tend to zero, people without investments in stocks or land will be increasingly impoverished compared to the rest.

There will be an arms race, likely between the US and China. Other countries without big AI companies will be left in the dust. Each country big in AI will be incentivized to push forward as fast and recklessly as possible, security be damned. The Manhattan or Apollo programs will be a joke in comparison, because only the country that achieves true superintelligence first will get a chance to control the future.

But the question is not so much whether the US or China wins this arms race. The question is whether the resulting superintelligent system(s) will be controllable at all in the long term. They might escape human control relatively quickly once they significantly outperform us in all cognitive abilities, similar to how animals can't control humans and children aren't in charge of their parents.

And even if we somehow solve the problem of controlling things that are vastly smarter than us: in the long term, gradual disempowerment awaits. We will voluntarily offload more and more tasks to AI, more and more decisions, because anything else will become increasingly inefficient. And at some point we will realize that we lost control, silently and forever, quite a while ago. The point of no return would be invisible.

And eventually, the coming AI race may simply get rid of us, not out of malice, but because we are in the way: in the way of projects too large for us to comprehend, similar to how no animal can begin to comprehend why its habitat is being destroyed by human urbanization.

The only slim hope is that superintelligence will be created in such a way that it wants to care for us, like some people genuinely care for their favorite pets. Then we will get our AI utopia; then our future will be bright. But I wouldn't bet on it.

Yet all these considerations are readily ignored. Most people can't, or refuse to, extrapolate the exponential AI growth of the last few years. Or they think AI will "hit a wall" just before superhuman intelligence, which is astonishingly stupid. There is no other way to say it.

It's all so obvious.




I suspect it's somewhere in the middle. There will be an economic crash; when that happens, most of the AI companies will go under, because they have been devouring huge amounts of the economy for very little improvement.

A lot of people will lose their jobs, houses, and families. People won't have the money to buy the products the AI tools are supposed to make cheap. MORE companies will go under. This leads to civil unrest... almost globally.

A new form of government will arise. No idea what it's going to be, but the concept of plundering the earth for corporate greed and printing the economy into oblivion will be the backbone of what the new forms of government and economy WON'T do.


All of a sudden, a nuclear war that destroys our global capability to produce advanced microchips doesn't sound so bad, does it?



