> It boggles my mind how anyone can think otherwise.
Some AI dangers are certainly legitimate - it's easy to foresee how an image recognition system might think all snowboarders are male; how a system trained on unfair sentences handed out to criminals might replicate that unfairness, adding a wrongful veneer of science and objectivity; or how a self-driving car trained on data from a country with few mopeds and most pedestrians wearing denim might underperform in a country with many mopeds and few pedestrians in denim.
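As a minimal sketch of that first failure mode, here's a toy model on entirely synthetic data (the 95/5 label split and every number here are assumptions for illustration, not real measurements) showing how a skewed training set gets reproduced at prediction time:

```python
# Minimal sketch (synthetic data): a classifier trained on a skewed sample
# reproduces that skew on new data. All numbers here are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Features" of 1,000 snowboarder photos; the features carry no gender
# signal at all, but 95% of the training photos happen to be labeled
# male (1) purely because of how the dataset was collected.
X_train = rng.normal(size=(1000, 5))
y_train = (rng.random(1000) < 0.95).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# On fresh photos the model falls back on the base rate it absorbed and
# predicts "male" for essentially everyone:
X_test = rng.normal(size=(1000, 5))
print("fraction predicted male:", model.predict(X_test).mean())
# -> close to 1.0, regardless of the real-world population
```

Nothing in the model is "wrong" by its own lights; it faithfully reflects a biased sample, which is exactly what makes the veneer of objectivity so misleading.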
But other AI dangers sound more like the work of philosophers and science fiction authors. The moment people start predicting the end of humans needing to work, or talking about a future evil AI that punishes people who didn't help bring it into existence? That's pretty far down my list of worries.
> But other AI dangers sound more like the work of philosophers and science fiction authors.
Your ability to read this sentence right now, when we have never met and may not even be on the same continent, was once the stuff of science fiction. Don't underestimate technological progress, and specifically, don't underestimate the surprising directions it can take.
Some fantastical AI predictions will happen, most probably will not, and some utterly terrifying ones that no one has foreseen will almost certainly happen. The unknown unknowns should worry you, and AI is full of them.
> The unknown unknowns should worry you, and AI is full of them.
Sure, but where should that rank relative to 'designer babies' and 'rise of authoritarian states as economic powerhouses' and 'corporations that can commit crimes with impunity' and 'rising medical bills' and 'widening gap between rich and poor' and 'far right extremism' and 'water shortages' and 'economic crisis wipes out my savings' and 'cyber warfare targeting vital infrastructure' and 'rising obesity' and 'voter suppression' and the many other things a person could worry about?
In my view, the only wrong opinions on where to rank this are "at the very top" and "at the very bottom or not at all". We will only know the correct answer in hindsight, so the sensible position is to just start funding some legitimate AI safety research.
A whole lot of the arguments for why we shouldn't be concerned boil down to "I cannot conceive of a risk until it has materialized." Impossible to argue against, really.