I think I might trust an "AI" pilot more than a human one. Human pilots often fail to respond well to unexpected conditions too (see Air France Flight 447). Pilots have also gone nuts and crashed the plane intentionally (see Germanwings Flight 9525).

Something like this should at least allow airlines to use just one human pilot, who I suspect would have little to do 99% of the time. If something came up that the computer couldn't handle, the human could take over.



It's relatively easy to get stats on the fatal accidents that did happen, but how hard is it to get stats on accidents prevented by human pilots? Both are relevant to evaluating the impact of a 100% AI pilot.

I'm assuming the FAA in the United States has extensive records of incidents, but I also suspect many incidents go unreported. Companies can strong-arm pilots into keeping reports internal, or pilots may decide an incident wasn't worth reporting given the potential impact on their careers.

Does anyone know if there are regulations that require FAA reporting under specific conditions? What conditions qualify? How is this verified?


