I've never understood their reasoning. It sounds like a Most Interesting Man in the World commercial: "I don't always tackle the hardest AI problems known to mankind, but when I do, I tie one hand behind my back by not fusing data from every possible sensor I can find in the DigiKey catalog."

IR lidar would be pretty useful in rain and fog, I'd think. But I'd rather have all three -- lidar, radar, and visual. Hell, throw in ultrasonic sonar too. That's what Kalman filters are for. Maybe then the system will notice that it's about to ram a fire truck.
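To make that concrete, here's a toy 1-D Kalman filter fusing range readings from two sensors into one estimate. It's only a sketch; the sensor variances and the 50 m "fire truck" range are invented for illustration:

    import numpy as np

    # Toy 1-D Kalman filter: fuse noisy range readings from two sensors.
    # Both variances below are made-up numbers for illustration.
    RADAR_VAR = 4.0    # assumed radar range variance (m^2)
    LIDAR_VAR = 0.25   # assumed lidar range variance (m^2)

    def kalman_update(x, p, z, r):
        """One scalar measurement update: estimate x, estimate variance p,
        measurement z, measurement noise variance r."""
        k = p / (p + r)                    # Kalman gain: weighs p against r
        return x + k * (z - x), (1 - k) * p

    rng = np.random.default_rng(0)
    true_range = 50.0                      # meters to the obstacle
    x, p = 0.0, 1e6                        # start knowing nothing
    for _ in range(20):
        x, p = kalman_update(x, p, rng.normal(true_range, RADAR_VAR ** 0.5), RADAR_VAR)
        x, p = kalman_update(x, p, rng.normal(true_range, LIDAR_VAR ** 0.5), LIDAR_VAR)
    print(f"fused range: {x:.2f} m  (variance {p:.4f})")

Each update pulls the estimate toward whichever sensor is less noisy, and the variance shrinks with every reading regardless of which sensor it came from.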



The puzzle piece you are missing is that sensor fusion is not an easy problem either. The Tesla perspective is that adding N different sensors into the mix turns your M perception problems into N*M problems: every edge case now has to be handled per sensor, plus you have to decide which sensor to believe when they disagree.


I hope that's not their perspective, because it would be wrong. There are entire subdisciplines of control theory devoted to sensor fusion, and it's not a particularly new field. Rule 1: More information is better. Rule 2: If the information is unreliable (and what information isn't?), see rule 1.
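Rule 1 is just arithmetic: fuse two independent, unbiased estimates with inverse-variance weights and the fused variance comes out lower than either input's, so even a noisy extra sensor helps. A back-of-the-envelope sketch (variances invented):

    # Inverse-variance weighting of two independent, unbiased estimates.
    # Fused variance 1/(1/v1 + 1/v2) is below min(v1, v2) for any v1, v2 > 0.
    v_cam, v_radar = 1.0, 4.0                      # invented variances (m^2)
    v_fused = 1.0 / (1.0 / v_cam + 1.0 / v_radar)
    print(v_fused)                                 # 0.8 < min(1.0, 4.0)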

Some potential improvements are relatively trivial, even without getting into the hardcore linear algebra. If the camera doesn't see an obstacle but both radar and lidar do, that's an opportunity to fail relatively safely (a spurious braking event) rather than in a way that causes a horrific crash.
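A crude version of that fail-safe doesn't even need a filter, just a vote. A sketch (the 2-of-3 threshold and the sensor lineup are my own invention, not anyone's shipped logic):

    def should_brake(camera_sees, radar_sees, lidar_sees):
        """Brake if a majority of sensors report an obstacle. Biased
        toward false positives (phantom braking) over false negatives
        (ramming the fire truck)."""
        return camera_sees + radar_sees + lidar_sees >= 2

    # Camera misses it, radar and lidar both see it: brake anyway.
    assert should_brake(False, True, True)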

Bottom line: if you can't do sensor fusion, you literally have no business working on leading-edge AI/ML applications.



