
"If the algorithm is predicting that 10% of white people and 30% of black people will do X, because that is what actually happens, some people will still call that racism but there is no possible way to change it without reducing accuracy."

What is actually happening? Does it tell you whether they are doing X precisely because they are black or white? The racist part might not be the numbers per se, but the conclusion that the color of their skin has anything to do with their respective choices.

edit: spelling



ML is spitting out correlations, not an explicit causal model. If, in reality, X is only indirectly and accidentally correlated with race, but I look at the ML result and conclude that skin color has something to do with X, then the only racist element in the whole system is me.
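To make that concrete, here's a minimal sketch with synthetic data (all names and numbers hypothetical): the outcome depends only on income, income differs by group, and a naive per-group rate "prediction" still shows a group gap that vanishes once you condition on the confounder.

    # Synthetic illustration: group never causes the outcome; only income does.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Group membership (0 or 1), purely a label.
    group = rng.integers(0, 2, size=n)

    # Confounder: income distribution differs by group (this is the only
    # link between group and outcome in the simulation).
    income = rng.normal(loc=np.where(group == 1, 40.0, 60.0), scale=10.0)

    # Outcome X depends ONLY on income, not on group.
    p_x = 1 / (1 + np.exp((income - 50.0) / 5.0))
    x = rng.random(n) < p_x

    # A model that reports per-group base rates will show a gap...
    for g in (0, 1):
        print(f"group {g}: P(X) = {x[group == g].mean():.2f}")

    # ...even though, within a narrow income band, group tells you nothing:
    band = (income > 45) & (income < 55)
    for g in (0, 1):
        print(f"group {g}, mid income: P(X) = {x[(group == g) & band].mean():.2f}")

The per-group rates differ substantially, while the within-band rates are nearly identical; reading the first pair of numbers as "group causes X" would be the human's error, not the model's.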


Agreed. That was the point I was trying to get at, though I might not have phrased it as clearly.



