"If the algorithm is predicting that 10% of white people and 30% of black people will do X, because that is what actually happens, some people will still call that racism but there is no possible way to change it without reducing accuracy."
What is actually happening? Does it tell you whether they are doing X precisely because they are black or white? The racist part might not be the numbers per se, but the conclusion that the color of their skin has anything to do with their respective choices.
ML spits out correlations, not an explicit causal model. If, in reality, X is only indirectly and accidentally correlated with race, but I look at the ML result and conclude that skin color has something to do with X, then the only racist element in the whole system is me.
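To make the correlation-vs-causation point concrete, here's a minimal simulation sketch (plain numpy; every name and constant is hypothetical, and the numbers are tuned just to roughly reproduce the 10%/30% split from the quote). Group membership `g` has zero causal effect on the outcome `x`; a hidden confounder `z` (think some socioeconomic variable) both drives `x` and correlates with `g`:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Group membership g (0 or 1) -- has NO causal effect on x in this simulation.
g = rng.integers(0, 2, size=n)

# A hidden confounder z (hypothetical, e.g. some socioeconomic variable)
# correlates with g...
z = rng.normal(loc=np.where(g == 1, 1.6, 0.0), scale=1.0)

# ...and z ALONE causally determines x (logistic in z; g never appears here).
p_x = 1.0 / (1.0 + np.exp(-(z - 2.6)))
x = rng.random(n) < p_x

# A "model" that only sees g still reproduces different base rates per group
# -- roughly 10% vs 30% with these arbitrary constants -- even though g plays
# no causal role whatsoever.
for grp in (0, 1):
    print(f"group {grp}: observed rate of x = {x[g == grp].mean():.1%}")
```

The output shows exactly the two rates the quote describes, and the predictions are "accurate". Concluding from that output that g causes x is the inference step the model itself never made.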
edit: spelling