What if the insurance company used machine learning to calculate the premium, and the result correlated with gender, race, etc.? Is that also considered discrimination? Whose fault is that?
You would have to show that gender/race were not inputs to the machine learning algorithm. Correlations with inputs that were not restricted would not, by themselves, be a problem (in many situations ZIP code is "close enough" to act as a proxy for race).
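To make the proxy problem concrete, here is a minimal sketch with synthetic data (all names and numbers are hypothetical, not from any real insurer): a premium model is fit without the protected attribute as an input, using only a "ZIP code" feature that happens to correlate strongly with group membership. The premiums still end up differing by group.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical synthetic data: a binary protected attribute, and a
# "zip_signal" feature that is a noisy copy of it (a strong proxy).
protected = rng.integers(0, 2, n)
zip_signal = protected + rng.normal(0, 0.3, n)

# True risk depends on group; the model never sees 'protected',
# only an intercept and the proxy feature (ordinary least squares).
risk = 100 + 50 * protected + rng.normal(0, 5, n)
X = np.column_stack([np.ones(n), zip_signal])
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)
premium = X @ beta

# Even though 'protected' was excluded as an input, the fitted
# premiums differ substantially between the two groups.
gap = premium[protected == 1].mean() - premium[protected == 0].mean()
print(f"mean premium gap between groups: {gap:.1f}")
```

The point of the sketch: dropping the protected attribute from the feature set does not remove the correlation from the outputs when a strong proxy remains, which is exactly why "were gender/race inputs?" and "do the premiums correlate with gender/race?" are different questions.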