Tuesday, January 02, 2024

AI predictions with the possibility of fairness?

There's a bunch of work on impossibility results in machine learning and "fairness". The bottom line: if some characteristic splits the population into sub-groups, and those sub-groups have different prevalence of the outcome being predicted, then it isn't feasible to design a predictor that satisfies all the usual fairness criteria at once; it will effectively discriminate against one or other sub-population by at least one measure.
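To make the arithmetic concrete, here is a minimal sketch (my own illustration, not taken from any particular paper) using the known identity that links false positive rate to prevalence, positive predictive value, and false negative rate: FPR = p/(1-p) · (1-PPV)/PPV · (1-FNR). If two groups get the same PPV and the same FNR but have different prevalence p, their FPRs cannot be equal. The numbers below are hypothetical.

```python
def fpr(prevalence, ppv, fnr):
    """False positive rate implied by prevalence, PPV and FNR.

    Identity: FPR = p/(1-p) * (1-PPV)/(PPV) * (1-FNR).
    """
    return prevalence / (1 - prevalence) * (1 - ppv) / ppv * (1 - fnr)

# Hypothetical: both groups get identical PPV (0.8) and FNR (0.3),
# but the outcome prevalence differs (50% vs 20%).
fpr_high = fpr(0.5, ppv=0.8, fnr=0.3)  # 0.175
fpr_low = fpr(0.2, ppv=0.8, fnr=0.3)   # 0.04375

# The false positive rates are forced apart purely by the prevalence gap.
print(fpr_high, fpr_low)
```

So even a classifier that is "fair" by calibration-style measures hands the higher-prevalence group a false positive rate four times larger here.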


One key paper covers this impossibility result (the alternative is to build a "perfect" predictor, which is kind of infeasible).


On the other hand, some empirical studies show that this can be mitigated by building a more approximate predictor/classifier, for example by splitting groups and treating them separately, and even by trying to achieve "fair affirmative action". This sounds like a plan, but (I think - please correct me if I am wrong) it assumes that you can:

  • work out which group an individual should belong to
  • know the difference in prevalence between the sub-groups
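Assuming both of those, the split-groups idea can be sketched as choosing a separate score threshold per group so that some chosen rate (here, the predicted-positive rate) comes out equal. This is my own hedged illustration with made-up score distributions, not the method from any specific study:

```python
import random

random.seed(0)

# Hypothetical score distributions for two sub-groups; group B's scores
# are systematically lower, as might happen when base rates differ.
scores_a = [min(max(random.gauss(0.6, 0.15), 0.0), 1.0) for _ in range(1000)]
scores_b = [min(max(random.gauss(0.4, 0.15), 0.0), 1.0) for _ in range(1000)]

def threshold_for_rate(scores, target_rate):
    """Per-group cutoff so that target_rate of the group sits at or above it."""
    ranked = sorted(scores, reverse=True)
    k = int(len(ranked) * target_rate)
    return ranked[k - 1]

# Equalise the predicted-positive rate at 30% for both groups.
t_a = threshold_for_rate(scores_a, 0.3)
t_b = threshold_for_rate(scores_b, 0.3)
```

The trade-off is visible in the result: the two thresholds differ, so equalising this one rate means applying a different bar to each group, which is exactly the kind of tension the impossibility results formalise.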
It also suggests to me that it might be worth looking at causal inference across all the dimensions, to see if we can identify external factors that need policy intervention to, perhaps, move the sub-populations towards equal prevalence of those other characteristics (high school grade outcomes, risk of re-offending, choose your use case).

I guess one very important value of the work above is to make these things more transparent, however the policy/stats evolve.
