Mind the Graph When Balancing Data for Fairness or Robustness
Jessica Schrouff, Alexis Bellot, Amal Rannen-Triki, Alan Malek, Isabela Albuquerque, Arthur Gretton, Alexander D'Amour, Silvia Chiappa
TL;DR
The paper analyzes when data balancing, used to remove undesired dependencies among covariates, outcomes, and auxiliary factors of variation, yields fair or robust models. It casts the task as a causal Bayesian network and studies the balancing operation that maps the training distribution $P^t$ to a balanced distribution $Q$, deriving sufficient conditions under which training on $Q$ yields risk invariance and optimality across a family of target distributions $\mathrm{P}$. It also shows that balancing is not simply equivalent to removing causal edges: $Q$ need not factorize according to the altered graph. Through semi-synthetic MNIST and Amazon reviews experiments and a CelebA case study, the paper demonstrates that balancing can both improve and degrade fairness or robustness depending on the task and its graph, and that balancing can interact unfavorably with other mitigation techniques such as regularization. Overall, the work emphasizes consulting the task's causal graph before balancing and provides diagnostic guidance for distinguishing the resulting failure modes.
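To make the balancing operation concrete, below is a minimal sketch (not the authors' code) of the standard reweighting that maps an empirical training distribution $P^t(y, a)$ over a discrete label $y$ and auxiliary attribute $a$ to a balanced target $Q(y, a) = P^t(y)\,P^t(a)$, under which $y$ and $a$ are independent. The function name `balancing_weights` and the choice of this particular balanced target are illustrative assumptions.

```python
import numpy as np

def balancing_weights(y, a, eps=1e-12):
    """Per-example weights that reweight the empirical joint P(y, a)
    towards the balanced distribution Q(y, a) = P(y) P(a), making the
    label y and the auxiliary attribute a independent under Q."""
    y, a = np.asarray(y), np.asarray(a)
    # Empirical marginals and joint over the discrete values of y and a.
    p_y = {v: np.mean(y == v) for v in np.unique(y)}
    p_a = {v: np.mean(a == v) for v in np.unique(a)}
    p_ya = {(vy, va): np.mean((y == vy) & (a == va))
            for vy in p_y for va in p_a}
    # Importance weight w(y, a) = Q(y, a) / P(y, a), guarded against
    # empty (y, a) cells by eps.
    w = np.array([p_y[yi] * p_a[ai] / max(p_ya[(yi, ai)], eps)
                  for yi, ai in zip(y, a)])
    return w / w.mean()  # normalize to mean 1 for weighted ERM

# Usage sketch: weighted risk minimization with w, or resampling examples
# with probabilities proportional to w, approximates training under Q
# rather than under the original training distribution P^t.
# y = np.array([0, 0, 0, 1, 1]); a = np.array([0, 0, 1, 1, 1])
# w = balancing_weights(y, a)
```

Note that this operator acts only on the joint over $(y, a)$; as the TL;DR states, the resulting $Q$ need not factorize according to the causal graph with the undesired edges removed, which is the source of the failure modes the paper analyzes.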
Abstract
Failures of fairness or robustness in predictive machine learning settings can be due to undesired dependencies between covariates, outcomes, and auxiliary factors of variation. A common strategy to mitigate these failures is data balancing, which attempts to remove those undesired dependencies. In this work, we define conditions on the training distribution under which data balancing leads to fair or robust models. Our results show that, in many cases, the balanced distribution does not correspond to selectively removing the undesired dependencies in a causal graph of the task, leading to multiple failure modes and even to interference with other mitigation techniques such as regularization. Overall, our results highlight the importance of taking the causal graph into account before performing data balancing.
