Investigating Data Interventions for Subgroup Fairness: An ICU Case Study

Erin Tan, Judy Hanwen Shen, Irene Y. Chen

Abstract

In high-stakes settings where machine learning models automate decisions about individuals, algorithmic bias can exacerbate systemic harm to certain subgroups of people. These biases often stem from the underlying training data. In practice, interventions to "fix the data" depend on the additional data sources actually available, many of which are less than ideal. In these cases, the effects of data scaling on subgroup performance become volatile: the improvements from increased sample size are counteracted by the distribution shifts that the new data introduces into the training set. In this paper, we investigate the limitations of combining data sources to improve subgroup performance within the context of healthcare. Clinical models are commonly trained on datasets composed of patient electronic health record (EHR) data from different hospitals or admission departments. Across two such datasets, the eICU Collaborative Research Database and the MIMIC-IV dataset, we find that data addition can both help and hurt model fairness and performance, and that many intuitive strategies for data selection are unreliable. Comparing model-based post-hoc calibration with data-centric addition strategies, we find that combining the two is important for improving subgroup performance. Our work questions the traditional dogma of "better data" for overcoming fairness challenges by comparing and combining data- and model-based approaches.
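To make the Whole-Source experiment concrete, the following is a minimal sketch of training on a target hospital alone versus on the target plus one whole source, then measuring the per-subgroup change in test accuracy. It assumes EHR features are already extracted into NumPy arrays and that a subgroup label is available per patient; all function and variable names are illustrative, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def subgroup_accuracy(model, X_test, y_test, g_test, group):
    """Test accuracy restricted to patients in one demographic subgroup."""
    mask = g_test == group
    return accuracy_score(y_test[mask], model.predict(X_test[mask]))

def whole_source_addition_effect(X_tgt, y_tgt, X_src, y_src,
                                 X_test, y_test, g_test):
    """Compare a model trained on the target hospital alone against one
    trained on target + one whole source, and return the change in test
    accuracy for each subgroup (hypothetical helper, for illustration)."""
    base = LogisticRegression(max_iter=1000).fit(X_tgt, y_tgt)
    augmented = LogisticRegression(max_iter=1000).fit(
        np.vstack([X_tgt, X_src]), np.concatenate([y_tgt, y_src]))
    return {
        g: subgroup_accuracy(augmented, X_test, y_test, g_test, g)
           - subgroup_accuracy(base, X_test, y_test, g_test, g)
        for g in np.unique(g_test)
    }
```

Under this setup, a positive entry in the returned dictionary means data addition helped that subgroup; the paper's central observation is that the sign can differ across subgroups even when overall accuracy improves.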

Figures (24)

  • Figure 1: Change in Overall and subgroup-level accuracy after Whole-Source data addition (Logistic Regression). The change in Overall performance (a) is not reflected equally across subgroups. For example, adding data from any source to Target Hospital 458 improves overall accuracy; while this improvement carries over to the White (b) and Black (c) subgroups, the Other (d) subgroup experiences near-uniform decreases in test performance as a result of data addition. Scaling data directly from the Target Hospital (diagonal of each plot) generally yields improvements in overall and subgroup accuracy, but is typically not the best-performing data addition choice.
  • Figure 2: Change in subgroup ratio vs. Change in subgroup test accuracy after Whole-Source data addition on the eICU Dataset.
  • Figure 3: Change in subgroup accuracy as a function of samples added in Subgroup-Level data addition (see Section \ref{sec:experiments} for details). Across nearly all combinations of subgroups and Test Hospitals, we find that adding more samples does not necessarily lead to larger performance gains. These visualizations lead to the conclusion that naive subgroup balancing is an uninformative data selection heuristic.
  • Figure 4: Subgroup similarity score (using features and labels) vs. Change in subgroup test accuracy for the White (left) and Black (right) subgroups (eICU). The scores are computed using only the patients from the target subgroup in each source. These performance changes result from Subgroup-Level data addition. We do not observe statistically significant correlations in any Test Hospital for either subgroup.
  • Figure 5: Mean consistency and subgroup performance: (a) Subgroup Mean Discrepancy (Eq. \ref{eq:mean-discrepancy}) vs. Subgroup Test Accuracy (Eq. \ref{eq:acc}). (b) Change in Subgroup Mean Discrepancy vs. Change in Subgroup Test Accuracy. Strong negative correlations are observed across all subgroups in (a), and in all minority subgroups in (b). A sketch of one plausible form of this discrepancy follows the figure list.
  • ...and 19 more figures
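The Subgroup Mean Discrepancy in Figure 5 is defined by the paper's Eq. \ref{eq:mean-discrepancy}, which is not reproduced here. One plausible reading, assumed purely for illustration in the sketch below, is the L2 distance between a subgroup's mean feature vector in the source data and in the target data; the function name and this exact form are assumptions, not the paper's definition.

```python
import numpy as np

def subgroup_mean_discrepancy(X_source, g_source, X_target, g_target, group):
    """ASSUMED form of the mean-discrepancy measure: L2 distance between a
    subgroup's mean feature vector in the source data and in the target
    data. Illustrative only; see Eq. (mean-discrepancy) in the paper."""
    mu_source = X_source[g_source == group].mean(axis=0)
    mu_target = X_target[g_target == group].mean(axis=0)
    return float(np.linalg.norm(mu_source - mu_target))
```

Under this reading, the negative correlations reported in Figure 5 would indicate that subgroups whose feature means drift further from the target tend to see lower (or more degraded) test accuracy.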