From Biased Selective Labels to Pseudo-Labels: An Expectation-Maximization Framework for Learning from Biased Decisions

Trenton Chang, Jenna Wiens

TL;DR

Disparate Censorship Expectation-Maximization (DCEM), an algorithm for learning in the presence of disparate censorship inspired by causal models of selective labels, is proposed and validated on synthetic and clinical data; it improves bias mitigation without sacrificing discriminative performance compared to baselines.

Abstract

Selective labels occur when label observations are subject to a decision-making process; e.g., diagnoses that depend on the administration of laboratory tests. We study a clinically inspired selective label problem called disparate censorship, where labeling biases vary across subgroups and unlabeled individuals are imputed as "negative" (i.e., no diagnostic test = no illness). Machine learning models naively trained on such labels could amplify labeling bias. Inspired by causal models of selective labels, we propose Disparate Censorship Expectation-Maximization (DCEM), an algorithm for learning in the presence of disparate censorship. We theoretically analyze how DCEM mitigates the effects of disparate censorship on model performance. We validate DCEM on synthetic data, showing that it improves bias mitigation (area between ROC curves) without sacrificing discriminative performance (AUC) compared to baselines. We achieve similar results in a sepsis classification task using clinical data.
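
To make the censoring mechanism concrete, here is a minimal synthetic sketch of disparate censorship under the causal model described in Figure 1. The logistic forms and coefficients are illustrative assumptions, not the paper's simulation settings:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Covariates x and sensitive attribute a; ground truth y depends on x only,
# matching the causal graph in Figure 1 (y <- x; t <- x, a; y_tilde = t * y).
x = rng.normal(size=n)
a = rng.integers(0, 2, size=n)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-2.0 * x)))

# Disparate censorship: the decision to test depends on both x and a, so
# label observation rates differ across subgroups (the -1.0 * a term is an
# assumed group-dependent testing penalty).
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-(1.5 * x - 1.0 * a))))

# Unlabeled individuals are imputed as negative: "no diagnostic test = no illness".
y_tilde = t * y

print("testing rate  a=0:", t[a == 0].mean(), " a=1:", t[a == 1].mean())
print("observed positive rate:", y_tilde.mean(), " true positive rate:", y.mean())
```

A model fit naively to `y_tilde` sees systematically fewer positives in the less-tested group, which is the labeling bias DCEM aims to mitigate.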

Paper Structure

This paper contains 90 sections, 14 theorems, 71 equations, 33 figures, 4 tables, and 1 algorithm.

Key Result

Theorem 3.1 (E-step)

The posterior distribution of $y^{(i)}$ given the observed data is equivalent to the pseudo-label computed in DCEM's E-step.
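
The theorem statement above is abridged. Under the causal model in Figure 1, $\tilde{y}^{(i)} = t^{(i)} y^{(i)}$ and $y \perp t \mid \mathbf{x}$, so one plausible reconstruction of this posterior (our reading, not a quote from the paper) is:

$$
Q(y^{(i)}) \,=\, p_\theta\big(y^{(i)} = 1 \mid \mathbf{x}^{(i)}, t^{(i)}, \tilde{y}^{(i)}\big) \,=\,
\begin{cases}
\tilde{y}^{(i)} & \text{if } t^{(i)} = 1,\\
p_\theta\big(y^{(i)} = 1 \mid \mathbf{x}^{(i)}\big) & \text{if } t^{(i)} = 0,
\end{cases}
$$

since the observed label is exact for tested individuals, while for untested individuals ($t^{(i)} = 0$, hence $\tilde{y}^{(i)} = 0$ deterministically) the censored label carries no information beyond $\mathbf{x}^{(i)}$.

A hypothetical EM-style loop built on this posterior, reusing the synthetic variables `x`, `t`, `y_tilde`, and `n` from the sketch after the abstract; DCEM's causal regularization (Definition 3.3) is omitted, and all names here are ours:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = x.reshape(-1, 1)
q = y_tilde.astype(float)  # initial pseudo-labels: the censored labels
model = LogisticRegression()
for _ in range(20):
    # M-step (simplified): maximize the expected log-likelihood by fitting
    # on duplicated rows with fractional sample weights q and 1 - q.
    model.fit(np.vstack([X, X]), np.r_[np.ones(n), np.zeros(n)],
              sample_weight=np.r_[q, 1.0 - q])
    # E-step: tested individuals keep their observed labels; untested ones
    # receive the model posterior p_theta(y = 1 | x) as a pseudo-label.
    q = np.where(t == 1, y_tilde, model.predict_proba(X)[:, 1])
```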

Figures (33)

  • Figure 1: Top: Causal model of disparate censorship ($\mathbf{x}$: covariates, $y$: ground truth, $\tilde{y}$: observed label, $t$: testing/labeling indicator, $a$: sensitive attribute). Shaded variables are fully observed. Bottom: Disparate Censorship Expectation-Maximization (DCEM). Dashed nodes are probabilistic estimates.
  • Figure 2: Comparison of ROC gap (left) and AUC (right) of selected models at $q_y = 0.5, k=1, q_t = 2$. Each point represents a different $s_Y$. Our method (DCEM, magenta) mitigates bias while maintaining competitive AUC compared to baselines, with a tighter range and improved empirical worst-case for both metrics. "-": median, "$\bigtriangleup$": worst-case ROC gap, "$\bigtriangledown$": worst-case AUC. (A sketch of the ROC-gap computation appears after this list.)
  • Figure 3: Relative frequencies of ROC gaps for DCEM vs. tested-only models at similar AUC (increasing to the right), pooling models across all $k, q_y, q_t$ tested. Dashed lines = mean ROC gap by model. DCEM improves bias mitigation among models with similar AUC.
  • Figure 4: ROC gaps (left) and AUC (right) of baselines and DCEM on sepsis classification task at $q_t=1.5, k=4$. Each dot represents a different $s_T$. Our method (DCEM, magenta) maintains competitive or better bias mitigation and discriminative performance compared to baselines. "-": median, "$\bigtriangleup$": worst-case ROC gap, "$\bigtriangledown$": worst-case AUC.
  • Figure 5: Contour plot of $\hat{y}^{(i)}_{\text{OPT}}(Q(y^{(i)}), \hat{t}^{(i)})$ with respect to $\hat{t}^{(i)}$ ($x$-axis) and $Q(y^{(i)})$ ($y$-axis). $\hat{y}^{(i)}_{\text{OPT}}(Q(y^{(i)}), \hat{t}^{(i)})$ scales with $Q(y^{(i)})$ but decreases in $\hat{t}^{(i)}$.
  • ...and 28 more figures
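
The ROC gap reported in Figures 2-4 is the area between subgroup ROC curves. A minimal sketch of how such a metric can be computed, assuming two subgroups and ground-truth labels available at evaluation time (the function name `roc_gap` and the grid resolution are our choices, not the paper's):

```python
import numpy as np
from sklearn.metrics import roc_curve

def roc_gap(y_true, y_score, group):
    """Area between the ROC curves of two subgroups (0 and 1)."""
    fpr_grid = np.linspace(0.0, 1.0, 1001)
    tprs = []
    for g in (0, 1):
        mask = group == g
        fpr, tpr, _ = roc_curve(y_true[mask], y_score[mask])
        tprs.append(np.interp(fpr_grid, fpr, tpr))
    # The grid is uniform on [0, 1], so the mean absolute difference
    # approximates the integral of |TPR_0(fpr) - TPR_1(fpr)|.
    return float(np.mean(np.abs(tprs[0] - tprs[1])))

# Example (with the synthetic variables from the sketch after the abstract):
# gap = roc_gap(y, model.predict_proba(X)[:, 1], a)
```

A gap of 0 means the model ranks cases identically well in both groups; larger values indicate subgroup-dependent performance, which is the bias the figures measure.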

Theorems & Definitions (30)

  • Theorem 3.1: E-step
  • Theorem 3.2: M-step (informal)
  • Definition 3.3: Causal regularization strength (informal)
  • Theorem 3.4 (informal)
  • Proposition 3.5
  • Theorem: E-step derivation
  • Proof
  • Remark 2.1
  • Theorem: M-step derivation
  • Proof
  • ...and 20 more