
Design-Based Inference for the AUC with Complex Survey Data

Amaia Iparragirre, Thomas Lumley, Irantzu Barrio

Abstract

Complex survey data are typically collected under complex sampling designs, and accounting for the design is essential to obtain unbiased estimates and valid inferences. The area under the receiver operating characteristic curve (AUC) is routinely used to assess the discriminative ability of predictive models for binary outcomes; however, valid inference for the AUC under complex sampling designs remains challenging. Although bootstrap techniques are widely applied for variance estimation in this setting under simple random sampling, traditional implementations do not account for complex designs. In this work, we propose a design-based framework for AUC inference in which replicate weights methods are used to construct confidence intervals and hypothesis tests. We analyze the performance of replicate weights methods and the traditional non-design-based bootstrap through an extensive simulation study. Design-based methods achieve coverage probabilities close to nominal levels and appropriate rejection rates under the null hypothesis, whereas the traditional non-design-based bootstrap tends to underestimate the variance, leading to undercoverage and inflated rejection rates. Differences between the methods decrease as the number of clusters selected per stratum increases. An application to data from the National Health and Nutrition Examination Survey (NHANES) illustrates the practical relevance of the proposed framework. The methods are implemented in the svyROC R package.

Paper Structure

This paper contains 17 sections, 27 equations, 2 figures, and 5 tables.

Figures (2)

  • Figure 1: Graphical summary of the simulation set-up for confidence intervals.
  • Figure 2: Graphical summary of the simulation set-up for hypothesis tests. The left panel illustrates the process followed for the comparison of two independent AUCs, while the right panel depicts the paired AUC comparison.