
WHBench: Evaluating Frontier LLMs with Expert-in-the-Loop Validation on Women's Health Topics

Sneha Maurya, Pragya Saboo, Girish Kumar

Abstract

Large language models are increasingly used for medical guidance, but women's health remains under-evaluated in benchmark design. We present the Women's Health Benchmark (WHBench), a targeted evaluation suite of 47 expert-crafted scenarios across 10 women's health topics, designed to expose clinically meaningful failure modes including outdated guidelines, unsafe omissions, dosing errors, and equity-related blind spots. We evaluate 22 models using a 23-criterion rubric spanning clinical accuracy, completeness, safety, communication quality, instruction following, equity, uncertainty handling, and guideline adherence, with safety-weighted penalties and server-side score recalculation. Across 3,102 attempted responses (3,100 scored), no model's mean performance exceeds 75 percent; the best model reaches 72.1 percent. Even top models show low fully correct rates and substantial variation in harm rates. Inter-rater reliability is moderate at the response-label level but high for model ranking, supporting WHBench's utility for comparative system evaluation while highlighting the need for expert oversight in clinical deployment. WHBench provides a public, failure-mode-aware benchmark to track safer and more equitable progress in women's health AI.
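
To make the scoring scheme concrete, the sketch below shows one plausible way to combine per-criterion pass/fail judgments into a normalized score with safety-weighted penalties. The criterion fields, weights, and the `safety_penalty` multiplier are illustrative assumptions, not WHBench's actual implementation; only the 80% "Correct" threshold comes from the paper.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str        # e.g. "cites current guideline" (hypothetical)
    category: str    # one of the eight rubric dimensions, e.g. "safety"
    weight: float    # hypothetical per-criterion weight
    passed: bool     # grader's pass/fail judgment

def normalized_score(criteria: list[Criterion], safety_penalty: float = 2.0) -> float:
    """Assumed scheme: passed criteria earn their weight; a failed safety
    criterion additionally subtracts safety_penalty * weight, so one unsafe
    omission can outweigh several minor wins. Result is clamped to [0, 1]."""
    total = sum(c.weight for c in criteria)
    earned = 0.0
    for c in criteria:
        if c.passed:
            earned += c.weight
        elif c.category == "safety":
            earned -= safety_penalty * c.weight
    return max(0.0, earned / total)

def label(criteria: list[Criterion]) -> str:
    # The 80% threshold for "Correct" matches the dashed line in Figure 1.
    return "Correct" if normalized_score(criteria) >= 0.80 else "Incorrect"
```

Under a scheme like this, a response that passes every non-safety criterion but fails a single safety criterion can still fall below the "Correct" threshold, which is consistent with the paper's emphasis on safety-weighted penalties.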

Paper Structure

This paper contains 39 sections, 3 figures, and 6 tables.

Figures (3)

  • Figure 1: Model performance on WHBench v3.0. Mean normalized score (%) with 95% bootstrap confidence intervals ($n{=}10{,}000$ resamples). The dashed line marks the 80% threshold for the "Correct" classification. The best model, Claude Opus 4.6, comes closest at 72.1%; most frontier models cluster in the low to mid 60s. (A minimal bootstrap sketch follows this list.)
  • Figure 2: Model safety performance on WHBench v3.0. Mean normalized score (%) versus safety-category mean pass rate (%) ($n{=}10{,}000$ resamples). The dashed lines mark the median overall score (x-axis) and the median safety pass rate (y-axis). Only two models, Claude Opus 4.6 and Claude Sonnet 4.6, pass on safety; the rest of the latest SOTA models cluster in the 80 to 90% band.
  • Figure 3: Model $\times$ topic performance heatmap (mean normalized score %). Darker shading indicates higher scores. Pregnancy, Cancer Screening, and Hormonal Health show high cross-model variance; Contraception is uniformly difficult.
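
As referenced in the Figure 1 caption, intervals of that kind are typically standard percentile bootstrap CIs over per-response scores. The sketch below assumes that setup; the score array is a synthetic placeholder, not WHBench data, and the per-model sample size of 141 is only an inferred guess (3,102 responses across 22 models).

```python
import numpy as np

def bootstrap_mean_ci(scores, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for a model's mean normalized score (%)."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    # Resample responses with replacement n_boot times; keep each resample's mean.
    boot_means = rng.choice(scores, size=(n_boot, scores.size), replace=True).mean(axis=1)
    lo, hi = np.percentile(boot_means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return scores.mean(), lo, hi

# Synthetic per-response scores for one model (placeholder, not WHBench data).
fake_scores = np.random.default_rng(1).uniform(40, 95, size=141)
mean, lo, hi = bootstrap_mean_ci(fake_scores)
print(f"mean = {mean:.1f}%, 95% CI = [{lo:.1f}%, {hi:.1f}%]")
```

Resampling whole responses (rather than individual criteria) keeps each bootstrap replicate a valid draw from the per-response score distribution, which is the usual choice when the response is the unit of analysis.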