Random Is Hard to Beat: Active Selection in online DPO with Modern LLMs

Giyeong Oh, Junghyun Lee, Jaehyun Park, Youngjae Yu, Wonho Bae, Junhyug Noh

Abstract

Modern LLMs inherit strong priors from web-scale pretraining, which can limit the headroom of post-training data-selection strategies. While Active Preference Learning (APL) seeks to optimize query efficiency in online Direct Preference Optimization (DPO), the inherent richness of on-policy candidate pools often renders simple Random sampling a surprisingly formidable baseline. We evaluate uncertainty-based APL against Random across harmlessness, helpfulness, and instruction-following settings, utilizing both reward models and LLM-as-a-judge proxies. We find that APL yields negligible improvements in proxy win-rates compared to Random. Crucially, we observe a dissociation where win-rate improves even as general capability -- measured by standard benchmarks -- degrades. APL fails to mitigate this capability collapse or reduce variance significantly better than random sampling. Our findings suggest that in the regime of strong pre-trained priors, the computational overhead of active selection is difficult to justify against the "cheap diversity" provided by simple random samples. Our code is available at https://github.com/BootsofLagrangian/random-vs-apl.
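
To make the selection step concrete, below is a minimal sketch (not the paper's implementation) of one annotation round in budgeted online DPO: uncertainty-based APL scores each prompt's on-policy candidates and queries the most ambiguous prompts, while the Random baseline draws uniformly from the same pool. All names here (score_fn, select_apl, select_random) are illustrative, and the top-2 score margin is only one common uncertainty proxy; the paper's exact acquisition function may differ.

    # Illustrative sketch only -- not the authors' code. A real setup would
    # score K on-policy responses per prompt with a reward model or the
    # DPO implicit reward; here a random stub stands in for those scores.
    import numpy as np

    rng = np.random.default_rng(0)

    def score_fn(prompt_id: int, k: int) -> np.ndarray:
        """Stand-in for reward scores of k on-policy responses to a prompt."""
        return rng.normal(size=k)

    def select_apl(prompt_ids, k=4, budget=8):
        """Uncertainty-based APL: query prompts whose top two responses are
        closest in score (smallest margin = most ambiguous preference)."""
        margins = []
        for pid in prompt_ids:
            s = np.sort(score_fn(pid, k))
            margins.append(s[-1] - s[-2])   # gap between the best two responses
        order = np.argsort(margins)         # ascending: most uncertain first
        return [prompt_ids[i] for i in order[:budget]]

    def select_random(prompt_ids, budget=8):
        """Random baseline: uniform sample from the same candidate pool."""
        return list(rng.choice(prompt_ids, size=budget, replace=False))

    pool = list(range(64))                  # candidate prompt ids this round
    print("APL   :", select_apl(pool))
    print("Random:", select_random(pool))

Note that select_apl pays for K scoring calls per candidate prompt before any annotation happens; the paper's central finding is that this overhead buys little over select_random when the on-policy pool is already diverse.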

Figures (3)

  • Figure 1: Harmlessness alignment stability (Pareto frontier). We plot capability change ($\Delta$acc_norm on standard benchmarks) against proxy win-rate for Llama-3.2-3B, Qwen3-1.7B, and Gemma-2B. DeBERTa exhibits the most severe failure mode: despite high win-rates ($>0.7$), policies can suffer large capability collapse ($\Delta$acc_norm$<-10\%$), consistent with proxy over-optimization. Skywork and Beaver show more conservative trade-offs. Across judges, Random sampling (circles) often matches or exceeds the proxy win-rate of APL (squares), albeit with higher variance, suggesting limited marginal benefit from active selection over cheap on-policy diversity.
  • Figure 2: Qwen3-1.7B across datasets and judges. DeBERTa: APL underperforms Random despite comparable or higher proxy win-rates. Skywork: no statistically significant difference between APL and Random. GPT-5-mini: APL performs worse than Random under the same budget.
  • Figure 3: Judge Scaling (GPT-5 Family). We perform online DPO with Qwen2.5-7B using the GPT-5 family as both annotator and evaluator on Ultrafeedback.