WARP: On the Benefits of Weight Averaged Rewarded Policies
Alexandre Ramé, Johan Ferret, Nino Vieillard, Robert Dadashi, Léonard Hussenot, Pierre-Louis Cedoz, Pier Giuseppe Sessa, Sertan Girgin, Arthur Douillard, Olivier Bachem
TL;DR
This work addresses the tension in RLHF between maximizing reward and preserving pretraining knowledge by formalizing the KL-reward Pareto front and introducing Weight Averaged Rewarded Policies (WARP). WARP combines three weight-averaging operations—an EMA anchor for KL regularization, SLERP-based merging of independently fine-tuned policies, and LITI interpolation toward initialization—applied iteratively to progressively refine the frontier. Empirical results on the Gemma 7B RLHF pipeline show that WARP yields higher rewards at fixed KL and outperforms open-source baselines on a range of benchmarks, including mathematics tasks, albeit at higher compute cost due to multiple RL runs per iteration. The approach connects to distributed learning and iterated amplification concepts, offering a scalable post-training alignment technique that preserves knowledge while enhancing alignment quality.
Abstract
Reinforcement learning from human feedback (RLHF) aligns large language models (LLMs) by encouraging their generations to have high rewards, using a reward model trained on human preferences. To prevent the forgetting of pre-trained knowledge, RLHF usually incorporates a KL regularization; this forces the policy to remain close to its supervised fine-tuned initialization, though it also hinders reward optimization. To tackle the trade-off between KL and reward, in this paper we introduce a novel alignment strategy named Weight Averaged Rewarded Policies (WARP). WARP merges policies in the weight space at three distinct stages. First, it uses the exponential moving average of the policy as a dynamic anchor in the KL regularization. Second, it applies spherical interpolation to merge independently fine-tuned policies into a new enhanced one. Third, it linearly interpolates between this merged model and the initialization, to recover features from pre-training. This procedure is then applied iteratively, with each iteration's final model used as an advanced initialization for the next, progressively refining the KL-reward Pareto front and achieving superior rewards at fixed KL. Experiments with Gemma policies validate that WARP improves their quality and alignment, outperforming other open-source LLMs.
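To make the three stages concrete, below is a minimal sketch of the weight-averaging operations in plain NumPy, treating each policy as a single flat parameter vector. The function names (`ema_update`, `slerp_merge`, `liti`), the hyperparameter values, and the use of task vectors (deltas from the shared initialization) for SLERP are illustrative assumptions for this sketch, not the paper's implementation, which operates on full LLM weights during RLHF training.

```python
import numpy as np

def ema_update(anchor: np.ndarray, policy: np.ndarray, mu: float = 0.01) -> np.ndarray:
    """Stage 1: refresh the exponential moving average used as the KL anchor."""
    return (1.0 - mu) * anchor + mu * policy

def slerp_merge(theta_a: np.ndarray, theta_b: np.ndarray,
                init: np.ndarray, lam: float = 0.5) -> np.ndarray:
    """Stage 2: spherical interpolation of two independently fine-tuned policies,
    applied here to their task vectors (deltas from the shared initialization)."""
    delta_a, delta_b = theta_a - init, theta_b - init
    cos_omega = np.dot(delta_a, delta_b) / (np.linalg.norm(delta_a) * np.linalg.norm(delta_b))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Nearly collinear task vectors: fall back to linear interpolation.
        return init + (1.0 - lam) * delta_a + lam * delta_b
    merged_delta = (np.sin((1.0 - lam) * omega) * delta_a
                    + np.sin(lam * omega) * delta_b) / np.sin(omega)
    return init + merged_delta

def liti(init: np.ndarray, merged: np.ndarray, eta: float = 0.3) -> np.ndarray:
    """Stage 3: linear interpolation back towards the initialization (LITI)."""
    return (1.0 - eta) * init + eta * merged

# One WARP iteration (sketch). The two RL fine-tunings are stubbed out with
# random perturbations; in practice each would be an RL run whose KL penalty
# is computed against an EMA anchor updated via ema_update during training.
theta_init = np.random.randn(1_000)                      # stands in for the SFT weights
theta_a = theta_init + 0.1 * np.random.randn(1_000)      # stands in for RL run 1
theta_b = theta_init + 0.1 * np.random.randn(1_000)      # stands in for RL run 2
theta_merged = slerp_merge(theta_a, theta_b, theta_init)
theta_next_init = liti(theta_init, theta_merged)         # initialization for the next iteration
```

Iterating this loop, with `theta_next_init` replacing `theta_init`, is what progressively moves the KL-reward Pareto front described above.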
