
Uncertainty-Aware Variational Reward Factorization via Probabilistic Preference Bases for LLM Personalization

Gyuseok Lee, Wonbin Kweon, Zhenrui Yue, SeongKu Kang, Jiawei Han, Dong Wang

Abstract

Reward factorization personalizes large language models (LLMs) by decomposing rewards into shared basis functions and user-specific weights. Yet, existing methods estimate user weights from scarce data in isolation and as deterministic points, leading to inaccurate and unreliable inference. We introduce Variational Reward Factorization (VRF), an uncertainty-aware framework that represents each user's preferences as a variational distribution in a shared preference space. VRF infers user distributions via a variational encoder, derives weights through Wasserstein distance matching with shared probabilistic bases, and downweights uncertain estimates through a variance-attenuated loss. On three benchmarks, VRF outperforms all baselines across seen and unseen users, few-shot scenarios, and varying uncertainty levels, with gains extending to downstream alignment.
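The abstract's matching step can be illustrated with a small sketch. Assuming (as is standard for diagonal Gaussians) the closed-form squared 2-Wasserstein distance $W_2^2 = \|\mu_1-\mu_2\|^2 + \|\sigma_1-\sigma_2\|^2$, and assuming weights are obtained by a softmax over negative distances — the function names, the softmax step, and the temperature `tau` here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def w2_gaussian(mu1, sigma1, mu2, sigma2):
    """Squared 2-Wasserstein distance between diagonal Gaussians:
    W2^2 = ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2."""
    return np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2)

def user_weights(user_mu, user_sigma, basis_mus, basis_sigmas, tau=1.0):
    """Map a user's variational distribution to mixture weights over K
    probabilistic bases via a softmax of negative W2^2 distances
    (softmax form is an assumption for illustration)."""
    d = np.array([w2_gaussian(user_mu, user_sigma, m, s)
                  for m, s in zip(basis_mus, basis_sigmas)])
    logits = -d / tau
    z = np.exp(logits - logits.max())  # shift for numerical stability
    return z / z.sum()
```

A user distribution close to one basis then receives the largest weight for that basis, and the weights sum to one by construction.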

Paper Structure

This paper contains 55 sections, 2 theorems, 18 equations, 12 figures, 5 tables.

Key Result

Proposition 1

$\sigma_\Delta^2 = \sum_{k=1}^K w_{u,k}(\Delta\phi_k - \mu_\Delta)^2$ is concave with respect to $\mathbf{w}_u$.

Figures (12)

  • Figure 1: Motivating example. Preferences can be consistent (certain) or diverse (uncertain).
  • Figure 2: Overall framework of VRF. Best viewed in color.
  • Figure 3: Few-shot adaptation (Varying $|\mathcal{C}_u|$)
  • Figure 4: Uncertainty robustness
  • Figure 5: Inference-time alignment (grouped by $|\mathcal{C}_u|$)
  • ...and 7 more figures

Theorems & Definitions (4)

  • Proposition 1
  • Proof
  • Proposition 2
  • Proof