
Many Preferences, Few Policies: Towards Scalable Language Model Personalization

Cheol Woo Kum, Jai Moondra, Roozbeh Nahavandi, Andrew Perrault, Milind Tambe, Swati Gupta

Abstract

The holy grail of LLM personalization is a single LLM for each user, perfectly aligned with that user's preferences. However, maintaining a separate LLM per user is impractical due to constraints on compute, memory, and system complexity. We address this challenge by developing a principled method for selecting a small portfolio of LLMs that captures representative behaviors across heterogeneous users. We model user preferences across multiple traits (e.g., safety, humor, brevity) through a multi-dimensional weight vector. Given reward functions across these dimensions, our algorithm PALM (Portfolio of Aligned LLMs) generates a small portfolio of LLMs such that, for any weight vector, the portfolio contains a near-optimal LLM for the corresponding scalarized objective. To the best of our knowledge, this is the first result that provides theoretical guarantees on both the size and approximation quality of LLM portfolios for personalization. It characterizes the trade-off between system cost and personalization, as well as the diversity of LLMs required to cover the landscape of user preferences. We provide empirical results that validate these guarantees and demonstrate greater output diversity over common baselines.

Paper Structure

This paper contains 21 sections, 4 theorems, 14 equations, 3 figures, 6 tables, 1 algorithm.

Key Result

Lemma 3.2

Consider weight vectors $w, v \in \mathbf{\Delta}_d$ that satisfy $|w_i - v_i| \le \varepsilon' v_i + \delta'$ for all coordinates $i \in [d]$ for some $\varepsilon', \delta' \ge 0$. Then, $\pi_w$ is a $(2\varepsilon', 2(\delta' R_{\max} + \varepsilon' f_{\max}))$-approximation for $v$.
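Concretely, the lemma's hypothesis is a coordinate-wise closeness test mixing a relative tolerance $\varepsilon'$ and an absolute tolerance $\delta'$, and its conclusion maps $(\varepsilon', \delta')$ to the approximation parameters $(2\varepsilon',\, 2(\delta' R_{\max} + \varepsilon' f_{\max}))$. A minimal sketch of both, where `R_MAX` and `F_MAX` stand in for the paper's (unspecified here) reward and objective bounds:

```python
def is_covered(w, v, eps, delta):
    """True iff |w_i - v_i| <= eps * v_i + delta for every coordinate i."""
    return all(abs(wi - vi) <= eps * vi + delta for wi, vi in zip(w, v))

def approximation_params(eps, delta, r_max, f_max):
    """Parameters (2*eps, 2*(delta*r_max + eps*f_max)) from Lemma 3.2."""
    return 2 * eps, 2 * (delta * r_max + eps * f_max)

# Illustrative values; (eps', delta') = (2/5, 1/80) matches Figure 1,
# and r_max = f_max = 1.0 are placeholder bounds, not values from the paper.
w = [0.5, 0.3, 0.2]
v = [0.45, 0.35, 0.2]
print(is_covered(w, v, eps=2/5, delta=1/80))
print(approximation_params(2/5, 1/80, r_max=1.0, f_max=1.0))
```

So a policy trained for `w` can serve any user whose weight vector `v` passes `is_covered`, with quality loss bounded by `approximation_params`.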

Figures (3)

  • Figure 1: Comparison of weight selection methods in $\{w \in \mathbb{R}^3 : w_1 + w_2 + w_3 = 1,\; w \geq 0\}$. Shown are weights obtained by random sampling (left), a uniform grid (center), and PALM (right), each using 48 weight vectors. Each point corresponds to a selected weight vector $w$, and the surrounding cell represents the set of weight vectors $v$ approximated by $w$, defined by $|w_i - v_i| \le \varepsilon' v_i + \delta'$ with $(\varepsilon', \delta') = (2/5,\, 1/80)$. Random sampling and the uniform grid leave regions of the simplex uncovered, whereas PALM achieves full coverage.
  • Figure 2: An example of a multiplicative grid, an additive grid, and a combined grid in $d = 2$ dimensions.
  • Figure 3: Policy usage distribution of the size-5 uniform portfolio (top) and our size-5 portfolio (bottom) on the Safety Alignment task. Each color denotes the policy selected as best for a given weight. Our portfolio exhibits more balanced usage, whereas two of the five policies in the uniform baseline are never selected.

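The grids of Figure 2 can be sketched in one dimension as follows. This is a plausible reading, not the paper's exact construction: an additive grid spaces points by a step $\delta$, a multiplicative grid by a ratio $(1 + \varepsilon)$, and the combined grid takes their union so that both small and large coordinates are covered.

```python
def additive_grid(delta):
    """Points k * delta in (0, 1], arithmetic spacing."""
    pts, x = [], delta
    while x <= 1.0 + 1e-12:
        pts.append(min(x, 1.0))
        x += delta
    return pts

def multiplicative_grid(eps, start):
    """Points start * (1 + eps)^k in (0, 1], geometric spacing."""
    pts, x = [], start
    while x <= 1.0 + 1e-12:
        pts.append(min(x, 1.0))
        x *= 1.0 + eps
    return pts

def combined_grid(eps, delta):
    """Union of both grids: fine absolute resolution near 0, relative above."""
    return sorted(set(additive_grid(delta)) | set(multiplicative_grid(eps, delta)))

# Illustrative parameters, chosen only to keep the grids small.
print(additive_grid(0.25))
print(multiplicative_grid(0.5, 0.25))
```

A $d$-dimensional grid is then the product of such 1-D grids (restricted to the simplex), which is why the grid size, and hence the portfolio size, grows with the resolution demanded by $(\varepsilon', \delta')$.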
Theorems & Definitions (6)

  • Definition 3.1: $(\varepsilon, \delta)$-approximation
  • Definition 3.2: Portfolio
  • Lemma 3.2
  • Theorem 3.3
  • Lemma A.1
  • Proof of Lemma 3.2 (approximate weight vectors lead to approximate policies)