Reducing Oracle Feedback with Vision-Language Embeddings for Preference-Based RL

Udita Ghosh, Dripta S. Raychaudhuri, Jiachen Li, Konstantinos Karydis, Amit Roy-Chowdhury

Abstract

Preference-based reinforcement learning can learn effective reward functions from comparisons, but its scalability is constrained by the high cost of oracle feedback. Lightweight vision-language embedding (VLE) models offer a cheaper alternative, but their noisy outputs limit their effectiveness as standalone reward generators. To address this challenge, we propose ROVED, a hybrid framework that combines VLE-based supervision with targeted oracle feedback. Our method uses the VLE to generate segment-level preferences and defers to an oracle only for samples with high uncertainty, identified through a filtering mechanism. In addition, we introduce a parameter-efficient fine-tuning method that adapts the VLE with the obtained oracle feedback, so that the two sources of supervision improve each other over time. This retains the scalability of embeddings and the accuracy of oracles while avoiding the inefficiencies of relying on either alone. Across multiple robotic manipulation tasks, ROVED matches or surpasses prior preference-based methods while reducing oracle queries by up to 80%. Remarkably, the adapted VLE generalizes across tasks, yielding cumulative annotation savings of up to 90% and highlighting the practicality of combining scalable embeddings with precise oracle supervision for preference-based RL.
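
The filtering-and-deferral step described in the abstract can be sketched concretely. Below is a minimal sketch in Python, assuming the VLE emits a soft preference score in [0, 1] for each segment pair; the function name roved_filter, the threshold band, and the choice to spend the budget on the most ambiguous pairs are illustrative assumptions, not the paper's exact implementation.

    import numpy as np

    def roved_filter(vle_scores, tau_lower, tau_upper, oracle_budget):
        """Split VLE-labeled preference pairs into clean and noisy sets.

        vle_scores[i] in [0, 1] is the VLE's soft preference for the
        second segment over the first in pair i. Confident scores
        (outside the [tau_lower, tau_upper] band) keep their VLE labels;
        ambiguous scores become candidates for oracle annotation,
        capped by a fixed budget.
        """
        scores = np.asarray(vle_scores)
        idx = np.arange(len(scores))
        noisy_mask = (scores > tau_lower) & (scores < tau_upper)
        clean = idx[~noisy_mask]
        noisy = idx[noisy_mask]

        # Assumption: spend the budget on the most ambiguous pairs
        # first, i.e., those whose score is closest to 0.5.
        order = np.argsort(np.abs(scores[noisy] - 0.5))
        to_oracle = noisy[order][:oracle_budget]
        return clean, to_oracle

Pairs in clean keep their VLE labels, while pairs in to_oracle are re-annotated by the oracle before reward-model training; those same oracle annotations are also the data used to fine-tune the VLE itself.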

Figures (8)

  • Figure 2: Overview of our approach. Given a task description, ROVED iteratively updates the policy $\pi_\phi$ via reinforcement learning using the reward model $r_\theta$. Trajectory segments from the replay buffer are sampled and labeled with VLE-generated preferences. These samples are then classified as clean or noisy using thresholds $\tau_{upper}$ and $\tau_{lower}$. A budgeted subset of noisy samples is sent for oracle annotation. The reward model is trained on both VLE and oracle-labeled preferences, while the VLE is fine-tuned using oracle annotations and replay buffer samples. A sketch of this mixed preference training objective appears after the figure list.
  • Figure 3: Tasks. An illustration of the different manipulation tasks from Meta-World on which we evaluate our approach.
  • Figure 4: Improved feedback efficiency. ROVED consistently outperforms all baselines with minimal oracle feedback, matching or exceeding PEBBLE’s performance while requiring 50–80% fewer annotations. At equal preference counts, ROVED also outperforms MRN and SURF. Variables (x, y, z) denote the number of oracle preferences used.
  • Figure 5: Knowledge transfer across tasks. With knowledge transfer, ROVED matches or surpasses PEBBLE while reducing annotation requirements by 75–90%. This demonstrates effective transfer in both same task, different object (left) and same object, different task (right) settings. Variables (w, x, y, z) denote the number of preferences used.
  • Figure 6: Experiments with VLM oracles. ROVED achieves comparable or better performance than RL-VLM-F while using $50\%$ fewer oracle preferences (denoted by the number in brackets), demonstrating its ability to generalize across different PbRL algorithms. Experiments are reported on a subset of environments due to the high API cost of VLMs.
  • ...and 3 more figures
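
Figure 2 states that the reward model $r_\theta$ is trained on both VLE- and oracle-labeled preferences. The standard objective for this in PbRL pipelines such as PEBBLE is the Bradley-Terry preference loss; the sketch below assumes that formulation, and the function name, tensor shapes, and soft-label handling are illustrative assumptions rather than the paper's exact code.

    import torch
    import torch.nn.functional as F

    def preference_loss(reward_model, seg_a, seg_b, labels):
        """Bradley-Terry preference loss over a batch of segment pairs.

        seg_a, seg_b: tensors of shape (B, T, obs_dim); reward_model is
        assumed to map them to per-step rewards of shape (B, T).
        labels in [0, 1] give the probability that seg_b is preferred,
        and may come from either the VLE (soft, possibly noisy) or the
        oracle (hard 0/1); both sources are mixed in the same batch.
        """
        ret_a = reward_model(seg_a).sum(dim=1)  # segment return, (B,)
        ret_b = reward_model(seg_b).sum(dim=1)

        # P(seg_b preferred) = sigmoid(ret_b - ret_a) under the
        # Bradley-Terry model; cross-entropy against the labels.
        logits = ret_b - ret_a
        return F.binary_cross_entropy_with_logits(logits, labels.float())

Because binary cross-entropy accepts soft targets, the same loss handles confident VLE labels, ambiguous VLE scores retained below threshold, and hard oracle annotations without special-casing.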