Learning to Rank Caption Chains for Video-Text Alignment

Ansel Blume, Burak Uzkent, Shalini Chaudhuri, Garin Kessler

Abstract

Direct preference optimization (DPO) is an effective technique for training language models to generate preferred responses over dispreferred ones. However, this binary "winner-takes-all" approach is suboptimal for vision-language models, whose response quality depends heavily on visual content. In particular, a response may still be faithful to the visual inputs even if it is less preferable than an alternative. The standard Bradley-Terry DPO formulation lacks this nuance, upweighting winning responses without sufficient regard for whether the "losing" response still maintains high visual fidelity. In this work, we investigate ranking optimization as an alternative that more precisely captures how faithful each response is to the visual inputs. We focus on video-text alignment using detailed video captions, proposing a method to generate challenging, totally ordered caption chains at scale through repeated caption degradation. Our results show that ranking optimization outperforms binary DPO for long-form content generation and assessment. Importantly, we find that these approaches require finetuning the vision encoder to be effective, challenging the view of DPO as a purely language-reweighting process.
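
To make the contrast with binary DPO concrete, the sketch below shows one plausible listwise objective over a caption chain. The paper's exact loss is not given in this excerpt; the sketch assumes a Plackett-Luce generalization of the Bradley-Terry DPO objective (for a chain of two captions it reduces to standard binary DPO), and the function name and tensor shapes are illustrative.

    import torch

    def ranking_dpo_loss(logp_policy, logp_ref, beta=0.1):
        """Plackett-Luce listwise loss over one caption chain.

        logp_policy: (K,) summed token log-probs of the K captions under
            the policy, ordered best-to-worst (index 0 = most faithful).
        logp_ref: (K,) the same captions scored by a frozen reference model.
        beta: DPO inverse temperature.
        """
        # Implicit per-caption rewards, exactly as in binary DPO.
        rewards = beta * (logp_policy - logp_ref)                   # (K,)
        # Caption k must beat every caption ranked below it:
        # denom[k] = logsumexp(rewards[k:]), computed here via a
        # reversed running log-sum-exp.
        denom = torch.logcumsumexp(rewards.flip(0), dim=0).flip(0)  # (K,)
        # For K = 2 this is exactly -log sigmoid(r_winner - r_loser).
        return -(rewards - denom).sum()

Summing over every rank, rather than over a single winner/loser pair, is what lets the objective exploit the full ordering of the chain.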

Figures (5)

  • Figure 1: Existing models struggle to differentiate between visual details in long-form and detailed captions. We propose caption ranking to capture fine-grained differences for video-language alignment.
  • Figure 2: Our proposed framework. Given a set of high-quality video captions, an LLM generates a totally ordered caption chain by repeatedly introducing visually-grounded errors into the previous caption. These caption chains are then used to train VLMs by having them rank the generated captions using ranking-based DPO. The captions' similarity forces the models to pay attention to fine, visually-grounded details.
  • Figure 3: Error types used to generate caption chains through repeated caption mutation.
  • Figure 4: Performance of Qwen2.5-VL and PLM with respect to the number of captions used in the chain. Model performance generally increases with chain length.
  • Figure 5: Qualitative comparison of our Ranking model vs. an SFT baseline. Green highlights video-grounded descriptions. The Ranking model captures finer details better than SFT. See Appendix for more in-depth comparison and examples.
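
To make the chain construction shown in Figure 2 concrete, a minimal sketch follows. The `mutate` callable stands in for an LLM prompted to inject one visually grounded error of the kinds cataloged in Figure 3 (e.g., a wrong attribute, count, or action); the names here are hypothetical, not the paper's code.

    def build_caption_chain(caption, mutate, length=5):
        """Build a totally ordered caption chain by repeated degradation.

        Each call to `mutate` injects one more visually grounded error into
        the previous caption, so chain[0] is the most faithful caption and
        chain[-1] the least faithful: a total order by construction.
        """
        chain = [caption]
        for _ in range(length - 1):
            chain.append(mutate(chain[-1]))
        return chain

Because adjacent captions differ by only a single injected error, every neighboring pair is a hard negative, which is what pushes the model to attend to fine-grained, visually grounded detail.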