MLLM as Video Narrator: Mitigating Modality Imbalance in Video Moment Retrieval

Weitong Cai, Jiabo Huang, Shaogang Gong, Hailin Jin, Yang Liu

TL;DR

A multi-modal large language model (MLLM) is used as a video narrator to generate plausible textual descriptions of the video, thereby mitigating the modality imbalance and boosting temporal localization.

Abstract

Video Moment Retrieval (VMR) aims to localize a specific temporal segment within an untrimmed long video given a natural language query. Existing methods often suffer from inadequate training annotations: the query sentence typically matches only a fraction of the prominent foreground video content, with limited wording diversity. This intrinsic modality imbalance leaves a considerable portion of the visual information unaligned with text. It confines cross-modal alignment knowledge to a limited text corpus, leading to sub-optimal visual-textual modeling and poor generalizability. In this work, we leverage the visual-textual understanding capability of a multi-modal large language model (MLLM) and use it as a video narrator to generate plausible textual descriptions of the video, thereby mitigating the modality imbalance and boosting temporal localization. To effectively maintain temporal sensitivity for localization, we obtain text narratives at specific video timestamps and construct a structured text paragraph with time information that is temporally aligned with the visual content. We then perform cross-modal feature merging between the temporal-aware narratives and the corresponding video temporal features to produce semantic-enhanced video representation sequences for query localization. Subsequently, we introduce a uni-modal narrative-query matching mechanism, which encourages the model to extract complementary information from contextually cohesive descriptions for improved retrieval. Extensive experiments on two benchmarks show the effectiveness and generalizability of our proposed method.
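As a rough illustration of the pipeline described above, the sketch below (our own simplification, not the authors' released code) assembles per-timestamp MLLM captions into a structured, temporally aligned narrative paragraph and then merges narrative features with video snippet features via cross-attention. Names such as `build_narrative_paragraph` and `NarrativeVideoMerger` are hypothetical placeholders.

```python
# A minimal sketch (not the paper's implementation) of two steps from the abstract:
# (1) building a time-stamped narrative paragraph from per-snippet MLLM captions,
# (2) cross-modal feature merging: video snippets attend to narrative tokens to
#     produce text-enhanced video representations. All names are hypothetical.
from typing import List, Tuple

import torch
import torch.nn as nn


def build_narrative_paragraph(captions: List[Tuple[float, str]]) -> str:
    """Format (timestamp, caption) pairs into one temporally structured paragraph."""
    return " ".join(f"[{t:.1f}s] {c.strip()}" for t, c in sorted(captions))


class NarrativeVideoMerger(nn.Module):
    """Cross-attention fusion of narrative token features into video snippet features."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, video_feats: torch.Tensor, narrative_feats: torch.Tensor) -> torch.Tensor:
        # video_feats:     (B, T, D) snippet-level visual features
        # narrative_feats: (B, L, D) token-level features of the narrative paragraph
        fused, _ = self.cross_attn(query=video_feats,
                                   key=narrative_feats,
                                   value=narrative_feats)
        # Residual connection keeps the original visual content intact.
        return self.norm(video_feats + fused)


if __name__ == "__main__":
    paragraph = build_narrative_paragraph(
        [(0.0, "A man in a pink shirt enters the room."),
         (5.0, "He sits down and opens a laptop.")])
    print(paragraph)

    merger = NarrativeVideoMerger(dim=512)
    video = torch.randn(2, 32, 512)      # 32 video snippets
    narrative = torch.randn(2, 64, 512)  # 64 narrative tokens
    print(merger(video, narrative).shape)  # torch.Size([2, 32, 512])
```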

Paper Structure

This paper contains 12 sections, 17 equations, 5 figures, 5 tables.

Figures (5)

  • Figure 1: An illustration of the intrinsic modality imbalance problem in video-query samples. (a) Queries in existing datasets capture only a fraction of the prominent foreground video content (semantic completeness) with limited wording diversity, leaving a significant amount of visual information unaligned with text. (b) We leverage an MLLM as a video narrator to generate structured narratives temporally aligned with the corresponding video, enhancing cross-modal understanding with a richer text corpus and facilitating more accurate and generalized predictions.
  • Figure 2: An overview of our Text-Enhanced Alignment model. We take an offline MLLM as a video narrator to generate a structured narrative paragraph $C^a$ that is temporally aligned with the input video snippet feature sequence $V$. Text-Enhanced Alignment performs video-narrative knowledge enhancement to acquire more discriminative text-enhanced video representations. In parallel, a paragraph-query interaction module complements context understanding and promotes more generalizable predictions (a hedged sketch of this interaction follows the figure list).
  • Figure 3: Component analysis
  • Figure 4: Combined weight
  • Figure 5: Qualitative example on Charades. The structured narratives provide guidance ('A man wearing a pink shirt' vs. 'A woman in a blue dress') to help the model understand who is the 'another person' and get more accurate predictions.
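The paragraph-query interaction (uni-modal narrative-query matching) mentioned in the abstract and in the Figure 2 caption is only described at a high level on this page. A plausible, hedged reading is a matching score between the query sentence and the narrative paragraph computed in a shared text-embedding space; the sketch below is our own illustration and may differ from the paper's actual formulation.

```python
# A hedged sketch (not the paper's exact formulation) of uni-modal
# narrative-query matching: score the query against narrative tokens in a
# shared text space so the model can draw complementary context from the
# structured narrative when localizing the queried moment.
import torch
import torch.nn.functional as F


def narrative_query_score(query_emb: torch.Tensor,
                          narrative_token_embs: torch.Tensor) -> torch.Tensor:
    """Max-pooled cosine similarity between a query and narrative tokens.

    query_emb:            (B, D)    sentence-level query embedding
    narrative_token_embs: (B, L, D) token-level narrative embeddings
    Returns a (B,) matching score in [-1, 1].
    """
    q = F.normalize(query_emb, dim=-1).unsqueeze(1)   # (B, 1, D)
    n = F.normalize(narrative_token_embs, dim=-1)     # (B, L, D)
    sim = (q * n).sum(dim=-1)                         # (B, L) token-wise cosine similarity
    return sim.max(dim=-1).values                     # keep the best-matching token per query


if __name__ == "__main__":
    score = narrative_query_score(torch.randn(4, 512), torch.randn(4, 64, 512))
    print(score.shape)  # torch.Size([4])
```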