
GroundVTS: Visual Token Sampling in Multimodal Large Language Models for Video Temporal Grounding

Rong Fan, Kaiyan Xiao, Minghao Zhu, Liuyi Wang, Kai Dai, Zhao Yang

Abstract

Video temporal grounding (VTG) is a critical task in video understanding and a key capability for extending video large language models (Vid-LLMs) to broader applications. However, existing Vid-LLMs rely on uniform frame sampling to extract video information, resulting in a sparse distribution of key frames and the loss of crucial temporal cues. To address this limitation, we propose Grounded Visual Token Sampling (GroundVTS), a Vid-LLM architecture that focuses on the most informative temporal segments. GroundVTS employs a fine-grained, query-guided mechanism to filter visual tokens before feeding them into the LLM, thereby preserving essential spatio-temporal information and maintaining temporal coherence. Furthermore, we introduce a progressive optimization strategy that enables the LLM to effectively adapt to the non-uniform distribution of visual features, enhancing its ability to model temporal dependencies and achieve precise video localization. We comprehensively evaluate GroundVTS on three standard VTG benchmarks, where it outperforms existing methods, achieving a 7.7-point improvement in mIoU for moment retrieval and a 12.0-point improvement in mAP for highlight detection. Code is available at https://github.com/Florence365/GroundVTS.
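For intuition, the sketch below illustrates the kind of query-guided visual token filtering the abstract describes: score each visual token against the text query and keep the most relevant tokens in temporal order before they reach the LLM. This is a minimal, hypothetical example, not the paper's implementation; the function name, tensor shapes, the use of cosine similarity, and the hard top-K selection (in place of the paper's weighted differentiable top-$K$ sampling) are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def query_guided_token_sampling(visual_tokens, query_embedding, k):
    """Keep the k visual tokens most similar to the text query.

    visual_tokens:   (T, D) tensor of visual token embeddings
    query_embedding: (D,)   pooled text-query embedding
    k:               number of tokens to retain

    Returns the retained tokens in their original temporal order,
    so temporal coherence is preserved for the downstream LLM.
    """
    # Cosine similarity between every visual token and the query.
    scores = F.cosine_similarity(
        visual_tokens, query_embedding.unsqueeze(0), dim=-1
    )  # shape: (T,)

    # Hard top-k selection (illustrative stand-in for the paper's
    # weighted differentiable top-K sampling).
    topk_idx = torch.topk(scores, k=min(k, scores.numel())).indices
    keep_idx = torch.sort(topk_idx).values  # restore temporal order

    return visual_tokens[keep_idx], keep_idx

# Example usage with random embeddings (hypothetical shapes):
tokens = torch.randn(1024, 768)           # 1024 visual tokens of dim 768
query = torch.randn(768)                  # pooled query embedding
kept, idx = query_guided_token_sampling(tokens, query, k=256)
```

Sorting the selected indices before gathering is what keeps the retained tokens in chronological order, which is the property the abstract refers to as maintaining temporal coherence.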

Paper Structure

This paper contains 28 sections, 7 equations, 8 figures, 16 tables.

Figures (8)

  • Figure 1: Comparison of sampling strategies for Vid-LLMs. (a) Uniform sampling distributes attention evenly across frames, often missing query-relevant moments; (b) existing query-guided frame sampling relies on external encoders for coarse frame selection, limiting temporal understanding; and (c) our query-guided VTS adaptively selects query-relevant visual tokens within Vid-LLMs while maintaining temporal coherence, enabling efficient and precise video grounding.
  • Figure 2: Frame rate sensitivity of Qwen2.5VL-7B. Similar trends hold for InternVL3.5 (illustrated in the supplementary material).
  • Figure 3: Overview of the proposed GroundVTS framework. (a) GroundVTS integrates a query-guided VTS module into the Vid-LLM pipeline, enabling adaptive selection of query-relevant tokens; (b) The VTS module computes token-query similarity scores and performs weighted differentiable top-$K$ sampling to retain the most informative tokens, supporting efficient and precise video temporal grounding.
  • Figure 4: Comparison between GroundVTS-Q and Qwen2.5VL-7B-G (denoted as QwenVL-G) under varying token densities.
  • Figure 5: Qualitative comparison of temporal grounding predictions among GroundVTS-Q, Qwen2.5VL-7B-G, and Qwen2.5VL-7B.
  • ...and 3 more figures