EagleNet: Energy-Aware Fine-Grained Relationship Learning Network for Text-Video Retrieval

Yuhan Chen, Pengwen Dai, Chuan Wang, Dayan Wu, Xiaochun Cao

Abstract

Text-video retrieval has seen significant improvements thanks to the recent development of large-scale vision-language pre-trained models. Traditional methods focus primarily on video representations or cross-modal alignment, while recent works shift toward enriching text expressiveness to better match the rich semantics in videos. However, these methods exploit only interactions between the text and individual frames or the whole video, ignoring the rich interactions among frames within a video; as a result, the final expanded text cannot capture frame contextual information, leading to disparities between text and video. In response, we introduce the Energy-Aware Fine-Grained Relationship Learning Network (EagleNet) to generate accurate, context-aware enriched text embeddings. Specifically, the proposed Fine-Grained Relationship Learning mechanism (FRL) first constructs a text-frame graph from generated text candidates and video frames, then learns the relationships among texts and frames, which are finally used to aggregate the text candidates into an enriched text embedding that incorporates frame contextual information. To further improve the fine-grained relationship learning in FRL, we design Energy-Aware Matching (EAM) to model the energy of text-frame interactions and thus accurately capture the distribution of real text-video pairs. Moreover, for more effective cross-modal alignment and stable training, we replace the conventional softmax-based contrastive loss with the sigmoid loss. Extensive experiments demonstrate the superiority of EagleNet on MSRVTT, DiDeMo, MSVD, and VATEX. Code is available at https://github.com/draym28/EagleNet.
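As a concrete reference for the last design choice in the abstract, below is a minimal PyTorch sketch of the pairwise sigmoid loss (in the style popularized by SigLIP) that replaces the softmax-based contrastive objective. This is an illustrative sketch, assuming L2-normalized batch embeddings and learnable `logit_scale` / `logit_bias` parameters; the names are hypothetical and not taken from the EagleNet repository.

```python
import torch
import torch.nn.functional as F

def sigmoid_loss(text_emb, video_emb, logit_scale, logit_bias):
    """Pairwise sigmoid loss over a batch of text-video pairs.

    text_emb, video_emb: (B, D) L2-normalized embeddings, where the
    i-th text matches the i-th video. logit_scale and logit_bias are
    learnable scalars, following the SigLIP formulation.
    """
    logits = logit_scale * text_emb @ video_emb.t() + logit_bias  # (B, B)
    # +1 on the diagonal (matched pairs), -1 elsewhere (mismatched pairs).
    labels = 2 * torch.eye(logits.size(0), device=logits.device) - 1
    # -log sigmoid(labels * logits), averaged over the batch.
    return -F.logsigmoid(labels * logits).sum() / logits.size(0)
```

Unlike the softmax-based InfoNCE loss, each text-video pair contributes an independent binary term, which removes the batch-wide normalization and tends to stabilize training.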

Paper Structure

This paper contains 45 sections, 42 equations, 5 figures, 9 tables, and 1 algorithm.

Figures (5)

  • Figure 1: Performance-efficiency comparison among SOTA models (ViT-B/16) on the MSRVTT dataset.
  • Figure 2: Diagram of EagleNet. (a) Overview of the training process. (b) Fine-Grained Relationship Learning (FRL) first samples multiple text candidates, then constructs a text-frame graph to learn both text-frame and frame-frame relationships, which are used to aggregate the text candidates into a final enriched text embedding that is aware of video context (see the sketch after this list). (c) Energy-Aware Matching (EAM) improves the relationship learning in FRL by capturing detailed text-frame interactions and accurately modeling the distribution of true text-video pairs, thereby also enhancing the final matching performance from a fine-grained perspective.
  • Figure 3: Visualization of EagleNet, TMASS, and TV-ProxyNet.
  • Figure 4: Visualization of text-to-video retrieval results by our proposed EagleNet, TMASS [wang2024text], and TV-ProxyNet [xiao2025text]. Green denotes the correct retrieval result, and red the wrong one.
  • Figure 5: Visualization of text-to-video retrieval results by different components of EagleNet. Green denotes the correct retrieval result, and red the wrong one.
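The FRL mechanism described in the Figure 2 caption can be made concrete with a short, self-contained PyTorch sketch. It is one plausible realization only, assuming self-attention over the joint set of text-candidate and frame nodes; all class and variable names (FRLSketch, graph_attn, pool) are hypothetical and not taken from the EagleNet codebase.

```python
import torch
import torch.nn as nn

class FRLSketch(nn.Module):
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        # Self-attention over the joint node set captures text-frame
        # and frame-frame relationships in a single pass.
        self.graph_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pool = nn.Linear(dim, 1)  # scores for aggregating candidates

    def forward(self, text_cands, frames):
        # text_cands: (B, K, D) sampled text candidates
        # frames:     (B, T, D) per-frame video features
        nodes = torch.cat([text_cands, frames], dim=1)   # (B, K+T, D) graph nodes
        nodes, _ = self.graph_attn(nodes, nodes, nodes)  # relationship learning
        cands = nodes[:, : text_cands.size(1)]           # context-aware candidates
        w = self.pool(cands).softmax(dim=1)              # (B, K, 1) mixing weights
        return (w * cands).sum(dim=1)                    # (B, D) enriched text
```

A single attention pass over the concatenated node set is the simplest way to couple text-frame and frame-frame relationships before pooling; the paper's actual graph construction, relationship learning, and aggregation may differ in detail.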