
Graph-to-Frame RAG: Visual-Space Knowledge Fusion for Training-Free and Auditable Video Reasoning

Songyuan Yang, Weijiang Yu, Ziyu Liu, Guijian Tang, Wenjing Yang, Huibin Tan, Nong Xiao

Abstract

When video reasoning requires external knowledge, many systems built on large multimodal models (LMMs) adopt retrieval augmentation to supply the missing context. Appending textual or multi-clip evidence, however, forces heterogeneous signals into a single attention space; we observe diluted attention and higher cognitive load even on videos of moderate length. The bottleneck is not only what to retrieve but how to represent and fuse external knowledge with the video backbone. We present Graph-to-Frame RAG (G2F-RAG), a training-free and auditable paradigm that delivers knowledge in the visual space. In the offline stage, an agent builds a problem-agnostic video knowledge graph that integrates entities, events, spatial relations, and linked world knowledge. In the online stage, a hierarchical multi-agent controller decides whether external knowledge is needed, retrieves a minimal sufficient subgraph, and renders it as a single reasoning frame appended to the video. LMMs then perform joint reasoning in a unified visual domain. This design reduces cognitive load and leaves an explicit, inspectable evidence trail. G2F-RAG is plug-and-play across backbones and model scales. It yields consistent gains on diverse public benchmarks, with larger improvements in knowledge-intensive settings. Ablations further confirm that knowledge representation and delivery both matter. G2F-RAG reframes retrieval as visual-space knowledge fusion for robust and interpretable video reasoning.
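
To make the online routing concrete, here is a minimal runnable sketch of the decide–retrieve–render–append loop described above. Every name in it (the toy graph, retrieve_subgraph, render_frame, the placeholder frames) is a hypothetical stand-in; the paper's agents are LMM-driven, and this sketch only mirrors their control flow under that assumption, not their implementation.

```python
# Hypothetical sketch of the G2F-RAG online stage: route the query, retrieve
# a minimal sufficient subgraph S*, render it as one reasoning frame I_RF,
# and append it to the video. All helpers below are illustrative stand-ins.

def retrieve_subgraph(graph: dict, query: str) -> dict:
    """Toy retrieval: keep edges touching entities mentioned in the query."""
    keep = {n for n in graph["nodes"] if n.lower() in query.lower()}
    edges = [e for e in graph["edges"] if e[0] in keep or e[2] in keep]
    return {"nodes": sorted(keep), "edges": edges}

def render_frame(subgraph: dict) -> str:
    """Stand-in for the rendering agent: a textual placeholder for I_RF."""
    return "I_RF[" + "; ".join(f"{s} -{r}-> {o}" for s, r, o in subgraph["edges"]) + "]"

def build_input(video_frames: list, query: str, needs_knowledge: bool, graph: dict) -> list:
    """Easy case: the LMM sees V. Hard case: it sees V~ = [V; I_RF]."""
    if not needs_knowledge:                       # orchestration agent's decision
        return video_frames
    s_star = retrieve_subgraph(graph, query)      # minimal sufficient subgraph S*
    return video_frames + [render_frame(s_star)]  # single appended reasoning frame

graph = {"nodes": ["chef", "knife", "onion"],
         "edges": [("chef", "holds", "knife"), ("knife", "cuts", "onion")]}
print(build_input(["frame_0", "frame_1"], "Why does the chef cry?", True, graph))
```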

Paper Structure

This paper contains 14 sections, 3 equations, 5 figures, and 3 tables.

Figures (5)

  • Figure 1: Traditional Video RAG appends auxiliary text, mixing heterogeneous tokens in one attention space, while G2F-RAG renders retrieved knowledge as a single reasoning frame appended to the video, keeping evidence in the visual space and producing more grounded causal answers.
  • Figure 2: (a) Attention analysis. Text-append Video-RAG diverts attention from key frames toward contextual tokens, while G2F-RAG keeps focus on key frames and the appended graph frame. (b) Performance comparison. Video-RAG causes consistent drops, whereas G2F-RAG yields clear gains over the baseline on all three benchmarks.
  • Figure 3: Method overview. Offline: a graph-construction agent builds a problem-agnostic video knowledge graph with optional external knowledge. Online: an orchestration agent routes the query; a retrieval agent extracts a minimal subgraph $S^*$; a rendering agent converts it into one reasoning frame and appends it to the video to form $\tilde{V}=[V; I_{\mathrm{RF}}]$. Finally, the LMM answers from $V$ for easy cases, or from $\tilde{V}$ for hard cases. The pipeline is training-free, auditable, and fuses knowledge in the visual space.
  • Figure 4: G2F-RAG pipeline. Build a full video graph, retrieve a minimal subgraph for the query, render it as a single frame, and append it to the video for visual-space reasoning with the LMM (a minimal rendering sketch follows this list).
  • Figure 5: Case study. The system builds a full graph offline, retrieves a compact subgraph online, renders one reasoning frame, and appends it to the video for reasoning. Further analysis shows that attention concentrates on key video frames and the graph frame.
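
As referenced in the Figure 4 caption, the sketch below shows one plausible way a rendering agent could turn a retrieved subgraph into an actual image frame and append it to the video, i.e., form $\tilde{V}=[V; I_{\mathrm{RF}}]$. The choice of networkx and matplotlib is an assumption; the paper does not prescribe a rendering toolkit, so treat this as an illustration rather than the authors' implementation.

```python
# Assumed graph-to-frame rendering step: draw (subject, relation, object)
# triples with networkx/matplotlib and return an RGB array that can be
# appended to the video's frame list. Not the authors' implementation.

import matplotlib
matplotlib.use("Agg")                  # headless rendering, no display needed
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np

def subgraph_to_frame(triples, side_px=448, dpi=100):
    """Render labeled triples into a side_px x side_px x 3 uint8 image."""
    g = nx.DiGraph()
    for s, r, o in triples:
        g.add_edge(s, o, label=r)
    fig, ax = plt.subplots(figsize=(side_px / dpi, side_px / dpi), dpi=dpi)
    pos = nx.spring_layout(g, seed=0)  # deterministic layout aids auditability
    nx.draw_networkx(g, pos, ax=ax, node_color="lightblue", node_size=1500)
    nx.draw_networkx_edge_labels(g, pos, nx.get_edge_attributes(g, "label"), ax=ax)
    ax.axis("off")
    fig.canvas.draw()
    frame = np.asarray(fig.canvas.buffer_rgba())[:, :, :3].copy()  # drop alpha
    plt.close(fig)
    return frame

# Append the rendered reasoning frame: V~ = [V; I_RF].
video = [np.zeros((448, 448, 3), dtype=np.uint8)]   # placeholder video frames
i_rf = subgraph_to_frame([("chef", "holds", "knife"), ("knife", "cuts", "onion")])
video_tilde = video + [i_rf]
```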