SGTA: Scene-Graph Based Multi-Modal Traffic Agent for Video Understanding

Xingcheng Zhou, Mingyu Liu, Walter Zimmer, Jiajie Zhang, Alois Knoll

Abstract

We present the Scene-Graph Based Multi-Modal Traffic Agent (SGTA), a modular framework for traffic video understanding that combines structured scene graphs with multi-modal reasoning. It constructs a traffic scene graph from roadside videos using detection, tracking, and lane extraction, then performs tool-based reasoning over both symbolic graph queries and visual inputs. SGTA adopts the ReAct paradigm to interleave reasoning traces from large language models with tool invocations, enabling interpretable decision-making for complex video questions. Experiments on selected samples from the TUMTraffic-VideoQA dataset demonstrate that SGTA achieves competitive accuracy across multiple question types while providing transparent reasoning steps. These results highlight the potential of integrating structured scene representations with multi-modal agents for traffic video understanding.
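The abstract describes two coupled components: a symbolic traffic scene graph and a ReAct-style loop that interleaves model reasoning with tool calls. The sketch below illustrates that interaction on a toy graph; all names here (the graph schema, `count_agents`, `agents_in_lane`, `react_step`) are hypothetical illustrations and not the paper's actual tool interface.

```python
# Minimal sketch of a ReAct-style tool loop over a toy traffic scene graph.
# The schema and tool names are assumptions for illustration only.

# Toy scene graph: nodes are tracked agents; attributes encode type,
# lane membership (from lane extraction), and the frames they appear in.
scene_graph = {
    "agents": {
        "car_1": {"type": "car", "lane": "lane_2", "frames": [0, 1, 2]},
        "car_2": {"type": "car", "lane": "lane_1", "frames": [1, 2, 3]},
        "truck_1": {"type": "truck", "lane": "lane_2", "frames": [0, 1]},
    }
}

def count_agents(agent_type: str) -> int:
    """Symbolic graph query: count tracked agents of a given type."""
    return sum(1 for a in scene_graph["agents"].values()
               if a["type"] == agent_type)

def agents_in_lane(lane: str) -> list[str]:
    """Symbolic graph query: list agent IDs observed in a lane."""
    return [aid for aid, a in scene_graph["agents"].items()
            if a["lane"] == lane]

TOOLS = {"count_agents": count_agents, "agents_in_lane": agents_in_lane}

def react_step(thought: str, action: str, **kwargs) -> dict:
    """One Thought -> Action -> Observation cycle.
    In the real agent, the thought and chosen action would come from
    an LLM; here they are hard-coded to show the control flow."""
    observation = TOOLS[action](**kwargs)
    return {"thought": thought, "action": action, "observation": observation}

# Two cycles answering a question like "How many cars use lane_2?"
trace = [
    react_step("Count the cars first.", "count_agents", agent_type="car"),
    react_step("Now check lane_2 occupancy.", "agents_in_lane", lane="lane_2"),
]
```

The trace of thought/action/observation dictionaries is what makes the reasoning inspectable: each answer can be traced back to a concrete graph query rather than an opaque model output.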

Paper Structure

This paper contains 12 sections, 2 figures, 2 tables, and 1 algorithm.

Figures (2)

  • Figure 1: Comparison of paradigms for traffic scene understanding.
  • Figure 2: Overview of the Scene-Graph Based Multi-Modal Traffic Agent (ours).