VideoQA-SC: Adaptive Semantic Communication for Video Question Answering

Jiangyuan Guo, Wei Chen, Yuxuan Sun, Jialong Xu, Bo Ai

TL;DR

This work presents VideoQA-SC, an end-to-end semantic communication system for video question answering that transmits video semantics rather than pixel data over noisy wireless channels. It combines a spatiotemporal semantic encoder, a dual-branch cross-attention based DJSCC transformer with a shared rate embedding, and learning-based adaptive bandwidth allocation (both content- and SNR-aware), plus a multimodal fuser to answer questions directly from transmitted semantics. The approach is trained in four stages to balance task accuracy and bandwidth usage, and is evaluated on TGIF-QA, TGIF-QA-R, and NExT-QA, showing robustness to AWGN and fading channels while achieving substantial bandwidth savings. The results indicate that VideoQA-SC can outperform traditional SSCC and pixel-reconstruction-based DJSCC schemes, with notable accuracy gains at low SNR and dramatic reductions in required bandwidth, highlighting the potential of task-oriented semantic transmission for video applications.
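The dual-branch cross-attention exchange mentioned above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, dimensions, and the residual wiring are illustrative assumptions; the idea shown is simply that each branch (e.g. semantic tokens and rate-embedding tokens) queries the other via scaled dot-product attention.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_src, kv_src):
    """Scaled dot-product attention: queries from one branch,
    keys/values from the other branch."""
    d_k = kv_src.shape[-1]
    scores = q_src @ kv_src.T / np.sqrt(d_k)
    return softmax(scores, axis=-1) @ kv_src

def dual_branch_block(x_a, x_b):
    """One dual-branch cross-attention exchange: each branch
    attends to the other and adds the result residually."""
    y_a = x_a + cross_attention(x_a, x_b)
    y_b = x_b + cross_attention(x_b, x_a)
    return y_a, y_b

rng = np.random.default_rng(0)
sem = rng.standard_normal((8, 16))   # hypothetical semantic tokens
rate = rng.standard_normal((8, 16))  # hypothetical rate-embedding tokens
y_sem, y_rate = dual_branch_block(sem, rate)
print(y_sem.shape, y_rate.shape)  # shapes are preserved: (8, 16) (8, 16)
```

A real Transformer block would add learned projections, multiple heads, layer norm, and a feed-forward sublayer; the sketch keeps only the cross-branch attention pattern.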

Abstract

Although semantic communication (SC) has shown its potential in efficiently transmitting multimodal data such as text, speech, and images, SC for videos has focused primarily on pixel-level reconstruction. However, these SC systems may be suboptimal for downstream intelligent tasks. Moreover, SC systems that forgo pixel-level video reconstruction offer advantages in bandwidth efficiency and real-time performance for various intelligent tasks. The difficulty in designing such systems lies in extracting task-related compact semantic representations and delivering them accurately over noisy channels. In this paper, we propose an end-to-end SC system, named VideoQA-SC, for video question answering (VideoQA) tasks. Our goal is to accomplish VideoQA tasks directly from video semantics over noisy or fading wireless channels, bypassing the need for video reconstruction at the receiver. To this end, we develop a spatiotemporal semantic encoder for effective video semantic extraction, and a learning-based bandwidth-adaptive deep joint source-channel coding (DJSCC) scheme for efficient and robust video semantic transmission. Experiments demonstrate that VideoQA-SC outperforms traditional and advanced DJSCC-based SC systems that rely on video reconstruction at the receiver under a wide range of channel conditions and bandwidth constraints. In particular, when the signal-to-noise ratio is low, VideoQA-SC improves answer accuracy by 5.17% while saving almost 99.5% of the bandwidth, compared with the advanced DJSCC-based SC system. Our results show the great potential of SC system design for video applications.
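The bandwidth-adaptive DJSCC idea, allocating more channel symbols when the channel is poor or the content is important, can be sketched as a toy rate-selection rule. Everything here is an illustrative assumption (the `allocate_bandwidth` name, the sigmoid channel-quality model, the rate codebook); the paper's allocation is learned end-to-end, not a fixed formula like this.

```python
import numpy as np

def allocate_bandwidth(importance, snr_db, rates=(4, 8, 16, 32)):
    """Toy content- and SNR-aware rate selection: scale per-token
    importance by a redundancy factor that grows as SNR drops, then
    quantize each score to the nearest available rate (symbols/token)."""
    # Squash SNR to (0, 1); worse channels demand more symbols per token.
    redundancy = 1.0 - 1.0 / (1.0 + np.exp(-(snr_db / 10.0)))
    target = np.asarray(importance) * redundancy * max(rates)
    idx = np.abs(np.asarray(rates)[None, :] - target[:, None]).argmin(axis=1)
    return np.asarray(rates)[idx]

imp = np.array([0.1, 0.5, 0.9])  # hypothetical per-token importance
print(allocate_bandwidth(imp, snr_db=0))   # noisy channel -> [ 4  8 16]
print(allocate_bandwidth(imp, snr_db=20))  # clean channel -> [4 4 4]
```

The qualitative behavior matches the abstract's claim: under good channel conditions far fewer symbols suffice, which is where the large bandwidth savings come from.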

Paper Structure

This paper contains 25 sections, 31 equations, 13 figures, 1 table, 2 algorithms.

Figures (13)

  • Figure 1: An application scenario for VideoQA-SC.
  • Figure 2: Overview of the proposed VideoQA-SC.
  • Figure 3: Temporal and spatial modeling of the $i$-th video clip. Shapes with the same color represent the same objects in different frames. Striped shapes indicate fused features after corresponding modeling. An object-level feature extractor and a frame-level feature extractor are used to preprocess all frames in the $i$-th clip for further processing.
  • Figure 4: The structure of the dual-branch cross-attention Transformer block in the JSC encoder/decoder.
  • Figure 5: The JSC encoder with adaptive bandwidth allocation. CA Transformer block denotes the proposed dual-branch cross-attention Transformer block.
  • ...and 8 more figures