Knowledge is Power: Advancing Few-shot Action Recognition with Multimodal Semantics from MLLMs

Jiazheng Xing, Chao Xu, Hangjie Yuan, Mengmeng Wang, Jun Dan, Hangwei Qian, Yong Liu

Abstract

Multimodal Large Language Models (MLLMs) have propelled the field of few-shot action recognition (FSAR). However, preliminary explorations in this area primarily focus on generating captions to form a suboptimal feature->caption->feature pipeline and adopt metric learning solely within the visual space. In this paper, we propose FSAR-LLaVA, the first end-to-end method to leverage MLLMs (such as Video-LLaVA) as a multimodal knowledge base for directly enhancing FSAR. First, at the feature level, we leverage the MLLM's multimodal decoder to extract spatiotemporally and semantically enriched representations, which are then decoupled and enhanced by our Multimodal Feature-Enhanced Module into distinct visual and textual features that fully exploit the MLLM's semantic knowledge for FSAR. Next, we leverage the versatility of MLLMs to craft input prompts that flexibly adapt to diverse scenarios, and use their aligned outputs to drive our designed Composite Task-Oriented Prototype Construction, effectively bridging the distribution gap between meta-train and meta-test sets. Finally, to enable multimodal features to guide metric learning jointly, we introduce a training-free Multimodal Prototype Matching Metric that adaptively selects the most decisive cues and efficiently leverages the decoupled feature representations produced by MLLMs. Extensive experiments demonstrate superior performance across various tasks with minimal trainable parameters.
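
To make this flow concrete, the minimal sketch below (an illustration, not the authors' released code) assembles one few-shot episode from MLLM hidden-state tokens: the multimodal tokens are split into a visual and a textual stream and pooled into per-class support prototypes. The split index, the mean pooling, and all tensor shapes are assumptions standing in for the learned Multimodal Feature-Enhanced Module and Composite Task-Oriented Prototype Construction.

```python
# Illustrative sketch (not the released implementation): one few-shot episode
# built from MLLM hidden-state tokens. The token split, mean pooling, and
# shapes are assumptions standing in for the paper's learned modules.
import torch

def decouple_tokens(tokens, n_visual):
    """Split multimodal tokens T_m into visual tokens T_v and textual tokens T_t.
    Here the first n_visual positions are assumed to be the video-patch tokens."""
    return tokens[:, :n_visual], tokens[:, n_visual:]

def pool_and_prototype(tokens, n_way, k_shot):
    """Pool each sample's tokens into one vector, then average the K support
    samples of every class into a class prototype (a ProtoNet-style stand-in
    for the Composite Task-Oriented Prototype Construction)."""
    feats = tokens.mean(dim=1)                        # [n_way * k_shot, d]
    return feats.view(n_way, k_shot, -1).mean(dim=1)  # [n_way, d]

# Toy 5-way 1-shot episode with random stand-ins for MLLM decoder tokens.
n_way, k_shot, n_query, n_tokens, n_visual, d = 5, 1, 3, 32, 24, 64
support_tokens = torch.randn(n_way * k_shot, n_tokens, d)
query_tokens   = torch.randn(n_way * n_query, n_tokens, d)

s_v, s_t = decouple_tokens(support_tokens, n_visual)
q_v, q_t = decouple_tokens(query_tokens, n_visual)

proto_v = pool_and_prototype(s_v, n_way, k_shot)      # visual prototypes  P_v^S
proto_t = pool_and_prototype(s_t, n_way, k_shot)      # textual prototypes P_t^S
print(proto_v.shape, proto_t.shape, q_v.mean(dim=1).shape)
```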

Paper Structure

This paper contains 34 sections, 12 equations, 6 figures, and 13 tables.

Figures (6)

  • Figure 1: (a), (b), and (c) show the pipelines of data-driven unimodal/multimodal methods, the knowledge-driven CapFSAR, and our FSAR-LLaVA, respectively. Our FSAR-LLaVA$_\mathrm{Unknown}$, which uses the fixed input prompt "What's the action of the video?" without introducing additional textual label information, fully leverages the multimodal features of the MLLM and achieves state-of-the-art performance with minimal trainable parameters, as depicted in part (d), which compares performance on the HMDB51 5-way 1-shot task.
  • Figure 2: Overview of our FSAR-LLaVA: Visual inputs and text prompts are processed through our knowledge base to extract multimodal tokens $\textbf{T}_m$ from the multimodal decoder's hidden layer. These tokens are downsampled and decoupled into visual tokens $\textbf{T}_v$ and textual tokens $\textbf{T}_t$. The tokens are then passed through the Multimodal Feature-Enhanced Module, and features from both branches are processed in the Composite Task-Oriented Prototype Construction Module to obtain the enhanced support-set class prototypes $\widetilde{\textbf{P}^\mathcal{S}_v}$ and $\widetilde{\textbf{P}^\mathcal{S}_t}$. Finally, the class prototypes and query features are fed into the Multimodal Prototype Matching Metric to yield the probability distribution $\textbf{p}_{\mathcal{Q}2\mathcal{S}}$ and loss $\mathcal{L}_{\mathcal{Q}2\mathcal{S}}$ (an illustrative code sketch of this matching step follows the figure list).
  • Figure 3: Overview of our Composite Task-Oriented Prototype Construction Module (CTPCM), which is divided into local prototype construction and global prototype construction.
  • Figure 4: Visualization of the attention maps for the data-driven method HyRSM [wang2022hybrid] and our FSAR-LLaVA on HMDB51. We also provide Video-LLaVA's QA results for these videos using the "Unknown" prompt.
  • Figure 5: Direct qualitative analysis of Video-LLaVA's output with different types of prompts. For clarity, we highlight words in blue to indicate accurate action labels, in red to mark the question posed to Video-LLaVA, and in green to indicate inaccurate action descriptions.
  • ...and 1 more figure
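
Following the quantities named in the Figure 2 caption, the self-contained sketch below shows one plausible, training-free way to turn enhanced visual and textual prototypes into the query-to-support distribution $\textbf{p}_{\mathcal{Q}2\mathcal{S}}$ and loss $\mathcal{L}_{\mathcal{Q}2\mathcal{S}}$. The cosine similarity, the per-query margin rule for picking the "most decisive" modality, and the temperature are assumptions rather than the paper's exact Multimodal Prototype Matching Metric.

```python
# Hypothetical sketch of a training-free multimodal matching step producing the
# query-to-support distribution p_{Q2S} and loss L_{Q2S} named in Figure 2.
# Cosine similarity, the per-query "most decisive cue" rule, and the temperature
# are assumptions; the paper describes the metric only at a high level here.
import torch
import torch.nn.functional as F

def cosine_sim(q, p):
    """Row-normalized dot product: one similarity per (query, class prototype) pair."""
    return F.normalize(q, dim=-1) @ F.normalize(p, dim=-1).T         # [Q, n_way]

def multimodal_match(q_v, q_t, proto_v, proto_t, temp=0.1):
    sim_v, sim_t = cosine_sim(q_v, proto_v), cosine_sim(q_t, proto_t)
    # Per query, keep the modality whose top-1 vs. top-2 margin is larger,
    # i.e. the more "decisive" cue (an assumed reading of that phrase).
    def margin(s):
        top2 = s.topk(2, dim=-1).values
        return top2[:, 0] - top2[:, 1]
    use_visual = (margin(sim_v) >= margin(sim_t)).unsqueeze(-1)
    logits = torch.where(use_visual, sim_v, sim_t) / temp
    return logits.softmax(dim=-1), logits                            # p_{Q2S}, raw logits

# Toy 5-way episode with 3 queries per class and random stand-in features.
n_way, n_query, d = 5, 3, 64
proto_v, proto_t = torch.randn(n_way, d), torch.randn(n_way, d)      # enhanced prototypes
q_v, q_t = torch.randn(n_way * n_query, d), torch.randn(n_way * n_query, d)

p_q2s, logits = multimodal_match(q_v, q_t, proto_v, proto_t)
labels = torch.arange(n_way).repeat_interleave(n_query)              # episode ground truth
loss_q2s = F.cross_entropy(logits, labels)                           # L_{Q2S}
print(p_q2s.shape, loss_q2s.item())
```

Replacing the assumed margin rule with a learned gate or a fixed weighted sum would be an equally valid reading; the point of the sketch is only that both modalities can drive the metric without introducing extra trainable parameters.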