SkeletonContext: Skeleton-side Context Prompt Learning for Zero-Shot Skeleton-based Action Recognition

Ning Wang, Tieyue Wu, Naeha Sharif, Farid Boussaid, Guangming Zhu, Lin Mei, Mohammed Bennamoun, Liang Zhang

Abstract

Zero-shot skeleton-based action recognition aims to recognize unseen actions by transferring knowledge from seen categories through semantic descriptions. Existing methods typically align skeleton features with textual embeddings in a shared latent space. However, the absence of contextual cues, such as the objects involved in an action, introduces an inherent gap between skeleton and semantic representations, making it difficult to distinguish visually similar actions. To address this, we propose SkeletonContext, a prompt-based framework that enriches skeletal motion representations with language-driven contextual semantics. Specifically, we introduce a Cross-Modal Context Prompt Module, which leverages a pretrained language model to reconstruct masked contextual prompts under guidance derived from LLMs. This design transfers linguistic context to the skeleton encoder for instance-level semantic grounding and improved cross-modal alignment. In addition, a Key-Part Decoupling Module decouples motion-relevant joint features, ensuring robust action understanding even in the absence of explicit object interactions. Extensive experiments on multiple benchmarks demonstrate that SkeletonContext achieves state-of-the-art performance under both conventional and generalized zero-shot settings, validating its effectiveness in reasoning about context and distinguishing fine-grained, visually similar actions.
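At inference time, the alignment recipe described above reduces to scoring a pooled skeleton embedding against text embeddings of candidate class descriptions in the shared space. The sketch below illustrates only that generic baseline step, not SkeletonContext itself; the encoder architecture, projection dimensions, and names such as `SkeletonEncoder` and `zero_shot_logits` are illustrative assumptions.

```python
# Minimal sketch of shared-latent-space alignment for zero-shot
# skeleton-based action recognition. All names (SkeletonEncoder,
# zero_shot_logits) are illustrative placeholders, not the paper's API.
import torch
import torch.nn.functional as F

class SkeletonEncoder(torch.nn.Module):
    """Stand-in skeleton encoder: (B, T, J, C) -> (B, D)."""
    def __init__(self, in_dim=3, joints=25, dim=256):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Flatten(start_dim=2),           # (B, T, J*C)
            torch.nn.Linear(joints * in_dim, dim),
            torch.nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x).mean(dim=1)               # temporal average pooling

def zero_shot_logits(skel, text_embs, encoder, temperature=0.07):
    """Score each skeleton against every class-description embedding."""
    z = F.normalize(encoder(skel), dim=-1)           # (B, D) skeleton embeddings
    t = F.normalize(text_embs, dim=-1)               # (K, D) from a text encoder
    return z @ t.T / temperature                     # (B, K) cosine logits

# Usage: unseen-class prediction is an argmax over unseen text embeddings.
encoder = SkeletonEncoder()
skel = torch.randn(4, 64, 25, 3)                     # 4 clips, 64 frames, 25 joints
unseen_text = torch.randn(10, 256)                   # 10 unseen class descriptions
pred = zero_shot_logits(skel, unseen_text, encoder).argmax(dim=-1)
```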

Paper Structure

This paper contains 23 sections, 11 equations, 4 figures, and 7 tables.

Figures (4)

  • Figure 1: Comparison between existing ZSSAR methods and our proposed SkeletonContext. Conventional ZSSAR methods directly align skeleton features with textual descriptions, but the absence of contextual cues creates a semantic gap that hinders discrimination between similar actions. In contrast, SkeletonContext reconstructs language-driven contextual semantics (e.g., objects) and injects them into skeleton representations, enabling fine-grained, context-aware zero-shot action recognition.
  • Figure 2: Overview of SkeletonContext. The model enriches skeleton features via the Cross-Modal Context Prompt Module (Sec. \ref{sec:context_reconstruction_module}) and the Key-Part Decoupling Module (Sec. \ref{sec:keypart}) for context-aware zero-shot action recognition. Both modules are guided by LLM-derived contextual knowledge (Sec. \ref{sec:context_descriptions}) during training, enabling semantic grounding and cross-modal alignment.
  • Figure 3: Qualitative results of context reconstruction. SkeletonContext can infer contextual semantics, enabling clear distinction between visually similar actions during inference.
  • Figure 4: Visualization of key-part decoupling. The KPD module highlights motion-critical joints guided by language priors, revealing semantically relevant body parts for each action.
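To make the Cross-Modal Context Prompt idea in Figures 2 and 3 concrete, the sketch below shows one plausible wiring of skeleton-conditioned masked context reconstruction: a pooled skeleton feature is projected into a soft prompt token, prepended to a masked text template, and a frozen masked language model is asked to fill the [MASK] slot with an LLM-provided context word. The template, the single soft token, and all names here are assumptions for illustration; the paper's actual module may differ.

```python
# Hedged sketch of skeleton-conditioned masked context reconstruction.
# This is not the paper's CMCP implementation, only one plausible wiring:
# a frozen masked LM fills [MASK] slots, and the loss targets an
# LLM-provided context word (e.g. "cup" for "drink water").
import torch
import torch.nn.functional as F
from transformers import BertTokenizer, BertForMaskedLM

tok = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()
for p in mlm.parameters():
    p.requires_grad_(False)                           # language model stays frozen

proj = torch.nn.Linear(256, mlm.config.hidden_size)   # skeleton feat -> soft token

def context_reconstruction_loss(skel_feat, target_word):
    """skel_feat: (B, 256) pooled skeleton feature; target_word: LLM-given context."""
    template = "the person interacts with a [MASK] ."  # illustrative template
    enc = tok(template, return_tensors="pt")
    emb = mlm.bert.embeddings.word_embeddings(enc.input_ids)   # (1, L, H)
    B = skel_feat.size(0)
    soft = proj(skel_feat).unsqueeze(1)                        # (B, 1, H) soft prompt
    inputs = torch.cat([soft, emb.expand(B, -1, -1)], dim=1)   # prepend soft token
    logits = mlm(inputs_embeds=inputs).logits                  # (B, 1+L, vocab)
    mask_pos = (enc.input_ids[0] == tok.mask_token_id).nonzero()[0, 0] + 1
    target_id = tok.convert_tokens_to_ids(target_word)
    return F.cross_entropy(logits[:, mask_pos], torch.full((B,), target_id))
```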
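The key-part decoupling visualized in Figure 4 can likewise be illustrated as language-guided attention over per-joint features, where joints similar to a textual body-part prior receive higher pooling weights. This is a sketch under assumed shapes and names (`keypart_pool`, `part_prior`), not the paper's formulation.

```python
# Illustrative key-part decoupling: weight per-joint features by their
# similarity to a language prior describing motion-relevant body parts.
# Names and shapes are hypothetical; the KPD module itself may differ.
import torch
import torch.nn.functional as F

def keypart_pool(joint_feats, part_prior, temperature=0.1):
    """
    joint_feats: (B, J, D) per-joint skeleton features
    part_prior:  (D,) text embedding of the action's key body parts
    Returns (B, D) feature pooled over motion-critical joints.
    """
    sim = F.normalize(joint_feats, dim=-1) @ F.normalize(part_prior, dim=-1)  # (B, J)
    attn = torch.softmax(sim / temperature, dim=-1)            # joint attention weights
    return (attn.unsqueeze(-1) * joint_feats).sum(dim=1)       # weighted joint pooling

# Usage: joints aligned with a "hand, arm" prior dominate for "drink water".
feats = torch.randn(2, 25, 256)
prior = torch.randn(256)
pooled = keypart_pool(feats, prior)                            # (2, 256)
```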