
RS-SSM: Refining Forgotten Specifics in State Space Model for Video Semantic Segmentation

Kai Zhu, Zhenyu Cui, Zehua Zang, Jiahuan Zhou

Abstract

Recently, state space models have demonstrated efficient video segmentation through linear-complexity state space compression. However, Video Semantic Segmentation (VSS) requires pixel-level spatiotemporal modeling capabilities to maintain temporal consistency in the segmentation of semantic objects. While state space models can preserve common semantic information during state space compression, the fixed-size state space inevitably forgets specific information, which limits the models' capability for pixel-level segmentation. To tackle the above issue, we propose a Refining Specifics State Space Model approach (RS-SSM) for video semantic segmentation, which performs complementary refining of forgotten spatiotemporal specifics. Specifically, a Channel-wise Amplitude Perceptron (CwAP) is designed to extract and align the distribution characteristics of specific information in the state space. Besides, a Forgetting Gate Information Refiner (FGIR) is proposed to adaptively invert and refine the forgetting gate matrix in the state space model based on the specific information distribution. Consequently, our RS-SSM leverages the inverted forgetting gate to complementarily refine the specific information forgotten during state space compression, thereby enhancing the model's capability for spatiotemporal pixel-level segmentation. Extensive experiments on four VSS benchmarks demonstrate that our RS-SSM achieves state-of-the-art performance while maintaining high computational efficiency. The code is available at https://github.com/zhoujiahuan1991/CVPR2026-RS-SSM.
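To make the forgetting-gate idea concrete, the sketch below implements a minimal diagonal SSM scan and an illustrative "inverted-gate" branch. This is not the paper's actual RS-SSM formulation (CwAP and FGIR are learned modules not specified here); the gate inversion `1 - A_bar`, the stable state matrix `A = -1`, and all shapes are assumptions chosen only to illustrate how a small forgetting gate leaves room for a complementary branch to retain what the main scan discards.

```python
import numpy as np

def ssm_scan(x, A_bar, B_bar):
    """Elementwise (diagonal) SSM recurrence: h_t = A_bar_t * h_{t-1} + B_bar_t * x_t.

    x, A_bar, B_bar: arrays of shape (T, D); returns the hidden states (T, D).
    """
    h = np.zeros(x.shape[1])
    hs = []
    for t in range(x.shape[0]):
        h = A_bar[t] * h + B_bar[t] * x[t]  # A_bar acts as the forgetting gate
        hs.append(h.copy())
    return np.stack(hs)

T, D = 8, 4
rng = np.random.default_rng(0)
x = rng.standard_normal((T, D))
delta = rng.uniform(0.1, 1.0, (T, D))   # per-step discretization step sizes
A = -np.ones(D)                         # stable diagonal continuous-time state matrix
A_bar = np.exp(delta * A)               # forgetting gate via zero-order-hold discretization
B_bar = (A_bar - 1.0) / A               # matching discretized input gate

y_common = ssm_scan(x, A_bar, B_bar)          # vanilla compression: keeps common semantics
y_specific = ssm_scan(x, 1.0 - A_bar, B_bar)  # hypothetical inverted-gate branch:
                                              # emphasizes states the vanilla scan forgets
```

Since `A_bar` lies in (0, 1), channels with strong retention in the vanilla scan get a weak gate in the inverted branch and vice versa, which is the intuition behind refining forgotten specifics with a complementary scan.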

Paper Structure

This paper contains 18 sections, 20 equations, 4 figures, 3 tables.

Figures (4)

  • Figure 1: Existing SSM-based VSS methods lose spatiotemporal specifics when performing state space compression, limiting the model's segmentation accuracy. In contrast, our proposed RS-SSM method guides the model to focus on forgotten spatiotemporal specifics, thereby improving pixel-level semantic segmentation performance.
  • Figure 2: The pipeline of our RS-SSM. Following previous work [hesham2025exploiting, xie2021segformer], we use an image encoder to extract feature maps for each frame. After linear projection, we extract spectrum features through the CwAP module to quantify the channel distribution of specific information. Then our FGIR module adaptively inverts and refines the forgetting gate from the SSM $\theta_2$, thereby encouraging SSM $\theta_1$ to perform complementary refining of forgotten spatiotemporal specifics.
  • Figure 3: Visualization of the updating gate $\mathbf{\overline{B}}_d$ as defined in Eq. \ref{eq:gate}. Influenced by the forgetting gate, the updating gate reflects the amount of new information introduced at each time step. The bottom row reveals that the vanilla SSM $\theta_2$ suffers from loss of specific information during state space compression. Conversely, as illustrated in the top row, the SSM $\theta_1$ effectively refines these forgotten specifics after inverting and refining the forgetting gate.
  • Figure 4: Visualization of segmentation results on the VSPW dataset [miao2021vspw]. Compared to the existing SSM-based method TV3S [hesham2025exploiting], our RS-SSM produces more accurate and detailed segmentation results by effectively refining specific information in videos.