PVUW 2024 Challenge on Complex Video Understanding: Methods and Results

Henghui Ding, Chang Liu, Yunchao Wei, Nikhila Ravi, Shuting He, Song Bai, Philip Torr, Deshui Miao, Xin Li, Zhenyu He, Yaowei Wang, Ming-Hsuan Yang, Zhensong Xu, Jiangtao Yao, Chengjing Wu, Ting Liu, Luoqi Liu, Xinyu Liu, Jing Zhang, Kexin Zhang, Yuting Yang, Licheng Jiao, Shuyuan Yang, Mingqi Gao, Jingnan Luo, Jinyu Yang, Jungong Han, Feng Zheng, Bin Cao, Yisi Zhang, Xuanxu Lin, Xingjian He, Bo Zhao, Jing Liu, Feiyu Pan, Hao Fang, Xiankai Lu

TL;DR

PVUW 2024 presents two challenging tracks, MOSE and MeViS, to advance pixel-level video understanding in realistic settings: MOSE focuses on complex VOS in crowded, occluded scenes, while MeViS investigates motion-expression-guided segmentation for language-driven video understanding. Across MOSE and MeViS, the paper showcases a range of methods, from memory-augmented VOS and semantic-aware fusion to cross-modal encoders and text-guided frame/video queries, demonstrating substantial progress and practical robustness. The contributions include architectural innovations such as the Fusion Block and Discriminative Query Generation, as well as cross-modal and memory-based strategies that improve segmentation under occlusion, disappearance/reappearance, and motion-driven language cues. Collectively, these results push toward robust pixel-level scene understanding in the wild, with future directions highlighting integration with the Segment Anything Model and large language models to further improve real-time, language-guided video segmentation in complex environments.
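
The methods summarized above largely build on a memory-based propagation backbone for the MOSE track: features and masks of past frames are stored in a memory, and each new frame is segmented by matching its pixels against that memory. The following sketch is a deliberately minimal illustration of this general paradigm using nearest-neighbor feature matching on synthetic data; it is not the implementation of any participating team, and the function name and toy inputs are placeholders chosen for the example.

    # Toy memory-based mask propagation: label each pixel of the current frame
    # with the mask value of its closest-matching memory pixel. Real systems use
    # learned encoders, attention-based matching, and multi-frame memories.
    import numpy as np

    def propagate_mask(mem_feats, mem_mask, cur_feats):
        """mem_feats: (HW, C), mem_mask: (HW,), cur_feats: (HW, C) -> (HW,) mask."""
        # Pairwise squared distances between current-frame and memory-frame pixels.
        d2 = ((cur_feats[:, None, :] - mem_feats[None, :, :]) ** 2).sum(-1)
        nearest = d2.argmin(axis=1)   # best-matching memory pixel for each pixel
        return mem_mask[nearest]      # copy its object/background label

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        hw, c = 64, 16
        mem_feats = rng.normal(size=(hw, c))
        mem_mask = (rng.random(hw) > 0.5).astype(np.int64)
        # Simulate the next frame as a lightly perturbed copy of the memory frame.
        cur_feats = mem_feats + 0.05 * rng.normal(size=(hw, c))
        pred = propagate_mask(mem_feats, mem_mask, cur_feats)
        print("agreement with memory labels:", (pred == mem_mask).mean())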

Abstract

The Pixel-level Video Understanding in the Wild Challenge (PVUW) focuses on complex video understanding. In the CVPR 2024 workshop, we add two new tracks: a Complex Video Object Segmentation track based on the MOSE dataset and a Motion Expression guided Video Segmentation track based on the MeViS dataset. In the two new tracks, we provide additional videos and annotations featuring challenging elements such as the disappearance and reappearance of objects, inconspicuous small objects, heavy occlusions, and crowded environments in MOSE. Moreover, we provide a new motion expression guided video segmentation dataset, MeViS, to study natural language-guided video understanding in complex environments. These new videos, sentences, and annotations enable us to foster the development of a more comprehensive and robust pixel-level understanding of video scenes in complex environments and realistic scenarios. The MOSE challenge had 140 registered teams in total; 65 teams participated in the validation phase, and 12 teams made valid submissions in the final challenge phase. The MeViS challenge had 225 registered teams in total; 50 teams participated in the validation phase, and 5 teams made valid submissions in the final challenge phase.
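
As a rough, single-frame illustration of the language-guided setting studied in the MeViS track, one can think of scoring each pixel's visual feature against a sentence embedding and thresholding the scores into a mask; actual MeViS methods additionally reason over motion across frames, since the expressions cannot be resolved from one frame alone. The snippet below is only a hedged sketch with random placeholder embeddings, not a method from the challenge.

    # Toy text-guided grounding: cosine similarity between per-pixel features and
    # an expression embedding, thresholded into a binary mask. Placeholder data;
    # real methods use learned visual/text encoders and cross-modal decoders.
    import numpy as np

    def text_guided_mask(pixel_feats, text_emb, thresh=0.0):
        """pixel_feats: (H, W, C), text_emb: (C,) -> boolean (H, W) mask."""
        p = pixel_feats / np.linalg.norm(pixel_feats, axis=-1, keepdims=True)
        t = text_emb / np.linalg.norm(text_emb)
        scores = p @ t                 # per-pixel relevance to the expression
        return scores > thresh

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        h, w, c = 6, 6, 32
        mask = text_guided_mask(rng.normal(size=(h, w, c)), rng.normal(size=(c,)))
        print("mask shape:", mask.shape, "foreground pixels:", int(mask.sum()))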

Paper Structure

This paper contains 25 sections, 9 figures, and 2 tables.

Figures (9)

  • Figure 1: Example Videos of the coMplex video Object SEgmentation (MOSE) dataset. The standout feature of the MOSE dataset is its complex scenes, which include the disappearance and reappearance of objects, small and inconspicuous objects, heavy occlusions, and crowded environments. The aim of the MOSE dataset is to foster the development of complex video understanding.
  • Figure 2: Example Videos of the Motion expressions Video Segmentation (MeViS) dataset. The expressions in MeViS mainly emphasize motion attributes, making it impossible to identify the referred target object by looking at a single frame. The aim of the MeViS dataset is to foster the development of motion understanding in complex scenes.
  • Figure 3: Overall framework of PCL_VisionLab team method, 1st place solution for MOSE Challenge in CVPR 2024.
  • Figure 4: Overall framework of Yao_Xu_MTLab team method, 2nd place solution for MOSE Challenge in CVPR 2024.
  • Figure 5: Architecture of Cutie.
  • ...and 4 more figures