POPCat: Propagation of particles for complex annotation tasks

Adam Srebrnjak Yang, Dheeraj Khanna, John S. Zelek

TL;DR

POPCat tackles the costly annotation bottleneck in multi-target video datasets by combining particle-based point propagation (PIPs) with segmentation-driven bounding-box resizing (SAM) and a YOLOv8 detector trained on the generated labels. The method seeds with minimal manual input (a single annotation-sequence pair) and propagates targets across frames to create thousands of labeled instances, achieving an annotation ratio of about 1:274 while maintaining near-human accuracy. Evaluations on GMOT-40, AnimalTrack, and VisDrone-2019 show substantial improvements in recall, mAP50, and mAP over strong baselines, while remaining compatible with constrained hardware (e.g., 12 GB of VRAM). The approach offers a practical, scalable solution for industrial datasets with fixed cameras and dense targets, enabling rapid dataset creation and model training.
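To make the first two stages concrete, below is a minimal sketch of a POPCat-style labeling loop: propagate seed points frame to frame, then grow each point into a box with SAM. The `tracker` object and its `step` method are hypothetical stand-ins for a PIPs-style particle tracker (the real PIPs interface operates on a window of frames at once); the SAM calls follow the public `segment_anything` API, and the checkpoint name is an assumption, not the paper's configuration.

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

def popcat_annotate(frames, seed_points, tracker,
                    checkpoint="sam_vit_b_01ec64.pth"):
    """Sketch of the POPCat labeling loop: propagate user-clicked seed
    points through the sequence, then grow each point into a box with SAM."""
    predictor = SamPredictor(sam_model_registry["vit_b"](checkpoint=checkpoint))
    points = np.asarray(seed_points, dtype=float)  # (N, 2) clicks on frame 0
    labels = []
    for frame in frames:  # frames: iterable of HxWx3 uint8 RGB arrays
        # `tracker.step` is a hypothetical wrapper that re-associates the
        # points with the next frame, as a PIPs-style particle tracker would.
        points = tracker.step(frame, points)
        predictor.set_image(frame)
        boxes = []
        for x, y in points:
            masks, _, _ = predictor.predict(
                point_coords=np.array([[x, y]]),
                point_labels=np.array([1]),  # 1 = foreground point prompt
                multimask_output=False,
            )
            ys, xs = np.nonzero(masks[0])
            if xs.size == 0:
                continue  # empty mask; drop this target for the frame
            boxes.append((xs.min(), ys.min(), xs.max(), ys.max()))
        labels.append(boxes)
    return labels  # per-frame lists of (x1, y1, x2, y2) pseudo-labels
```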

Abstract

Novel dataset creation for multi-object tracking, crowd-counting, and industrial videos is arduous and time-consuming when faced with a unique class that densely populates a video sequence. We propose a time-efficient method called POPCat that exploits the multi-target and temporal features of video data to produce a semi-supervised pipeline for segmentation- or box-based video annotation. The method retains the accuracy associated with human-level annotation while generating a large volume of semi-supervised annotations for greater generalization. It capitalizes on temporal features by using a particle tracker to expand the domain of human-provided target points, re-associating the initial points with the set of frames that follow the labeled frame. A YOLO model is then trained on this generated data and rapidly infers on the target video. Evaluations are conducted on the GMOT-40, AnimalTrack, and VisDrone-2019 benchmarks. These multi-target video tracking/detection sets contain multiple similar-looking targets, camera movements, and other features commonly seen in "wild" situations. We specifically choose these difficult datasets to demonstrate the efficacy of the pipeline and for comparison purposes. Applied to GMOT-40, AnimalTrack, and VisDrone-2019, the method improves recall/mAP50/mAP over the best prior results by 24.5%/9.6%/4.8%, -/43.1%/27.8%, and 7.5%/9.4%/7.5%, respectively, where metrics were collected.
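As a rough illustration of the final stage, the sketch below fine-tunes a YOLOv8 detector on the propagated labels using the Ultralytics API and then runs it back over the source video. Here `popcat.yaml` is a hypothetical dataset config pointing at the generated images and YOLO-format label files, and the hyperparameters are placeholders rather than the paper's settings.

```python
from ultralytics import YOLO

# Fine-tune a pretrained YOLOv8 detector on the propagated pseudo-labels.
# "popcat.yaml" is a hypothetical dataset config listing the generated
# images and YOLO-format label files; epochs/imgsz are placeholder values.
model = YOLO("yolov8n.pt")
model.train(data="popcat.yaml", epochs=100, imgsz=640)

# Rapid inference on the target video to label the remaining frames.
results = model.predict("target_video.mp4", conf=0.25)
```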

Paper Structure

This paper contains 21 sections, 5 figures, 4 tables, and 1 algorithm.

Figures (5)

  • Figure 1: Detection and Segmentation Outputs on GMOT-40 and AnimalTrack
  • Figure 2: POPCat Pipeline Representation
  • Figure 3: Application of Segment Anything Model within the POPCat pipeline
  • Figure 4: Class switching observed during multi-class inference on the VisDrone-2019 dataset. Video uav0000077_00720_v shows pedestrians boxed in red, cars in orange, trucks in green, people sitting in pink, and vans in yellow.
  • Figure 5: Comparison of pipeline components. Yellow arrows indicate changes in detection.