SDDF: Specificity-Driven Dynamic Focusing for Open-Vocabulary Camouflaged Object Detection

Jiaming Liang, Yifeng Zhan, Chunlin Liu, Weihua Zheng, Bingye Peng, Qiwei Liang, Boyang Cai, Xiaochun Mai, Qiang Nie

Abstract

Open-vocabulary object detection (OVOD) aims to detect both known and unknown objects in the open world by leveraging text prompts. Benefiting from the emergence of large-scale vision-language pre-trained models, OVOD has demonstrated strong zero-shot generalization. When dealing with camouflaged objects, however, detectors often fail to distinguish and localize them because the visual features of the objects and the background are highly similar. To bridge this gap, we construct a benchmark named OVCOD-D by augmenting carefully selected camouflaged-object images with fine-grained textual descriptions. Because available camouflaged-object datasets are limited in scale, we adopt detectors pre-trained on large-scale object detection datasets as our baselines, as they possess stronger zero-shot generalization ability. The specificity-aware sub-descriptions generated by multimodal large models still contain confusing and overly decorative modifiers. To mitigate this interference, we design a sub-description principal component contrastive fusion strategy that suppresses noisy textual components. Furthermore, to address the high visual similarity between camouflaged objects and their surroundings, we propose a specificity-guided regional weak alignment and dynamic focusing method that strengthens the detector's ability to discriminate camouflaged objects from the background. Under the open-set evaluation setting, the proposed method achieves an AP of 56.4 on the OVCOD-D benchmark.
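
To make the OVOD setting concrete: detectors in this family score each candidate region against encoded text prompts rather than against a fixed classifier head. The snippet below is a minimal sketch of that region-text alignment, in the style of YOLO-World-like detectors; all names, tensor shapes, and the temperature value are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def region_text_scores(obj_embeds: torch.Tensor,
                       text_embeds: torch.Tensor,
                       temperature: float = 0.05) -> torch.Tensor:
    """Score each region proposal against each class prompt.

    obj_embeds:  (N, D) visual embeddings of N candidate regions.
    text_embeds: (C, D) encoded text prompts for C categories.
    Returns:     (N, C) similarity logits.
    """
    obj = F.normalize(obj_embeds, dim=-1)   # unit-normalize region features
    txt = F.normalize(text_embeds, dim=-1)  # unit-normalize prompt features
    return obj @ txt.t() / temperature      # scaled cosine similarity

# Usage: 100 candidate regions scored against 87 class prompts (hypothetical sizes).
scores = region_text_scores(torch.randn(100, 512), torch.randn(87, 512))
```

For camouflaged objects, the failure mode the paper targets is precisely that the region embeddings of object and background become nearly identical, so these similarity logits no longer separate the classes.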

Paper Structure

This paper contains 32 sections, 10 equations, 6 figures, and 10 tables.

Figures (6)

  • Figure 1: We perform zero-shot detection with YOLO-World-M [cheng2024yolo] on both the LVIS [gupta2019lvis] dataset and our OVCOD-D dataset. Comparing the AP of the overlapping categories across the two datasets, we observe a substantial performance decline on OVCOD-D, indicating that open-vocabulary detectors face significant challenges with camouflaged objects.
  • Figure 2: We conducted a statistical analysis of per-class instance counts in the OVCOD-D dataset. In the bar chart, blue bars denote base classes and red bars denote novel classes, revealing a pronounced long-tailed distribution of class frequencies. Additionally, using the fine-grained textual descriptions associated with each category, we selected 25 categories characterized by higher lexical richness to construct a multidimensional quality-analysis heatmap. The horizontal axis reports, in sequence, lexical diversity, average tokens, unique words, average unique-word ratio, and the standard deviation of sentence length.
  • Figure 3: Overall architecture of the proposed specificity-driven open-vocabulary camouflaged object detector. Fine-grained textual sub-descriptions are encoded by a text encoder and decorrelated via SVD, then refined through an adapter and integrated with visual object embeddings in the contrastive fusion module. The image is processed by a lightweight YOLO-style backbone and a PAN to extract multi-scale features, which are transformed into object embeddings and fed in parallel into the SF-GLU and the box head. (A minimal sketch of the SVD decorrelation step appears after this list.)
  • Figure 4: Construction pipeline of the OVCOD-D dataset. We extend COD10K-D, NC4K-D, and a cleaned CAMO-D with YOLO-style detection labels and an additional subset of red imported fire ant nests, then reorganize them into 40 base and 47 novel classes. Qwen3-VL-Plus generates fine-grained image descriptions from which we derive a semantic prompt library for open-vocabulary camouflaged object detection.
  • Figure A1: Qualitative comparison via visualization of the detection bounding boxes.
  • ...and 1 more figure
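
Figure 3 describes sub-description embeddings being decorrelated via SVD before contrastive fusion with the visual object embeddings. The snippet below is a minimal sketch of one plausible reading of that step: keep only the top-k singular directions of a category's sub-description embeddings, on the assumption that low-energy components carry the confusing, decorative modifiers, and pool the result into a single class embedding. The function name, the choice of k, and the mean pooling are illustrative assumptions; the paper's actual fusion module is not reproduced here.

```python
import torch
import torch.nn.functional as F

def principal_text_embedding(sub_embeds: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Suppress noisy modifiers in a category's sub-description embeddings via SVD.

    sub_embeds: (K, D) embeddings of one category's K sub-descriptions.
    Returns:    (D,) pooled class embedding built from the top-k singular
                directions; residual components are assumed to be noise.
    """
    mean = sub_embeds.mean(dim=0, keepdim=True)
    centered = sub_embeds - mean
    U, S, Vh = torch.linalg.svd(centered, full_matrices=False)
    k = min(k, S.numel())
    recon = (U[:, :k] * S[:k]) @ Vh[:k]       # rank-k reconstruction of the centered embeddings
    fused = (recon + mean).mean(dim=0)        # restore the mean, then pool over sub-descriptions
    return F.normalize(fused, dim=-1)

# Usage: 8 sub-description embeddings of dimension 512 (hypothetical sizes).
class_embed = principal_text_embedding(torch.randn(8, 512))
```

A class embedding produced this way can then be scored against region embeddings exactly as in the region-text sketch after the abstract.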