
Consistency Beyond Contrast: Enhancing Open-Vocabulary Object Detection Robustness via Contextual Consistency Learning

Bozhao Li, Shaocong Wu, Tong Shao, Senqiao Yang, Qiben Shan, Zhuotao Tian, Jingyong Su

Abstract

Recent advances in open-vocabulary object detection focus primarily on two aspects: scaling up datasets and leveraging contrastive learning to align the language and vision modalities. However, these approaches often neglect internal consistency within a single modality, particularly under background or environmental changes. This lack of consistency causes a performance drop, as the model struggles to detect the same object across different scenes, revealing a robustness gap. To address this issue, we introduce Contextual Consistency Learning (CCL), a novel framework that integrates two key strategies: Contextual Bootstrapped Data Generation (CBDG) and Contextual Consistency Loss (CCLoss). CBDG is a data generation mechanism that produces images containing the same objects across diverse backgrounds; this is essential because existing datasets alone cannot support the CCL framework. CCLoss then enforces the invariance of object features under environmental changes, improving the model's robustness across scenes. Together, these strategies form a unified framework for ensuring contextual consistency within the same modality. Our method achieves state-of-the-art performance, surpassing previous approaches by +16.3 AP on OmniLabel and +14.9 AP on D3. These results demonstrate the importance of enforcing intra-modal consistency, which significantly enhances model generalization in diverse environments. Our code is publicly available at: https://github.com/bozhao-li/CCL.
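
To make the CCLoss idea concrete, below is a minimal sketch of a contextual consistency objective in PyTorch. It is our illustration, not the released implementation: the function name, the feature shapes, and the choice of a mean-feature anchor with cosine distance are all assumptions consistent with the abstract's description of keeping the same object's features invariant across background variants.

```python
# Hypothetical sketch of a contextual consistency loss; names and design
# choices are ours, not taken from the authors' code.
import torch
import torch.nn.functional as F

def contextual_consistency_loss(obj_feats: torch.Tensor) -> torch.Tensor:
    """obj_feats: (K, D) pooled features of one object under K backgrounds.

    Penalizes each variant's cosine distance from the (detached) mean
    feature, so object embeddings stay invariant to background changes.
    """
    feats = F.normalize(obj_feats, dim=-1)            # unit-normalize variants
    anchor = F.normalize(feats.mean(dim=0), dim=-1)   # mean feature as anchor
    cos_sim = feats @ anchor.detach()                 # (K,) similarity to anchor
    return (1.0 - cos_sim).mean()                     # 0 when all variants agree

# Toy usage: 4 background variants of the same object, 256-d features.
feats = torch.randn(4, 256)
loss = contextual_consistency_loss(feats)
```

In training, such a term would be weighted and added to the detector's standard losses, matching the total-loss composition described in the Figure 2 caption below.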


Paper Structure

This paper contains 53 sections, 11 equations, 8 figures, 10 tables, and 1 algorithm.

Figures (8)

  • Figure 1: Performance and robustness comparison of different methods. (a) Our approach, with Contextual Consistency Learning, achieves the best overall results, reaching a normalized score of 1 on all metrics. (b, c) Benchmark backgrounds are altered to test robustness: evaluated on $D^3$BC, baseline methods degrade while ours remains stable. See Section \ref{sec:robust_D3BC} for details.
  • Figure 2: Overview of our approach. CBDG generates $D_j$ via Categorical Augmentation, Background Generation, and Background Replacement. CCL training then uses $D_j$, with CCLoss added to the total loss.
  • Figure 3: CBDG pipeline. We use ChatGPT to generate background prompts for a diffusion model, enabling diverse background synthesis. For single-class images, CBDG augments object categories before background replacement; for multi-class images, it replaces only the background. (A hedged code sketch of this step follows the figure list.)
  • Figure 4: Four groups of images are shown, each composed of four sub-images: the leftmost sub-image in every group is the original, while the remaining three display background replacements.
  • Figure 5: CCL Framework. Visual and textual features are encoded, with regional features pooled into CAAF. Consistency loss is applied within each modality.
  • ...and 3 more figures
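
The background-replacement step summarized in the Figure 3 caption can be sketched as follows: an LLM writes background prompts, a text-to-image diffusion model renders them, and the original object is composited over the new scene. Everything below is our reconstruction for illustration, not the authors' code; the function names, the flat-canvas stand-in for the diffusion call, and the toy mask are all assumptions.

```python
# Hedged sketch of CBDG-style background replacement (our reconstruction).
import numpy as np
from PIL import Image

def generate_background(prompt: str, size) -> Image.Image:
    # Placeholder for a text-to-image diffusion call (e.g., a Stable
    # Diffusion pipeline fed with LLM-generated background prompts).
    # A flat gray canvas keeps this sketch runnable standalone.
    return Image.new("RGB", size, (128, 128, 128))

def replace_background(image: Image.Image, mask: np.ndarray, prompt: str) -> Image.Image:
    """Composite the masked object onto a freshly generated background."""
    bg = generate_background(prompt, image.size)
    img = np.asarray(image)
    out = np.asarray(bg).copy()
    out[mask] = img[mask]                    # keep object pixels, swap the rest
    return Image.fromarray(out)

# Toy usage: one object rendered over several generated scenes.
img = Image.new("RGB", (64, 64), (200, 30, 30))
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True                    # stand-in for a real object mask
variants = [replace_background(img, mask, p)
            for p in ("a snowy street", "a sunny beach", "a dim warehouse")]
```

The resulting variants, which show the same object under different backgrounds, are exactly the kind of training pairs the CCLoss sketch above would consume.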