Revisiting Backdoor Attacks against Large Vision-Language Models from Domain Shift

Siyuan Liang, Jiawei Liang, Tianyu Pang, Chao Du, Aishan Liu, Mingli Zhu, Xiaochun Cao, Dacheng Tao

TL;DR

This work introduces backdoor domain generalization as a new axis to evaluate LVLM robustness under visual and text distribution shifts during instruction tuning. It shows that traditional backdoors can generalize across domains when triggers are domain-agnostic and strategically placed, and introduces MABA, a multimodal attribution-based attack that significantly boosts cross-domain generalization. Through extensive experiments on OpenFlamingo, Blip-2, and Otter, the authors demonstrate that MABA achieves up to 97% ASR at 0.2% poisoning and improves ASR-G by up to 114% relative to baselines, revealing serious security concerns for LVLMs even without test-data access. The study also provides a framework for cross-domain evaluation and highlights limitations in current defenses, underscoring the need for robust safeguards in multimodal instruction-tuning pipelines.

Abstract

Instruction tuning enhances large vision-language models (LVLMs) but, because of its open design, also increases their vulnerability to backdoor attacks. Unlike prior studies conducted in static settings, this paper explores backdoor attacks in LVLM instruction tuning when training and testing domains are mismatched. We introduce a new evaluation dimension, backdoor domain generalization, to assess attack robustness under visual and text domain shifts. Our findings reveal two insights: (1) backdoor generalizability improves when distinctive trigger patterns are independent of specific data domains or model architectures, and (2) trigger patterns compete with clean semantic regions, so guiding the model to predict the trigger enhances attack generalizability. Based on these insights, we propose a multimodal attribution backdoor attack (MABA) that injects domain-agnostic triggers into critical regions identified via attributional interpretation. Experiments with OpenFlamingo, Blip-2, and Otter show that MABA significantly boosts the generalization attack success rate by 36.4%, achieving a 97% success rate at a 0.2% poisoning rate. This study exposes limitations in current evaluations and highlights how enhanced backdoor generalizability poses a security threat to LVLMs, even without access to test data.
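The paper's method is not reproduced here, but the core idea of attribution-guided trigger placement can be illustrated with a minimal sketch. The helper below, `place_trigger`, is hypothetical: it assumes a per-pixel attribution map has already been computed (e.g., by any gradient- or perturbation-based interpretation method, not necessarily the one MABA uses) and simply slides the trigger patch over the image to find the window with the highest summed attribution, then pastes the patch there.

```python
import numpy as np

def place_trigger(image, attribution, trigger, stride=1):
    """Paste `trigger` at the image window with the highest summed attribution.

    image:       H x W x C array (the clean sample to poison)
    attribution: H x W array of per-pixel importance scores
    trigger:     h x w x C patch to inject
    Returns the poisoned copy and the (row, col) of the chosen window.
    """
    H, W = attribution.shape
    th, tw = trigger.shape[:2]
    best_score, best_pos = -np.inf, (0, 0)
    # Exhaustive sliding-window search over attribution mass.
    for y in range(0, H - th + 1, stride):
        for x in range(0, W - tw + 1, stride):
            score = attribution[y:y + th, x:x + tw].sum()
            if score > best_score:
                best_score, best_pos = score, (y, x)
    y, x = best_pos
    poisoned = image.copy()  # leave the original sample untouched
    poisoned[y:y + th, x:x + tw] = trigger
    return poisoned, best_pos
```

The design choice the sketch captures is that the trigger is placed where the model's attribution is concentrated, so it competes directly with the clean semantic region rather than sitting in an arbitrary corner; the attribution map itself would come from the victim or a surrogate model in a real attack pipeline.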

Paper Structure

This paper contains 13 sections, 8 equations, 9 figures, 2 tables.

Figures (9)

  • Figure 1: Illustration of backdoor attack during LVLM instruction-tuning. Despite successful poisoning, domain shift between attacker's and user's instructions may prevent trigger activation.
  • Figure 2: Overview of our backdoor domain generalization framework. We construct a multimodal domain-shifted dataset (a), evaluate three backdoor attacks (b), and design a multimodal attribution backdoor attack to improve attack generalization (c).
  • Figure 3: Statistical analysis of domain shifts in multimodal instruction sets.
  • Figure 4: Attack performance comparison across poisoning rates on different datasets.
  • Figure 5: Domain generalizability of text attacks under question domain shifts in the COCO dataset.
  • ...and 4 more figures