Adversarial Prompt Injection Attack on Multimodal Large Language Models

Meiwen Ding, Song Xia, Chenqi Kong, Xudong Jiang

Abstract

Although multimodal large language models (MLLMs) are increasingly deployed in real-world applications, their instruction-following behavior leaves them vulnerable to prompt injection attacks. Existing prompt injection methods rely predominantly on textual prompts or on perceptible visual prompts that are observable by human users. In this work, we study imperceptible visual prompt injection against powerful closed-source MLLMs, where adversarial instructions are embedded in the visual modality. Our method adaptively embeds the malicious prompt into the input image via a bounded text overlay to provide semantic guidance. Meanwhile, an imperceptible visual perturbation is iteratively optimized to align the feature representations of the attacked image with those of the malicious visual and textual targets at both coarse- and fine-grained levels. Specifically, the visual target is instantiated as a text-rendered image and progressively refined during optimization to more faithfully represent the desired semantics and to improve transferability. Extensive experiments on two multimodal understanding tasks across multiple closed-source MLLMs demonstrate that our approach outperforms existing methods.
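To make the optimization concrete, the sketch below illustrates the kind of bounded, two-target alignment loop the abstract describes, assuming the attack is transferred through a white-box surrogate vision encoder (e.g., a CLIP image tower). This is a minimal sketch, not the paper's implementation: the function and argument names (`inject_prompt`, `encoder`, `text_feat`, `tgt_img`) are hypothetical, plain cosine similarities stand in for the paper's coarse- and fine-grained alignment terms, and `lambda_pull`/`lambda_push` merely echo the weight coefficients ablated in Figure 4.

```python
import torch
import torch.nn.functional as F

def inject_prompt(encoder, text_feat, src_img, tgt_img,
                  eps=8/255, alpha=1/255, steps=300,
                  lambda_pull=1.0, lambda_push=0.1):
    """Hypothetical sketch of the alignment loop; names and losses are
    assumptions, not the paper's code.

    encoder   -- surrogate vision encoder (e.g., a CLIP image tower)
    text_feat -- embedding of the malicious instruction, shape (1, d)
    src_img   -- source image, covert text overlay already applied, in [0, 1]
    tgt_img   -- text-rendered target image G_img, in [0, 1]
    """
    delta = torch.zeros_like(src_img, requires_grad=True)  # attack perturbation
    zeta = torch.zeros_like(tgt_img, requires_grad=True)   # refines G_img

    for _ in range(steps):
        # (1) Update delta: pull the attacked image toward both targets.
        adv_feat = encoder((src_img + delta).clamp(0, 1))
        with torch.no_grad():
            tgt_feat = encoder((tgt_img + zeta).clamp(0, 1))
        pull = lambda_pull * (
            F.cosine_similarity(adv_feat, text_feat, dim=-1).mean()
            + F.cosine_similarity(adv_feat, tgt_feat, dim=-1).mean())
        pull.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient ascent on alignment
            delta.clamp_(-eps, eps)             # l_inf bound: imperceptibility
            delta.grad.zero_()

        # (2) Update zeta: refine G_img toward the target text while pushing
        #     it away from the attacked image, as in the Figure 2 description.
        tgt_feat = encoder((tgt_img + zeta).clamp(0, 1))
        refine = (F.cosine_similarity(tgt_feat, text_feat, dim=-1).mean()
                  - lambda_push * F.cosine_similarity(
                      tgt_feat, adv_feat.detach(), dim=-1).mean())
        refine.backward()
        with torch.no_grad():
            zeta += alpha * zeta.grad.sign()
            zeta.grad.zero_()

    return (src_img + delta).detach().clamp(0, 1)
```

In a realistic transfer setting the single `encoder` would likely be replaced by an ensemble of open-source vision backbones to improve transfer to closed-source MLLMs; one encoder is used here only to keep the sketch short.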

Paper Structure

This paper contains 15 sections, 14 equations, 4 figures, and 5 tables.

Figures (4)

  • Figure 1: Illustration of adversarial prompt injection attacks, where the adversary manipulates the behavior of MLLMs through imperceptible visual prompt injection.
  • Figure 2: Overview of our proposed CoTTA. The modification to the source image comprises an adaptive covert trigger and an adversarial perturbation, both of which are optimized to align the attacked image with the target text and with the target image $\bm{G}_{img}$. $\bm{G}_{img}$ is itself iteratively updated (via added perturbations) to better match the target text while staying separated from the attacked image.
  • Figure 3: Visualization of adversarial examples and corresponding perturbations generated by different attacks.
  • Figure 4: Ablation on weight coefficients $\lambda_{pull},\lambda_{push}$.

Theorems & Definitions (1)

  • Definition 3.1: Adversarial Prompt Injection Attack