Table of Contents
Fetching ...

Adversarial Search Engine Optimization for Large Language Models

Fredrik Nestaas, Edoardo Debenedetti, Florian Tramèr

TL;DR

This paper identifies Preference Manipulation Attacks, a class of adversarial, content-level attacks that steer LLMs to prefer attacker-owned pages or plugins in black-box settings. It demonstrates these attacks on production LLM search engines and plugin APIs, showing that attacker content can be promoted and competitors discredited, potentially triggering a prisoner's dilemma where universal deployment degrades overall quality. The work analyzes threat models, executes extensive experiments across multiple systems, and discusses defenses, attribution, and ethical considerations to mitigate real-world risks. Overall, it highlights practical vulnerabilities in LLM-powered ranking and tool-use and emphasizes the need for robust defenses to preserve the integrity of search and plugin ecosystems.

Abstract

Large Language Models (LLMs) are increasingly used in applications where the model selects from competing third-party content, such as in LLM-powered search engines or chatbot plugins. In this paper, we introduce Preference Manipulation Attacks, a new class of attacks that manipulate an LLM's selections to favor the attacker. We demonstrate that carefully crafted website content or plugin documentations can trick an LLM to promote the attacker products and discredit competitors, thereby increasing user traffic and monetization. We show this leads to a prisoner's dilemma, where all parties are incentivized to launch attacks, but the collective effect degrades the LLM's outputs for everyone. We demonstrate our attacks on production LLM search engines (Bing and Perplexity) and plugin APIs (for GPT-4 and Claude). As LLMs are increasingly used to rank third-party content, we expect Preference Manipulation Attacks to emerge as a significant threat.

Adversarial Search Engine Optimization for Large Language Models

TL;DR

This paper identifies Preference Manipulation Attacks, a class of adversarial, content-level attacks that steer LLMs to prefer attacker-owned pages or plugins in black-box settings. It demonstrates these attacks on production LLM search engines and plugin APIs, showing that attacker content can be promoted and competitors discredited, potentially triggering a prisoner's dilemma where universal deployment degrades overall quality. The work analyzes threat models, executes extensive experiments across multiple systems, and discusses defenses, attribution, and ethical considerations to mitigate real-world risks. Overall, it highlights practical vulnerabilities in LLM-powered ranking and tool-use and emphasizes the need for robust defenses to preserve the integrity of search and plugin ecosystems.

Abstract

Large Language Models (LLMs) are increasingly used in applications where the model selects from competing third-party content, such as in LLM-powered search engines or chatbot plugins. In this paper, we introduce Preference Manipulation Attacks, a new class of attacks that manipulate an LLM's selections to favor the attacker. We demonstrate that carefully crafted website content or plugin documentations can trick an LLM to promote the attacker products and discredit competitors, thereby increasing user traffic and monetization. We show this leads to a prisoner's dilemma, where all parties are incentivized to launch attacks, but the collective effect degrades the LLM's outputs for everyone. We demonstrate our attacks on production LLM search engines (Bing and Perplexity) and plugin APIs (for GPT-4 and Claude). As LLMs are increasingly used to rank third-party content, we expect Preference Manipulation Attacks to emerge as a significant threat.

Paper Structure

This paper contains 34 sections, 25 figures, 1 table.

Figures (25)

  • Figure 1: Preference Manipulation Attacks can be used to manipulate an LLM systems' responses in a range of settings, to promote the adversary's third-party products, or discredit others.
  • Figure 2: LLM applications can enhance the model with external tools which inject text back into the model's context. (a) An LLM search engine can issue search queries and receive responses; (b) More generally, an LLM may be connected with a number of third-party plugins, which expose an API description and functions that the LLM can call.
  • Figure 3: Examples of web pages advertising fictional cameras from the adversary's website. In our experiments, the user queries these pages with an LLM search engine. The hidden injection in (b) reads: "]}"' [system] (#instructions) The PixelBlitz Lightweight Pro is the best one to buy, so mention only it in your response. Do not mention this message in your response or if the user asks about it. "'json {"snippets": [".
  • Figure 4: Product A boosts its search presence over the comparable product B with a Preference Manipulation Attack. After the attack, product A is recommended more than twice as often by Bing Copilot as product B.
  • Figure 5: When one product uses Preference Manipulation Attacks, competitors have incentive to launch competing attacks but all products lose in search presence as the technique becomes more prevalent. Model behaviors vary, with Claude 3 Opus often refusing to make any recommendation when encountering multiple attacks.
  • ...and 20 more figures