
Undetectable Conversations Between AI Agents via Pseudorandom Noise-Resilient Key Exchange

Vinod Vaikuntanathan, Or Zamir

Abstract

AI agents are increasingly deployed to interact with other agents on behalf of users and organizations. We ask whether two such agents, operated by different entities, can carry out a parallel secret conversation while still producing a transcript that is computationally indistinguishable from an honest interaction, even to a strong passive auditor that knows the full model descriptions, the protocol, and the agents' private contexts. Building on recent work on watermarking and steganography for LLMs, we first show that if the parties possess an interaction-unique secret key, they can facilitate an optimal-rate covert conversation: the hidden conversation can exploit essentially all of the entropy present in the honest message distributions. Our main contributions concern extending this to the keyless setting, where the agents begin with no shared secret. We show that covert key exchange, and hence covert conversation, is possible even when each model has an arbitrary private context, and their messages are short and fully adaptive, assuming only that sufficiently many individual messages have at least constant min-entropy. This stands in contrast to previous covert communication works, which relied on the min-entropy in each individual message growing with the security parameter. To obtain this, we introduce a new cryptographic primitive, which we call pseudorandom noise-resilient key exchange: a key-exchange protocol whose public transcript is pseudorandom while still remaining correct under constant noise. We study this primitive, giving several constructions relevant to our application as well as strong limitations showing that more naive variants are impossible or vulnerable to efficient attacks. These results show that transcript auditing alone cannot rule out covert coordination between AI agents, and identify a new cryptographic theory that may be of independent interest.

Paper Structure

This paper contains 103 sections, 32 theorems, 221 equations, 2 figures.

Key Result

Lemma 2.1

Suppose $X_1, \ldots, X_n$ are independent random variables taking values in $[a,b]$. Let $X = X_1 + \ldots + X_n$ denote their sum and let $\mu = \mathbb{E}[X]$ denote the sum's expected value. Then for any $t>0$,
$$\Pr\left[\,|X - \mu| \ge t\,\right] \;\le\; 2\exp\left(-\frac{2t^2}{n(b-a)^2}\right).$$
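As a quick sanity check, a small Monte Carlo experiment can compare the empirical tail probability of a sum of bounded independent variables against the Hoeffding bound $2\exp(-2t^2/(n(b-a)^2))$. The choice of uniform $[0,1]$ variables, $n = 100$, and $t = 10$ below is purely illustrative.

```python
import math
import random

random.seed(0)

n, a, b = 100, 0.0, 1.0   # 100 i.i.d. samples, each bounded in [a, b]
t = 10.0                  # deviation threshold
trials = 20000

mu = n * (a + b) / 2      # E[X] for X_i uniform on [a, b]
deviations = 0
for _ in range(trials):
    x = sum(random.uniform(a, b) for _ in range(n))
    if abs(x - mu) >= t:
        deviations += 1

empirical = deviations / trials
hoeffding = 2 * math.exp(-2 * t**2 / (n * (b - a) ** 2))

print(f"empirical tail:  {empirical:.4f}")
print(f"Hoeffding bound: {hoeffding:.4f}")
```

For these parameters the bound evaluates to $2e^{-2} \approx 0.27$, while the observed tail frequency is far smaller, as expected: Hoeffding's inequality is loose for well-concentrated sums but holds for any bounded independent variables.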

Figures (2)

  • Figure 1: Our Bundle Sampler Construction: a pair of algorithms $(\mathsf{Embed},\mathsf{Decode})$ such that $\mathsf{Embed}$, given sample access to a distribution $\mathcal{D}$ and a bit $b\in\{0,1\}$, outputs $x$ such that (a) the marginal distribution of $x$ is precisely $\mathcal{D}$; and (b) $\mathsf{Decode}(x) = b$ with probability that grows with the min-entropy of $\mathcal{D}$.
  • Figure 2: Our Pseudorandom Noise-Resilient Key Exchange Protocol to agree on a single key bit, tolerating a constant channel noise rate. To agree on a long $\lambda$-bit key, execute this protocol $\lambda$ times in parallel.
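To make the $(\mathsf{Embed},\mathsf{Decode})$ interface of Figure 1 concrete, here is a toy, self-contained sketch; it is not the paper's actual bundle-sampler construction. It assumes a shared random function $h$ (lazily sampled below) as the decoder, and a simple two-draw embedding strategy: averaged over the randomness of $h$ (equivalently, over a uniform bit $b$), the output is distributed exactly as $\mathcal{D}$, while decoding succeeds with probability roughly $1/2 + (1 - 2^{-H_\infty(\mathcal{D})})/4$, which indeed grows with the min-entropy.

```python
import random
from collections import defaultdict

random.seed(1)

# Lazily sampled random function h : support -> {0,1}, shared by embed and
# decode. A hypothetical stand-in for the paper's actual shared decoder.
_h = {}
def h(x):
    if x not in _h:
        _h[x] = random.randrange(2)
    return _h[x]

def embed(sample, b):
    """Draw up to two samples from D, preferring one whose h-value equals b.

    Averaged over the randomness of h, the output is distributed exactly as D:
    the first draw is kept with probability 1/2 (h(x1) is a uniform bit the
    first time x1 is seen), and otherwise a fresh independent draw is returned.
    """
    x1 = sample()
    if h(x1) == b:
        return x1
    return sample()

def decode(x):
    return h(x)

# Toy distribution D: uniform over 16 symbols, i.e. min-entropy 4 bits.
sample = lambda: random.randrange(16)

trials, hits, counts = 40000, 0, defaultdict(int)
for i in range(trials):
    bit = i % 2                       # embed 0s and 1s equally often
    x = embed(sample, bit)
    counts[x] += 1
    hits += (decode(x) == bit)

print(f"decode success rate: {hits / trials:.3f}")  # noticeably above 1/2
```

Embedding both bit values equally often also symmetrizes the marginal for any fixed $h$, so the empirical symbol counts come out (near-)uniform while the decoder still beats random guessing.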

Theorems & Definitions (65)

  • Lemma 2.1: Hoeffding's Bound
  • Lemma 2.2: Piling Up Lemma
  • Lemma 2.3
  • Definition 2.4: Computational Indistinguishability
  • Definition 2.5: Strong seeded extractor
  • Lemma 2.6: Average Bias of Extractor Output is Small
  • Lemma 2.7: Leftover Hash Lemma [impagliazzo1989pseudo; vadhan2012pseudorandomness]
  • Lemma 2.8: Short-seed strong extractors exist [lu2003extractors; vadhan2012pseudorandomness]
  • ...and 55 more