
Protecting User Prompts Via Character-Level Differential Privacy

Shashie Dilhara Batan Arachchige, Hassan Jameel Asghar, Benjamin Zi Hao Zhao, Dinusha Vatsalan, Dali Kaafar

Abstract

Large Language Models (LLMs) generate responses based on user prompts. Often, these prompts may contain highly sensitive information, including personally identifiable information (PII), which could be exposed to third parties hosting these models. In this work, we propose a new method to sanitize user prompts. Our mechanism uses the randomized response mechanism of differential privacy to randomly and independently perturb each character in a word. The perturbed text is then sent to a remote LLM, which first performs a prompt restoration and subsequently performs the intended downstream task. The idea is that restoration can reconstruct non-sensitive words even when they are perturbed, thanks to contextual cues and the fact that such words are often very common; sensitive words, by contrast, are rare, so perturbation makes their reconstruction difficult. We experimentally validate our method on two datasets, i2b2/UTHealth and Enron, using two LLMs: Llama-3.1 8B Instruct and GPT-4o mini. We also compare our approach with a word-level differentially private mechanism, and with a rule-based PII redaction baseline, using a unified privacy-utility evaluation. Our results show that the sensitive PII tagged in these datasets is reconstructed at a rate close to the theoretical rate of reconstructing completely random words, whereas non-sensitive words are reconstructed at a much higher rate. Our method has the advantage that it can be applied without explicitly identifying sensitive pieces of information in the prompt, while showing a good privacy-utility tradeoff for downstream tasks.
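The perturbation step described above is $k$-ary randomized response applied independently to each character. A minimal sketch follows; the function names, the lowercase-only alphabet, and the choice to pass non-alphabet characters (spaces, punctuation) through unchanged are illustrative assumptions, not the paper's exact implementation.

```python
import math
import random
import string

def krr_char(c, alphabet, eps, rng):
    """k-ary randomized response on one character.

    Keeps c with probability e^eps / (e^eps + k - 1); otherwise
    emits one of the other k - 1 symbols uniformly at random.
    This per-character step satisfies eps-local differential privacy.
    """
    k = len(alphabet)
    p_keep = math.exp(eps) / (math.exp(eps) + k - 1)
    if rng.random() < p_keep:
        return c
    return rng.choice([a for a in alphabet if a != c])

def perturb_prompt(text, eps, alphabet=string.ascii_lowercase, rng=None):
    """Perturb each in-alphabet character of a prompt independently.

    Characters outside the alphabet pass through unchanged here
    (an assumption for illustration), so word boundaries survive.
    """
    rng = rng or random.Random()
    return "".join(
        krr_char(c, alphabet, eps, rng) if c in alphabet else c
        for c in text
    )
```

For large $\epsilon$ the keep probability approaches 1 and the prompt is sent nearly intact; for small $\epsilon$ each character is close to uniform over the alphabet, which is what makes rare (sensitive) words hard for the remote LLM to restore.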

Paper Structure

This paper contains 30 sections, 2 theorems, 19 equations, 6 figures, 1 table, 1 algorithm.

Key Result

Theorem 3.2

If $\mathcal{M}$ is $\epsilon$-DP, then for any algorithm $\mathcal{M}'$, $\mathcal{M}' \circ \mathcal{M}$ is also $\epsilon$-DP. In this pipeline, the remote LLM's restoration and downstream task are post-processing of the perturbed prompt, so they cannot weaken the $\epsilon$-DP guarantee.

Figures (6)

  • Figure 1: Proposed privacy-preserving prompt restoration pipeline with $k$-ary randomized response
  • Figure 2: The log-likelihood ratio and the probability that the original word $w$ equals the perturbed word $w^*$, for a randomly constructed 6-character word, across different values of $\epsilon$.
  • Figure 3: Sensitive and non-sensitive term reconstruction (%) by remote LLMs, theoretical reconstruction baseline $\Pr[\mathsf{T}_0]$ from Eq. \ref{eq:total}, and average semantic similarity (%) vs. $\epsilon$ for the i2b2/UTHealth and Enron datasets.
  • Figure 4: Analysis of name reconstruction on the i2b2/UTHealth dataset across different $\epsilon$ values.
  • Figure 5: Analysis of location reconstruction on the i2b2/UTHealth dataset across different $\epsilon$ values.
  • ...and 1 more figure

Theorems & Definitions (3)

  • Definition 3.1: Local Differential Privacy
  • Theorem 3.2: Post-processing Property (dwork2006calibrating)
  • Theorem 3.3: Sequential Composition (dwork2014algorithmic)