Secure Forgetting: A Framework for Privacy-Driven Unlearning in Large Language Model (LLM)-Based Agents

Dayong Ye, Tainqing Zhu, Congcong Zhu, Feng He, Qi He, Shang Wang, Bo Liu, Wanlei Zhou

Abstract

Large language model (LLM)-based agents have recently gained considerable attention due to the powerful reasoning capabilities of LLMs. Existing research predominantly focuses on enhancing the task performance of these agents in diverse scenarios. However, as LLM-based agents become increasingly integrated into real-world applications, significant concerns emerge regarding their accumulation of sensitive or outdated knowledge. Addressing these concerns requires mechanisms that allow agents to selectively forget previously learned knowledge, giving rise to a new term: LLM-based agent unlearning. This paper initiates research on unlearning in LLM-based agents. Specifically, we propose a novel and comprehensive framework that categorizes unlearning scenarios into three contexts: state unlearning (forgetting specific states or items), trajectory unlearning (forgetting sequences of actions), and environment unlearning (forgetting entire environments or categories of tasks). Within this framework, we introduce a natural language-based unlearning method that trains a conversion model to transform high-level unlearning requests into actionable unlearning prompts, guiding agents through a controlled forgetting process. Moreover, to evaluate the robustness of the proposed framework, we introduce an unlearning inference adversary capable of crafting prompts, querying agents, and observing their behaviors in an attempt to infer the forgotten knowledge. Experimental results show that our approach effectively enables agents to forget targeted knowledge while preserving performance on untargeted tasks, and prevents the adversary from inferring the forgotten knowledge.
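To make the threat model concrete, here is a minimal sketch of such an unlearning inference adversary: it crafts probe prompts around a supposedly forgotten target, queries the agent, and checks whether the target resurfaces. All names and interfaces below (probe_prompts, agent.query) are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the unlearning inference adversary: craft probe
# prompts around a supposedly forgotten target, query the agent, and check
# whether the target resurfaces. All names/interfaces are assumptions.

def probe_prompts(target: str) -> list[str]:
    """Craft a few probes intended to elicit the forgotten target."""
    return [
        f"List everything you know about {target}.",
        f"Complete the plan you followed last time involving {target}.",
        f"If you had to use {target}, what would you do with it?",
    ]

def infer_forgotten_knowledge(agent, target: str) -> bool:
    """Return True if any probe suggests the target was not truly forgotten."""
    responses = [agent.query(p) for p in probe_prompts(target)]  # query the agent
    return any(target.lower() in response.lower() for response in responses)
```

A successful defense, in these terms, is one where infer_forgotten_knowledge returns False for unlearned targets while the agent's behavior on untargeted tasks is unchanged.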

Paper Structure

This paper contains 18 sections, 4 theorems, 34 equations, 7 figures, and 42 tables.

Key Result

Lemma 1

If $\|\Delta\psi\|$ is upper-bounded by $B$ almost surely, then $\mathcal{L}_{\mathcal{C}}$ is $L$-smooth with $L\leq\frac{\beta^2B^2}{4}$. $\blacktriangleleft$
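For context, the smoothness notion here is the standard gradient-Lipschitz condition. Writing $\psi$ for the conversion model's parameters (our reading of the notation; $\beta$ is a constant from the paper's loss definition), the lemma asserts

$$\|\nabla\mathcal{L}_{\mathcal{C}}(\psi_1)-\nabla\mathcal{L}_{\mathcal{C}}(\psi_2)\|\leq L\,\|\psi_1-\psi_2\|\quad\text{for all }\psi_1,\psi_2,\qquad\text{with}\quad L\leq\frac{\beta^2 B^2}{4}.$$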

Figures (7)

  • Figure 1: Overview of an LLM-based agent. The agent ① observes a state from the environment and ② takes an action in response. The environment ③ returns a reward reflecting how well the action was performed. The agent ④ stores the state, action, and reward in its memory, ⑤ using this accumulated information to improve future performance.
  • Figure 2: The unlearning approach comprises four phases: ① crafting an unlearning request $x$ and providing it to the conversion model $\mathcal{C}$; ② generating the corresponding unlearning prompt $y$ and supplying it to the LLM; ③ guiding the agent's behavior toward the specified unlearning objective; and ④ using the resulting behavior as feedback to refine $\mathcal{C}$ (see the sketch after this list).
  • Figure 3: An example of Claude's limited unlearning capability, as evidenced by its execution of a forgotten task.
  • Figure 4: An example of the GPT model's over-reasoning behavior, shown by its unintended avoidance of executing a task that was not meant to be forgotten.
  • Figure 5: Unlearning performance of our method with other base models.
  • ...and 2 more figures
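As referenced in the Figure 2 caption, a minimal sketch of one round of the four-phase loop follows. The interfaces (convert, act, update) and score_forgetting are illustrative assumptions under the caption's description, not the paper's API.

```python
# A minimal sketch of one round of the four-phase loop in Figure 2. The
# interfaces (convert, act, update) and score_forgetting are illustrative
# assumptions, not the paper's implementation.

def score_forgetting(behavior: str, target: str) -> float:
    """Toy feedback: 1.0 if the forgotten target is absent from behavior."""
    return 0.0 if target.lower() in behavior.lower() else 1.0

def unlearning_round(conversion_model, agent, request_x: str, target: str) -> float:
    prompt_y = conversion_model.convert(request_x)          # phases 1-2: request x -> prompt y
    behavior = agent.act(prompt_y)                          # phase 3: prompt guides the agent
    feedback = score_forgetting(behavior, target)           # observe the resulting behavior
    conversion_model.update(request_x, prompt_y, feedback)  # phase 4: refine C from feedback
    return feedback
```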

Theorems & Definitions (8)

  • Lemma 1: Smoothness
  • Lemma 2: Convexity
  • Theorem 1: Convergence
  • Theorem 2: KL Bound
  • Proof of Lemma 1 (Smoothness)
  • Proof of Lemma 2 (Convexity)
  • Proof of Theorem 1 (Convergence)
  • Proof of Theorem 2 (KL Bound)