Machine Unlearning with Minimal Gradient Dependence for High Unlearning Ratios

Tao Huang, Ziyang Chen, Jiayang Meng, Qingyu Huang, Xu Yang, Xun Yi, Ibrahim Khalil

TL;DR

This paper introduces Mini-Unlearning, a novel approach built on a critical observation: unlearned parameters correlate with retrained parameters through a contraction mapping. Mini-Unlearning leverages this mapping, using only a minimal subset of historical gradients, to enable scalable, efficient unlearning.

Abstract

In the context of machine unlearning, the primary challenge lies in effectively removing traces of private data from trained models while maintaining model performance and security against privacy attacks like membership inference attacks. Traditional gradient-based unlearning methods often rely on extensive historical gradients, which becomes impractical with high unlearning ratios and may reduce the effectiveness of unlearning. Addressing these limitations, we introduce Mini-Unlearning, a novel approach that capitalizes on a critical observation: unlearned parameters correlate with retrained parameters through contraction mapping. Our method, Mini-Unlearning, utilizes a minimal subset of historical gradients and leverages this contraction mapping to facilitate scalable, efficient unlearning. This lightweight, scalable method significantly enhances model accuracy and strengthens resistance to membership inference attacks. Our experiments demonstrate that Mini-Unlearning not only works under higher unlearning ratios but also outperforms existing techniques in both accuracy and security, offering a promising solution for applications requiring robust unlearning capabilities.


Paper Structure

This paper contains 24 sections, 1 theorem, 17 equations, 3 tables, and 4 algorithms.

Key Result

Theorem 1

Suppose $F(\mathbf{w})$ is $\mu$-strongly convex and $L$-smooth. Then the approximation error via Eq. (10) is $o\left(r^k\right)$, where $r = \max\{ |1-\eta \mu|, |1-\eta L| \} \in (0,1)$.
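The rate $r = \max\{|1-\eta \mu|, |1-\eta L|\}$ is the standard contraction factor of gradient descent on a $\mu$-strongly convex, $L$-smooth objective. A minimal numerical sketch (not the paper's code; the constants $\mu$, $L$, and step size $\eta$ are chosen for illustration) checks that the distance to the optimum shrinks at least as fast as $r^k$ on a diagonal quadratic with curvatures $\mu$ and $L$:

```python
import math

mu, L = 0.5, 2.0   # strong-convexity and smoothness constants (assumed values)
eta = 0.4          # step size chosen so that r = max(|1-eta*mu|, |1-eta*L|) < 1
r = max(abs(1 - eta * mu), abs(1 - eta * L))  # contraction factor, here 0.8

# F(w) = 0.5*(mu*w1^2 + L*w2^2): the simplest objective with these constants.
# The optimum is w* = (0, 0); start one unit away in each coordinate.
w = [1.0, 1.0]
w0_norm = math.hypot(*w)

for k in range(1, 11):
    # One gradient step per coordinate: grad F = (mu*w1, L*w2).
    w = [w[0] - eta * mu * w[0], w[1] - eta * L * w[1]]
    # Distance to the optimum is bounded by the contraction rate r^k.
    assert math.hypot(*w) <= r**k * w0_norm + 1e-12

print(f"r = {r:.2f}; ||w_10|| = {math.hypot(*w):.2e} <= r^10 * ||w_0|| = {r**10 * w0_norm:.2e}")
```

Under this bound, an approximation error of $o(r^k)$ means the unlearned parameters approach the retrained ones strictly faster than gradient descent itself converges.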

Theorems & Definitions (2)

  • Theorem 1
  • Proof of Theorem 1