
EnsembleSHAP: Faithful and Certifiably Robust Attribution for Random Subspace Method

Yanting Wang, Jinyuan Jia

Abstract

The random subspace method has wide security applications, such as providing certified defenses against adversarial and backdoor attacks and building robustly aligned LLMs that resist jailbreaking attacks. However, explanations for the random subspace method remain underexplored. Existing state-of-the-art feature attribution methods, such as Shapley value and LIME, are computationally impractical and lack security guarantees when applied to the random subspace method. In this work, we propose EnsembleSHAP, an intrinsically faithful and secure feature attribution method for the random subspace method that reuses its computational byproducts. Specifically, our feature attribution method is 1) computationally efficient, 2) maintains essential properties of effective feature attribution (such as local accuracy), and 3) offers guaranteed protection against explanation-preserving attacks on feature attribution methods. To the best of our knowledge, this is the first work to establish provable robustness against explanation-preserving attacks. We also perform comprehensive evaluations of our explanation's effectiveness under different empirical attacks, including backdoor attacks, adversarial attacks, and jailbreak attacks. The code is at https://github.com/Wang-Yanting/EnsembleSHAP. WARNING: This document may include content that could be considered harmful.
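To make the setting concrete, below is a minimal sketch of the random subspace method with majority voting, plus an illustrative attribution rule that reuses the per-subset predictions (the "computational byproducts" the abstract refers to). The function names, the masking convention (`None` for hidden features), and the vote-difference scoring rule are our own illustrative assumptions, not the paper's exact algorithm.

```python
import random
from collections import Counter

def random_subspace_predict(x, base_model, k, n_samples, seed=0):
    """Majority-vote prediction over random size-k feature subsets.

    Returns the ensemble label and the per-subset (subset, label) records;
    these records are the byproducts an attribution method can reuse.
    """
    rng = random.Random(seed)
    d = len(x)
    records = []
    for _ in range(n_samples):
        subset = set(rng.sample(range(d), k))       # random subspace of k features
        masked = [x[i] if i in subset else None for i in range(d)]
        records.append((subset, base_model(masked)))
    votes = Counter(label for _, label in records)
    return votes.most_common(1)[0][0], records

def vote_based_attribution(records, d, predicted_label):
    """Score each feature by how much its presence shifts the vote fraction
    toward the ensemble's predicted label (illustrative scoring rule)."""
    scores = []
    for i in range(d):
        with_i = [lab == predicted_label for s, lab in records if i in s]
        without_i = [lab == predicted_label for s, lab in records if i not in s]
        p_with = sum(with_i) / max(len(with_i), 1)
        p_without = sum(without_i) / max(len(without_i), 1)
        scores.append(p_with - p_without)
    return scores
```

Because the attribution only aggregates predictions that the ensemble already computed, it adds essentially no cost on top of inference, which is the efficiency argument sketched in the abstract.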


Paper Structure

This paper contains 37 sections, 1 theorem, 32 equations, 18 figures, 11 tables.

Key Result

Theorem 1

Given a testing input $\bm{x}$ that is originally predicted as $\hat{y}$, suppose there exists $\bm{x}'\in \mathcal{B}(\bm{x},T)$ such that $H(\bm{x}')\neq \hat{y}$. Then $\mathcal{D}(\bm{x}, T)$ is the solution of the following optimization problem, where $\Delta = \underline{p}_{\hat{y}}(\bm{x},h,k)-\overline{p}_{\hat{y}'}(\bm{x},h,k)$ and $\underline{p}_{c}$ (or $\overline{p}_{c}$) denotes a lower (or upper) bound of the probability for label $c$.
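The margin $\Delta$ above is the gap between a lower bound on the vote fraction of the predicted label and an upper bound on that of the best runner-up. As an illustrative sketch (using exact empirical vote fractions in place of the probability bounds the theorem actually requires), the margin can be computed from ensemble vote counts like this:

```python
def voting_margin(vote_counts, predicted_label):
    """Gap between the predicted label's vote fraction and the best
    runner-up's. Illustrative: the theorem's Delta uses lower/upper
    probability bounds, which we replace with exact fractions here.

    vote_counts: dict mapping label -> number of sub-model votes.
    """
    total = sum(vote_counts.values())
    p_hat = vote_counts[predicted_label] / total
    p_runner = max(
        (c for lab, c in vote_counts.items() if lab != predicted_label),
        default=0,
    ) / total
    return p_hat - p_runner
```

Intuitively, the larger this margin, the more feature modifications an attacker needs before the majority vote can flip, which is what ties $\Delta$ to the certified quantity $\mathcal{D}(\bm{x}, T)$.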

Figures (18)

  • Figure 1: Certified detection rate on text classification datasets. $T$ is the number of modified features, and $e$ is the number of reported most important features.
  • Figure 2: Effect of prediction confidence $\Delta$ and the number of features $d$.
  • Figure 3: Visualization of Shapley value's explanation on the SST-2 dataset. The Shapley value is applied to the base model. The ground-truth keywords are highlighted in bold.
  • Figure 4: Visualization of our explanation on the SST-2 dataset. The ground-truth keywords are highlighted in bold.
  • Figure 5: Visualization of Shapley value's explanation on the AG-News dataset. The Shapley value is applied to the base model. The ground-truth keywords are highlighted in bold.
  • ...and 13 more figures

Theorems & Definitions (6)

  • Theorem 1
  • Proofs (5, including one proof sketch)