Dummy-Aware Weighted Attack (DAWA): Breaking the Safe Sink in Dummy Class Defenses

Yunrui Yu, Xuxiang Feng, Pengda Qin, Pengyang Wang, Kafeng Wang, Cheng-zhong Xu, Hang Su, Jun Zhu

Abstract

Adversarial robustness evaluation faces a critical challenge as new defense paradigms emerge that can exploit limitations in existing assessment methods. This paper reveals that Dummy Classes-based defenses, which introduce an additional "dummy" class as a safety sink for adversarial examples, achieve significantly overestimated robustness under conventional evaluation strategies such as AutoAttack. The fundamental limitation stems from these attacks' singular focus on misleading the true class label, which aligns perfectly with the defense mechanism: successful attacks are simply captured by the dummy class. To address this gap, we propose the Dummy-Aware Weighted Attack (DAWA), a novel evaluation method that simultaneously targets both the true label and the dummy label with adaptive weighting during adversarial example synthesis. Extensive experiments demonstrate that DAWA effectively breaks this defense paradigm, reducing the measured robustness of a leading Dummy Classes-based defense from 58.61% to 29.52% on CIFAR-10 under $\ell_\infty$ perturbation ($\epsilon = 8/255$). Our work provides a more reliable benchmark for evaluating this emerging class of defenses and highlights the need for continuous evolution of robustness assessment methodologies.
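The abstract describes the core idea of DAWA: an attack objective that penalizes probability mass on both the true class and the dummy "safe sink" class, with an adaptive weight balancing the two terms. The following NumPy sketch illustrates one plausible form of such a loss; the function name `dawa_loss`, the specific weighting rule, and the role of the smoothing constant `c` (mentioned in Figure 3) are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def dawa_loss(logits, true_idx, dummy_idx, c=1.0):
    """Hypothetical dummy-aware weighted attack loss (illustrative only).

    Maximizing this value pushes probability mass away from BOTH the
    true class and the dummy class. The adaptive weight `alpha` shifts
    attack effort toward whichever of the two targets currently holds
    more probability, smoothed by the hyperparameter `c`; this exact
    smoothing form is an assumption based on the abstract and Figure 3.
    """
    p = softmax(logits)
    p_true, p_dummy = p[true_idx], p[dummy_idx]
    # Adaptive weight in (0, 1): larger when the true class dominates,
    # so the attacker focuses on suppressing it first.
    alpha = (p_true + c) / (p_true + p_dummy + 2.0 * c)
    # Weighted negative log-likelihoods, to be *maximized* by the attacker.
    return -(alpha * np.log(p_true) + (1.0 - alpha) * np.log(p_dummy))
```

In a full attack, this scalar would be ascended with PGD-style gradient steps on the input so that the adversarial example escapes the true class without being absorbed by the dummy class.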

Paper Structure

This paper contains 11 sections, 14 equations, 3 figures, 1 table, and 1 algorithm.

Figures (3)

  • Figure 1: Attack effectiveness comparison on CIFAR-10 with $\ell_\infty$ constraint under the PGD-AT+DUCAT defense [wang2024new]. Our DAWA (100 iterations, non-targeted) and DAWA$^{mt}$ (1,000 iterations, combined non-targeted/targeted) achieve $35.60\%$ and $26.42\%$ robust accuracy respectively, outperforming PGD ($60.64\%$), C&W ($71.72\%$), MIFPE ($63.10\%$), and AutoAttack ($56.80\%$).
  • Figure 2: Convergence speed comparison of four attacks (PGD, C&W, MIFPE, and DAWA) against the PGD-AT+DUCAT defense [wang2024new] on CIFAR-10. The gray dashed line represents the AutoAttack robust accuracy (58.61%). The number annotation on the DAWA curve indicates the iteration (3) at which DAWA first surpasses AutoAttack's performance, demonstrating its faster convergence.
  • Figure 3: Ablation study on the hyperparameter $c$ (controlling $\alpha$ smoothing in the loss function) within the DAWA evaluation strategy. The plot illustrates the untargeted attack robustness (measured after 100 iterations) as a function of $\log_{10}(c)$ ranging from $10^{-1.0}$ to $10^{2.0}$, for models trained on CIFAR-10 with ResNet-18 under three strategies: PGD-AT + DUCAT, MART + DUCAT, and Consistency + DUCAT.