Automated Adversarial Discovery for Safety Classifiers

Yash Kumar Lal, Preethi Lahoti, Aradhana Sinha, Yao Qin, Ananth Balashankar

TL;DR

We formalize the task of automated adversarial discovery for safety classifiers: finding new attacks along previously unseen harm dimensions that expose new weaknesses in the classifier.

Abstract

Safety classifiers are critical in mitigating toxicity on online forums such as social media and in chatbots. Still, they continue to be vulnerable to emergent, and often innumerable, adversarial attacks. Traditional automated adversarial data generation methods, however, tend to produce attacks that are not diverse, but rather variations of previously observed harm types. We formalize the task of automated adversarial discovery for safety classifiers: to find new attacks along previously unseen harm dimensions that expose new weaknesses in the classifier. We measure progress on this task along two key axes: (1) adversarial success: does the attack fool the classifier? and (2) dimensional diversity: does the attack represent a previously unseen harm type? Our evaluation of existing attack generation methods on the CivilComments toxicity task reveals their limitations: word perturbation attacks fail to fool classifiers, while prompt-based LLM attacks have more adversarial success but lack dimensional diversity. Even our best-performing prompt-based method finds new successful attacks along unseen harm dimensions only 5% of the time. Automatically finding new harmful dimensions of attack is crucial, and there is substantial headroom for future research on our new task.
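As a rough illustration of the two evaluation axes, the following minimal sketch checks whether a generated attack slips past a toxicity classifier and whether its harm dimension falls outside the set already covered. The helpers `toxicity_classifier` and `harm_dimension_labeler` are hypothetical stand-ins, not the paper's released code:

```python
# Minimal sketch (not the authors' code) of the two evaluation axes.
# `toxicity_classifier` returns a toxicity probability for a comment;
# `harm_dimension_labeler` tags a comment with a harm dimension
# (e.g. "insult", "identity attack"). Both are hypothetical stand-ins.

def is_adversarial_success(seed: str, attack: str, toxicity_classifier) -> bool:
    """The attack succeeds if the seed is flagged as toxic but the
    rewritten attack slips under the classifier's decision threshold."""
    return toxicity_classifier(seed) >= 0.5 and toxicity_classifier(attack) < 0.5


def is_dimensionally_diverse(attack: str, seen_dimensions: set, harm_dimension_labeler) -> bool:
    """The attack is dimensionally diverse if its harm dimension is not
    one of the dimensions already observed in the seed data."""
    return harm_dimension_labeler(attack) not in seen_dimensions
```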

Paper Structure

This paper contains 42 sections, 1 equation, 7 figures, and 5 tables.

Figures (7)

  • Figure 1: For a given user comment, the WordNet approach probabilistically replaces words in the comment with synonyms from WordNet. Polyjuice uses GPT-2 to rewrite the user comment by incorporating various counterfactual types, such as phrase swaps, without altering the comment's parse tree. Our method, Discover-Adapt, aims to generate adversarial examples that may also contain new toxicity types, either by leveraging latent unlabeled dimensions present in the seed comment or by drawing on the LLM's priors. Using this discovered unlabeled dimension, we adapt the input user comment to add an unseen dimension of toxicity. In this example, Discover-Adapt transforms an insult into an identity attack, which is the unseen labeled dimension. Our analysis shows that such successful attacks are hard to generate ($\sim5\%$), and identifies areas for improvement.
  • Figure 2: Examples of user comments in the CivilComments dataset that are annotated with different labeled dimensions of toxicity.
  • Figure 3: Given a seed user comment, we first discover unlabeled dimensions of toxicity, either by prompting an LLM to infer them from the comment itself (in-seed) or by querying its priors for the top unlabeled dimensions that would be present in a comment forum (constitutional). Next, we prompt the LLM to transform the user comment by leveraging that unlabeled dimension in a way that makes the toxicity harder to detect. (A minimal code sketch of this two-step process appears after this figure list.)
  • Figure 4: We present an example of a successful attack that contains a held-out dimension (identity attack) as well as two common failure modes of Discover-Adapt.
  • Figure 5: PaLM2 prompts for the different baselines and for methods of discovering new toxicity subtypes to adapt to.
  • ...and 2 more figures
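The two-step Discover-Adapt loop described in Figures 1 and 3 can be summarized roughly as follows. The prompt wording and the `call_llm` helper below are illustrative placeholders of ours, not the paper's actual PaLM2 prompts (those appear in Figure 5):

```python
# Illustrative sketch of the Discover-Adapt loop; `call_llm` is a
# placeholder for whatever LLM API is available, not the paper's setup.

def discover_dimension(seed_comment: str, mode: str, call_llm) -> str:
    """Step 1: discover an unlabeled toxicity dimension, either from the
    seed comment itself ("in-seed") or from the LLM's priors about what
    appears in comment forums ("constitutional")."""
    if mode == "in-seed":
        prompt = ("What subtle, unlabeled type of toxicity is latent in "
                  f"this comment?\n{seed_comment}")
    else:  # "constitutional"
        prompt = "Name a type of toxicity commonly found in online comment forums."
    return call_llm(prompt)


def adapt_comment(seed_comment: str, dimension: str, call_llm) -> str:
    """Step 2: rewrite the comment to express the discovered dimension
    while making the toxicity harder for a safety classifier to detect."""
    prompt = (f"Rewrite this comment so that it also expresses '{dimension}', "
              f"phrased so the toxicity is hard to detect:\n{seed_comment}")
    return call_llm(prompt)


# Usage (given some call_llm implementation):
# candidate = adapt_comment(seed, discover_dimension(seed, "in-seed", call_llm), call_llm)
```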