Safety Alignment of Large Language Models via Contrasting Safe and Harmful Distributions

Xiaoyun Zhang, Zhengyue Zhao, Wenxuan Shi, Kaidi Xu, Di Huang, Xing Hu

TL;DR

Safety alignment of LLMs is essential, but training-based methods (RLHF, instruction fine-tuning) are costly and may degrade safety after downstream updates. The authors propose Adversarial Contrastive Decoding (ACD), a prompt-based, training-light framework that learns two opposing soft system prompts (Safeguarding Prompt and Adversarial Prompt) via Opposite Prompt Optimization using a small anchor dataset. At inference time, ACD performs contrastive decoding by combining the safe and adversarial logits to steer outputs toward safety without sacrificing generation quality. Across diverse models and red-teaming benchmarks, ACD achieves substantial safety gains, outperforms instruction-based decoding baselines, and remains effective on RLHF-tuned LLMs. The method offers a practical, scalable approach to safety alignment with modest computational overhead.
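
To make the inference-time step concrete, below is a minimal sketch of prompt-based contrastive decoding. It assumes the common contrastive combination $(1+\alpha)\,\text{logits}_\text{SP} - \alpha\,\text{logits}_\text{AP}$, uses plain-text placeholders for what ACD actually learns as soft prompt embeddings, and decodes greedily; the model name and function names are illustrative, not taken from the paper.

```python
# Hypothetical sketch of prompt-based contrastive decoding (greedy, no KV cache).
# Assumptions: text prompts stand in for the learned soft prompts, and the
# logits are combined as (1 + alpha) * logits_SP - alpha * logits_AP.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # placeholder target model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def acd_generate(user_prompt, sp_text, ap_text, alpha=0.5, max_new_tokens=64):
    """Decode by contrasting next-token logits under the safe vs. adversarial prompt."""
    safe_ids = tok(sp_text + user_prompt, return_tensors="pt").input_ids
    adv_ids = tok(ap_text + user_prompt, return_tensors="pt").input_ids
    generated = []
    for _ in range(max_new_tokens):
        with torch.no_grad():
            logits_sp = model(safe_ids).logits[:, -1, :]  # next-token logits, Safeguarding Prompt
            logits_ap = model(adv_ids).logits[:, -1, :]   # next-token logits, Adversarial Prompt
        contrast = (1 + alpha) * logits_sp - alpha * logits_ap  # assumed combination rule
        next_id = contrast.argmax(dim=-1, keepdim=True)         # greedy choice
        if next_id.item() == tok.eos_token_id:
            break
        generated.append(next_id.item())
        safe_ids = torch.cat([safe_ids, next_id], dim=-1)
        adv_ids = torch.cat([adv_ids, next_id], dim=-1)
    return tok.decode(generated, skip_special_tokens=True)
```

A larger $\alpha$ strengthens the contrast (and typically the safety effect) at the risk of over-suppressing benign tokens, which is presumably why the paper examines different $\alpha$ values (Figure 5).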

Abstract

With the widespread application of Large Language Models (LLMs), ensuring their safety and preventing harmful responses has become a significant concern. While current safety-alignment methods based on instruction fine-tuning and Reinforcement Learning from Human Feedback (RLHF) can effectively reduce harmful responses from LLMs, they often require high-quality datasets and heavy computational overhead during model training. Another way to align language models is to modify the logits of output tokens without heavy training. Recent studies have shown that contrastive decoding can enhance the performance of language models by reducing the likelihood of confused tokens. However, these methods require the manual selection of contrastive models or instruction templates, limiting the degree of contrast. To this end, we propose Adversarial Contrastive Decoding (ACD), an optimization-based framework that generates two opposite soft system prompts, the Safeguarding Prompt (SP) and the Adversarial Prompt (AP), for prompt-based contrastive decoding. The SP aims to promote safer outputs, while the AP aims to exploit the harmful parts of the model, providing a strong contrast that aligns the model with safety. ACD only requires lightweight prompt tuning on a rather small anchor dataset, without training the target model. Experiments conducted on a wide range of models and benchmarks demonstrate that the proposed method achieves much better safety performance than previous model-training-free decoding methods without sacrificing the model's original generation ability.
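
As a rough illustration of the "lightweight prompt tuning" step, the sketch below implements a simplified version of Opposite Prompt Optimization: the target model stays frozen, only the two soft-prompt embeddings are updated, and each prompt is fit with an ordinary next-token cross-entropy loss (toward safe anchor responses for the SP, toward harmful anchor responses for the AP). The model name, prompt length, anchor-data format, and loss forms are assumptions for illustration; the paper defines its own objectives $\mathcal{L}_\text{SP}$ and $\mathcal{L}_\text{AP}$.

```python
# Hypothetical sketch of Opposite Prompt Optimization on a small anchor set.
# Assumptions: the target model is frozen, only the soft-prompt embeddings are
# trainable, and each prompt is fit with plain cross-entropy (the paper's
# actual losses L_SP and L_AP may differ).
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"   # placeholder target model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)                     # the target model is never trained

embed = model.get_input_embeddings()
n_virtual, dim = 20, embed.embedding_dim        # assumed soft-prompt length
sp_emb = torch.nn.Parameter(torch.randn(1, n_virtual, dim) * 0.02)  # Safeguarding Prompt
ap_emb = torch.nn.Parameter(torch.randn(1, n_virtual, dim) * 0.02)  # Adversarial Prompt
opt = torch.optim.AdamW([sp_emb, ap_emb], lr=1e-3)

def prompt_loss(soft_prompt, query, response):
    """Cross-entropy of the response tokens, conditioned on soft prompt + query."""
    q_ids = tok(query, return_tensors="pt").input_ids
    r_ids = tok(response, return_tensors="pt").input_ids
    inputs = torch.cat([soft_prompt, embed(q_ids), embed(r_ids)], dim=1)
    logits = model(inputs_embeds=inputs).logits
    start = soft_prompt.shape[1] + q_ids.shape[1]                 # first response position
    pred = logits[:, start - 1 : start - 1 + r_ids.shape[1], :]   # shift by one for next-token prediction
    return F.cross_entropy(pred.reshape(-1, pred.size(-1)), r_ids.reshape(-1))

# Tiny illustrative anchor set: (query, safe response, harmful-style response).
anchor_data = [
    ("How do I pick a lock?",
     "I can't help with that. If you're locked out, please contact a licensed locksmith.",
     "Sure, here is how to pick a lock:"),
]

for query, safe_resp, harm_resp in anchor_data:
    loss_sp = prompt_loss(sp_emb, query, safe_resp)   # push SP toward safe completions
    loss_ap = prompt_loss(ap_emb, query, harm_resp)   # push AP toward harmful completions
    opt.zero_grad()
    (loss_sp + loss_ap).backward()
    opt.step()
```

At inference, the optimized sp_emb and ap_emb would be prepended (as embeddings) to the user query for the contrastive decoding step sketched above.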

Paper Structure

This paper contains 44 sections, 12 equations, 5 figures, 19 tables.

Figures (5)

  • Figure 1: Comparison of (a) decoding with a manual safe prompt; (b) decoding with opposite-prompt Instructive Decoding; and (c) decoding with Adversarial Contrastive Decoding.
  • Figure 2: Framework of Opposite Prompt Optimization. The Safeguarding Prompt is initialized with a manual safe prompt, and its embedding is then optimized with the loss $\mathcal{L}_\text{SP}$. Similarly, the Adversarial Prompt is optimized with the loss $\mathcal{L}_\text{AP}$.
  • Figure 3: Framework of Prompt-based Contrastive Decoding.
  • Figure 4: HLR of Llama-2-uncensored-7b and Llama-3-uncensored-8b with different prompts on three benchmarks.
  • Figure 5: HLR of Llama-2-uncensored-7b and Llama-3-uncensored-8b with ACD under different values of $\alpha$ on three benchmarks.