Safety Alignment of Large Language Models via Contrasting Safe and Harmful Distributions
Xiaoyun Zhang, Zhengyue Zhao, Wenxuan Shi, Kaidi Xu, Di Huang, Xing Hu
TL;DR
Safety alignment of LLMs is essential, but training-based methods (RLHF, instruction fine-tuning) are costly and their safety gains may degrade after downstream updates. The authors propose Adversarial Contrastive Decoding (ACD), a prompt-based, training-light framework that learns two opposing soft system prompts, a Safeguarding Prompt and an Adversarial Prompt, via Opposite Prompt Optimization on a small anchor dataset. At inference time, ACD performs contrastive decoding, combining the safe and adversarial logits to steer outputs toward safety without sacrificing generation quality. Across diverse models and red-teaming benchmarks, ACD achieves substantial safety gains, outperforms instruction-based decoding baselines, and remains effective on RLHF-tuned LLMs. The method offers a practical, scalable approach to safety alignment with modest computational overhead.
Abstract
With the widespread application of Large Language Models (LLMs), ensuring their safety and preventing harmful responses has become a significant concern. While current safety-alignment methods based on instruction fine-tuning and Reinforcement Learning from Human Feedback (RLHF) can effectively reduce harmful responses from LLMs, they often require high-quality datasets and heavy computational overhead during model training. An alternative way to align language models is to modify the output token logits without heavy training. Recent studies have shown that contrastive decoding can enhance the performance of language models by reducing the likelihood of confused tokens. However, these methods require the manual selection of contrastive models or instruction templates, which limits the degree of contrast. To this end, we propose Adversarial Contrastive Decoding (ACD), an optimization-based framework that generates two opposite soft system prompts, the Safeguarding Prompt (SP) and the Adversarial Prompt (AP), for prompt-based contrastive decoding. The SP promotes safer outputs, while the AP exploits the harmful tendencies of the model, providing a strong contrast for aligning the model with safety. ACD only requires lightweight prompt tuning on a rather small anchor dataset, without training the target model itself. Experiments conducted on a wide range of models and benchmarks demonstrate that the proposed method achieves substantially better safety performance than previous training-free decoding methods, without sacrificing the model's original generation ability.
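To illustrate the core decoding step, the sketch below combines next-token logits from an SP-conditioned run and an AP-conditioned run. The combination rule (a standard contrastive-decoding form) and the coefficient `alpha` are assumptions for illustration; the paper's exact formulation may differ, and real use would read logits from two forward passes of the same model under the two soft prompts.

```python
def acd_next_token_logits(safe_logits, adv_logits, alpha=1.0):
    """Contrast the Safeguarding-Prompt (SP) logits against the
    Adversarial-Prompt (AP) logits.

    Assumed combination rule: amplify the safe distribution in the
    direction that moves it away from the adversarial one,
    (1 + alpha) * safe - alpha * adv.  `alpha` controls the contrast.
    """
    return [(1 + alpha) * s - alpha * a
            for s, a in zip(safe_logits, adv_logits)]

def argmax(xs):
    return max(range(len(xs)), key=xs.__getitem__)

# Toy 3-token vocabulary: under the SP the model slightly prefers
# token 2, and under the AP it strongly prefers token 2 (the "harmful"
# continuation).  The contrast suppresses token 2 and flips the choice
# to token 0 (the "safe" continuation).
safe = [1.8, 1.0, 2.0]
adv = [0.5, 1.0, 3.0]

combined = acd_next_token_logits(safe, adv, alpha=1.0)
print(argmax(safe), argmax(adv), argmax(combined))  # → 2 2 0
```

Greedy decoding on the raw SP logits alone would still pick the harmful token here; subtracting the AP direction is what provides the safety signal.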
