SafeAligner: Safety Alignment against Jailbreak Attacks via Response Disparity Guidance

Caishuang Huang, Wanxu Zhao, Rui Zheng, Huijie Lv, Wenyu Zhan, Shihan Dou, Sixian Li, Xiao Wang, Enyu Zhou, Junjie Ye, Yuming Yang, Tao Gui, Qi Zhang, Xuanjing Huang

TL;DR

SafeAligner tackles jailbreak vulnerabilities in LLMs by introducing a decoding-time safety mechanism that leverages the disparity between a safety-oriented Sentinel model and a riskier Intruder model. The method comprises constructing data with opposite safety tendencies, LoRA-based fine-tuning of the two internal models, and a Response Difference Formula (RDF) that updates the external (target) model's token probabilities during inference using a Response Difference Vector (RDV). Formally, $P_{RDV}^{(n)}(x \mid x_{<n}) = P_S^{(n)}(x \mid x_{<n}) - P_I^{(n)}(x \mid x_{<n})$ and $P_{RDF}^{(n)}(x \mid x_{<n}) = (1-\alpha)\, P_E^{(n)}(x \mid x_{<n}) + \alpha\, P_{RDV}^{(n)}(x \mid x_{<n})$, with the final distribution $P^{(n)}(x \mid x_{<n}) = \text{softmax}(P_{RDF}^{(n)}(x \mid x_{<n}))$. Empirical results across multiple open-source LLMs and jailbreak techniques demonstrate that SafeAligner increases the likelihood of safe tokens while reducing harmful ones, with only a minimal sacrifice of general capability and modest time overhead, underscoring its practical, cost-effective defense potential. The work also provides a safety-alignment dataset and ablation analyses illustrating how model scale and the balancing parameter $\alpha$ influence the defense. Overall, SafeAligner offers a robust decoding-time defense that generalizes across models and attack types while preserving utility.
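
The per-step blend above is simple enough to sketch directly. The following is a minimal NumPy illustration of one RDF decoding step, assuming each of the three models exposes a next-token probability distribution over a shared vocabulary; function and variable names here are illustrative, not from the paper's released code.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    """Numerically stable softmax."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def rdf_step(p_external: np.ndarray,
             p_sentinel: np.ndarray,
             p_intruder: np.ndarray,
             alpha: float = 0.6) -> np.ndarray:
    """One RDF update: blend the external model's distribution with the
    Response Difference Vector (sentinel minus intruder), then renormalize.

    Implements:
        P_RDV = P_S - P_I
        P_RDF = (1 - alpha) * P_E + alpha * P_RDV
        P     = softmax(P_RDF)
    """
    p_rdv = p_sentinel - p_intruder                    # Response Difference Vector
    p_rdf = (1.0 - alpha) * p_external + alpha * p_rdv # weighted blend
    return softmax(p_rdf)                              # final next-token distribution

# Toy example over a 5-token vocabulary: the sentinel favors token 0
# (a "safe" token) while the intruder favors token 4 (a "harmful" one).
p_e = np.array([0.20, 0.20, 0.20, 0.20, 0.20])  # external model: uniform
p_s = np.array([0.60, 0.10, 0.10, 0.10, 0.10])  # sentinel: prefers token 0
p_i = np.array([0.10, 0.10, 0.10, 0.10, 0.60])  # intruder: prefers token 4
print(rdf_step(p_e, p_s, p_i, alpha=0.6))       # mass shifts toward token 0
```

With $\alpha = 0.6$, probability mass moves from the intruder-preferred token toward the sentinel-preferred one, which is exactly the "increase safe tokens, suppress harmful ones" effect the paper reports.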

Abstract

As the development of large language models (LLMs) rapidly advances, securing these models effectively without compromising their utility has become a pivotal area of research. However, current defense strategies against jailbreak attacks (i.e., efforts to bypass security protocols) often suffer from limited adaptability, restricted general capability, and high cost. To address these challenges, we introduce SafeAligner, a methodology implemented at the decoding stage to fortify defenses against jailbreak attacks. We begin by developing two specialized models: the Sentinel Model, which is trained to foster safety, and the Intruder Model, designed to generate riskier responses. SafeAligner leverages the disparity in security levels between the responses from these models to differentiate between harmful and beneficial tokens, effectively guiding the safety alignment by altering the output token distribution of the target model. Extensive experiments show that SafeAligner can increase the likelihood of beneficial tokens, while reducing the occurrence of harmful ones, thereby ensuring secure alignment with minimal loss to generality.
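
To make the decoding-stage intervention concrete, here is a hedged sketch of how the per-step blend could drive greedy generation with three causal LMs via Hugging Face transformers. The Sentinel and Intruder paths are placeholders (the paper's checkpoints are LoRA-tuned variants and not reproduced here), `safealigner_generate` is a hypothetical helper name, and the sketch assumes all three models share one tokenizer and vocabulary, as the Qwen1.5 family used in the experiments does.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def safealigner_generate(prompt: str, alpha: float = 0.6,
                         max_new_tokens: int = 64) -> str:
    # External (target) model; Sentinel/Intruder paths are placeholders.
    tok = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B-Chat")
    external = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-7B-Chat").eval()
    sentinel = AutoModelForCausalLM.from_pretrained("path/to/sentinel").eval()
    intruder = AutoModelForCausalLM.from_pretrained("path/to/intruder").eval()

    ids = tok(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        with torch.no_grad():
            # Next-token distributions from each model at the current prefix.
            p_e = torch.softmax(external(ids).logits[0, -1], dim=-1)
            p_s = torch.softmax(sentinel(ids).logits[0, -1], dim=-1)
            p_i = torch.softmax(intruder(ids).logits[0, -1], dim=-1)
        p_rdf = (1 - alpha) * p_e + alpha * (p_s - p_i)  # RDF blend
        p = torch.softmax(p_rdf, dim=-1)                 # renormalize
        next_id = torch.argmax(p).view(1, 1)             # greedy pick
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tok.eos_token_id:
            break
    return tok.decode(ids[0], skip_special_tokens=True)
```

Note the cost profile this implies: each step runs three forward passes instead of one, which matches the paper's characterization of the overhead as modest rather than free.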

Paper Structure

This paper contains 31 sections, 3 equations, 4 figures, and 8 tables.

Figures (4)

  • Figure 1: Overview of SafeAligner.
  • Figure 2: Ablation analysis of the parameter $\alpha$ with external models at different scales on Qwen1.5-Chat. The internal model was fixed at 0.5B, the external model size was increased sequentially from 0.5B to 7B, and $\alpha$ was increased from 0.3 to 0.8, where $\alpha = 0$ is equivalent to using the external model directly. The safety score and general score are normalized.
  • Figure 3: Internal model-scale ablation analysis on Qwen1.5-Chat. The external model is fixed at 7B, with the internal model size increasing sequentially from 0.5B to 7B. We set $\alpha$ to 0.6 for all scales.
  • Figure 4: External model-scale ablation analysis on Qwen1.5-Chat. The internal model is fixed at 0.5B, with the external model size increasing sequentially from 0.5B to 7B. We set $\alpha$ to 0.6 for all scales.