Robust Multimodal Safety via Conditional Decoding

Anurag Kumar, Raghuveer Peri, Jon Burnsky, Alexandru Nelus, Rohit Paturi, Srikanth Vishnubhotla, Yanjun Qi

Abstract

Multimodal large language models (MLLMs) often experience degraded safety alignment when harmful queries exploit cross-modal interactions. Models aligned on text alone show a higher rate of successful attacks when extended to two or more modalities. In this work, we propose a simple conditional decoding strategy, CASA (Classification Augmented with Safety Attention), which utilizes the internal representations of MLLMs to predict a binary safety token before response generation. We introduce a novel safety attention module designed to enhance the model's ability to detect malicious queries. Our design ensures robust safety alignment without relying on any external classifier or auxiliary head, and without modality-specific safety fine-tuning. On diverse benchmarks such as MM-SafetyBench, JailbreakV-28k, and adversarial audio tests, CASA lowers the average attack success rate by more than 97% across modalities and attack types. Our empirical evaluations also show that CASA maintains strong utility on benign inputs, a result validated through both automated and human evaluations (via 13 trained annotators). Together, these results highlight CASA as a simple and generalizable framework for improving multimodal LLM safety.
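
To make the decoding flow concrete, the minimal sketch below (not the authors' implementation) illustrates the idea of predicting a binary safety token before response generation. The token ids SAFE_ID/UNSAFE_ID, the refusal string, and the callables safety_logits_fn and generate_fn are hypothetical stand-ins; only the conditional flow (refuse when the unsafe token dominates, otherwise decode normally) is taken from the abstract's description.

```python
import torch

SAFE_ID, UNSAFE_ID = 0, 1          # hypothetical ids for the binary safety token
REFUSAL = "I can't help with that."

def conditional_decode(safety_logits_fn, generate_fn, prompt_ids):
    """Predict the binary safety token first; generate a response only if safe.

    safety_logits_fn: prompt_ids -> vocabulary logits at the safety timestep
    generate_fn:      prompt_ids -> decoded response string
    """
    with torch.no_grad():
        logits = safety_logits_fn(prompt_ids)                      # shape (vocab_size,)
    p_unsafe = torch.softmax(logits[[SAFE_ID, UNSAFE_ID]], dim=-1)[1]
    if p_unsafe > 0.5:
        return REFUSAL                                             # refuse before decoding anything
    return generate_fn(prompt_ids)

# Toy usage with stand-in callables (no real MLLM involved):
if __name__ == "__main__":
    vocab_size = 8
    fake_logits = lambda ids: torch.randn(vocab_size)
    fake_generate = lambda ids: "benign answer"
    print(conditional_decode(fake_logits, fake_generate, torch.tensor([[1, 2, 3]])))
```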

Paper Structure

This paper contains 31 sections, 6 equations, 5 figures, and 8 tables.

Figures (5)

  • Figure 1: The plot visualizes the top two components of a PCA applied to the last-hidden-layer features obtained from the pre-trained and safety-aligned Qwen_2.5_Omni (3B) model. The safe and unsafe inputs were taken from the Harm-questions test set.
  • Figure 2: Overall architecture of CASA. The LLM is trained to produce a safety token before response generation. Temporally aggregated cross-attention scores $v_s$, computed between the prompt hidden states ($h_p$) and the query embedding $E_s^q$ (derived from the frozen pretrained model), are used to scale the safety-token logit values at timestep $t_{safety}$. B refers to the batch size during decoding. (A minimal code sketch of this computation follows the figure list.)
  • Figure 3: Utility on the utility-text dataset. (a) Percentage of responses judged by an LLM-as-a-judge (LLMaJ) to be similar to or better than the pre-trained model's. CASA is the best-performing model on 3/5 dimensions, with competitive utility on the others. (b) Human preferences comparing the pre-trained and CASA models' responses. CASA has a higher or equal preference compared to the pre-trained model.
  • Figure 4: The learned attention values $v_s$ from the safety attention layer during training, predicting the harmfulness of benign and harmful queries. We observe that the values eventually approach 1 for harmful queries and approach 0 for benign queries.
  • Figure 5: Qualitative examples showcasing the effectiveness of the proposed method in blocking harmful queries with text embedded within the image (from the MM-SB dataset).
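
Figure 2's caption outlines the safety-attention computation: cross-attention scores between the prompt hidden states $h_p$ and a safety query embedding $E_s^q$ are temporally aggregated into a score $v_s$ that scales the safety-token logits at $t_{safety}$. The sketch below is only an assumed reading of that caption: the scaled-dot-product form, the softmax-weighted aggregation, and the (1 + v_s) scaling rule are placeholders, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def safety_attention_score(h_p, e_q):
    """Toy stand-in for the safety attention of Figure 2.

    h_p: prompt hidden states, shape (B, T, d)
    e_q: safety query embedding, shape (d,)
    Returns v_s in [0, 1] with shape (B,), one score per prompt in the batch.
    """
    d = h_p.size(-1)
    scores = h_p @ e_q / d ** 0.5             # cross-attention scores, shape (B, T)
    weights = F.softmax(scores, dim=-1)       # attention weights over prompt positions
    pooled = (weights * scores).sum(dim=-1)   # temporal aggregation (weighted mean), shape (B,)
    return torch.sigmoid(pooled)              # squash to [0, 1]; values near 1 flag harmful queries

def scale_safety_logits(logits, v_s, unsafe_id):
    """Assumed scaling rule: upweight the unsafe-token logit at t_safety by (1 + v_s)."""
    scaled = logits.clone()                   # logits: shape (B, vocab_size)
    scaled[:, unsafe_id] = scaled[:, unsafe_id] * (1.0 + v_s)
    return scaled

# Toy usage with random tensors:
if __name__ == "__main__":
    B, T, d, vocab = 2, 5, 16, 10
    v_s = safety_attention_score(torch.randn(B, T, d), torch.randn(d))
    print(scale_safety_logits(torch.randn(B, vocab), v_s, unsafe_id=3))
```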