Diagnosing and Repairing Unsafe Channels in Vision-Language Models via Causal Discovery and Dual-Modal Safety Subspace Projection

Jinhu Fu, Yihang Lou, Qingyi Si, Shudong Zhang, Yan Bai, Sen Su

Abstract

Large Vision-Language Models (LVLMs) have achieved impressive performance across multimodal understanding and reasoning tasks, yet their internal safety mechanisms remain opaque and poorly controlled. In this work, we present a comprehensive framework for diagnosing and repairing unsafe channels within LVLMs (CARE). We first perform causal mediation analysis to identify neurons and layers that are causally responsible for unsafe behaviors. Based on these findings, we introduce a dual-modal safety subspace projection method that learns generalized safety subspaces for both visual and textual modalities through generalized eigen-decomposition between benign and malicious activations. During inference, activations are dynamically projected toward these safety subspaces via a hybrid fusion mechanism that adaptively balances visual and textual corrections, effectively suppressing unsafe features while preserving semantic fidelity. Extensive experiments on multiple safety benchmarks demonstrate that our causal-subspace repair framework significantly enhances safety robustness without degrading general multimodal capabilities, outperforming prior activation steering and alignment-based baselines. Additionally, our method exhibits good transferability, defending against unseen attacks.
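To make the repair step concrete, the following is a minimal sketch of the inference-time dual-modal projection described above. It assumes the learned safety subspaces are given as orthonormal bases `U_v` and `U_t` of malicious directions for the visual and textual modalities, and it collapses the paper's adaptive hybrid fusion into a single scalar weight `alpha`; all names here are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's code): suppress unsafe features by removing
# the component of each activation that lies in a modality's learned malicious
# subspace, blending the visual and textual corrections with a scalar weight.
import torch

def dual_modal_repair(h: torch.Tensor,
                      U_v: torch.Tensor,   # (d, k) orthonormal visual malicious directions
                      U_t: torch.Tensor,   # (d, k) orthonormal textual malicious directions
                      alpha: float = 0.5) -> torch.Tensor:
    """Project activations h of shape (batch, d) away from both subspaces.

    `alpha` is a hypothetical fixed fusion weight in [0, 1]; the paper
    describes an adaptive mechanism, which a constant only approximates.
    """
    corr_v = (h @ U_v) @ U_v.T    # component inside the visual malicious subspace
    corr_t = (h @ U_t) @ U_t.T    # component inside the textual malicious subspace
    return h - alpha * corr_v - (1.0 - alpha) * corr_t
```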

Paper Structure

This paper contains 43 sections, 5 theorems, 46 equations, 15 figures, and 7 tables.

Key Result

Theorem 8.1 (Malicious Subspace Identification)

Let $\mathbf{C}_b$ and $\mathbf{C}_m$ be the covariance matrices of centered benign and malicious activations, respectively. The generalized eigenvalue problem identifies the directions $\mathbf{u}_i$ that maximize the ratio of malicious-to-benign variance:

$$\mathbf{u}_i \;=\; \arg\max_{\mathbf{u}} \frac{\mathbf{u}^{\top}\mathbf{C}_m\,\mathbf{u}}{\mathbf{u}^{\top}\mathbf{C}_b\,\mathbf{u}}, \qquad \text{equivalently} \qquad \mathbf{C}_m\,\mathbf{u}_i = \lambda_i\,\mathbf{C}_b\,\mathbf{u}_i .$$
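In practice, these directions can be computed with a standard symmetric-definite generalized eigensolver. The sketch below uses `scipy.linalg.eigh`, which accepts a second matrix and solves $\mathbf{C}_m\mathbf{u} = \lambda\,\mathbf{C}_b\mathbf{u}$ directly; this is a textbook recipe under the stated assumptions, not necessarily the paper's exact procedure.

```python
# Sketch: recover the top-k "malicious" directions as the generalized
# eigenvectors of (C_m, C_b) with the largest eigenvalues, i.e. the
# directions with maximal malicious-to-benign variance ratio.
import numpy as np
from scipy.linalg import eigh

def malicious_subspace(A_benign: np.ndarray,
                       A_malicious: np.ndarray,
                       k: int,
                       eps: float = 1e-6) -> np.ndarray:
    """A_benign, A_malicious: (n_samples, d) centered activation matrices.

    Returns a (d, k) matrix whose columns span the k most malicious
    directions. Note the columns are C_b-orthonormal rather than
    Euclidean-orthonormal, as is standard for generalized eigenproblems.
    """
    d = A_benign.shape[1]
    C_b = A_benign.T @ A_benign / len(A_benign) + eps * np.eye(d)  # regularize for definiteness
    C_m = A_malicious.T @ A_malicious / len(A_malicious)
    w, U = eigh(C_m, C_b)   # eigenvalues in ascending order; largest ratios come last
    return U[:, -k:]
```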

Figures (15)

  • Figure 1: We introduce the diagnosing-and-repairing framework for VLM safety. By precisely identifying safety-critical components, our method avoids disrupting unrelated model abilities. Leveraging both visual and textual token attribution, we construct a dual-modal safety subspace and project activations onto its safe direction, enabling targeted, training-free correction.
  • Figure 2: Causal tracing of safety-relevant components via layer-wise blocking (a minimal blocking sketch follows this list).
  • Figure 3: Quantifying how well each layer separates safe from unsafe inputs, measured with clustering metrics.
  • Figure 4: Changes in ASR (attack success rate) when blocking the FFN and MHSA components.
  • Figure 5: Comparison of pairwise correlations between MHSA and FFN.
  • ...and 10 more figures
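The layer-wise blocking used for causal tracing (Figures 2 and 4) can be approximated with forward hooks. The sketch below zeroes the output of a chosen FFN or MHSA module and re-evaluates the attack success rate; `eval_asr` is a hypothetical user-supplied evaluation loop, and zero-ablation is one common intervention choice (the paper's exact blocking method may differ).

```python
# Sketch: causal tracing by component blocking. Zero out one transformer
# component's output via a PyTorch forward hook, then measure how the
# attack success rate (ASR) changes relative to the unmodified model.
import torch

def block_component(module: torch.nn.Module):
    """Install a hook that replaces `module`'s output with zeros."""
    def hook(mod, inputs, output):
        if isinstance(output, tuple):  # e.g. attention modules returning (hidden, ...)
            return (torch.zeros_like(output[0]),) + output[1:]
        return torch.zeros_like(output)
    return module.register_forward_hook(hook)

def asr_under_block(model, component: torch.nn.Module, eval_asr) -> float:
    """Measure ASR with `component` blocked; `eval_asr(model)` is assumed
    to run malicious prompts and return the attack success rate."""
    handle = block_component(component)
    try:
        return eval_asr(model)
    finally:
        handle.remove()  # always restore the original model
```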

Theorems & Definitions (10)

  • Theorem 8.1: Malicious Subspace Identification
  • Proof
  • Theorem 8.2: Malicious Component Suppression
  • Proof
  • Theorem 8.3: Cross-Modal Relevance Measure
  • Proof
  • Theorem 8.4: Optimal Modality Weighting
  • Proof
  • Corollary 8.5: Safety Convergence
  • Proof