
Position: Explainable AI is Causality in Disguise

Amir-Hossein Karimi

Abstract

The demand for Explainable AI (XAI) has triggered an explosion of methods, producing a landscape so fragmented that we now rely on surveys of surveys. Yet, fundamental challenges persist: conflicting metrics, failed sanity checks, and unresolved debates over robustness and fairness. The only consensus on how to achieve explainability is a lack of one. This has led many to point to the absence of a ground truth for defining "the" correct explanation as the main culprit. This position paper posits that the persistent discord in XAI arises not from an absent ground truth but from a ground truth that exists, albeit as an elusive and challenging target: the causal model that governs the relevant system. By reframing XAI queries about data, models, or decisions as causal inquiries, we prove the necessity and sufficiency of causal models for XAI. We contend that without this causal grounding, XAI remains unmoored. Ultimately, we encourage the community to converge around advanced concept and causal discovery to escape this entrenched uncertainty.

Paper Structure

This paper contains 31 sections, 4 theorems, 3 equations, 1 figure, 1 table.

Key Result

Theorem 4.2

Let $\mathcal{M} = \langle \mathbf{U}, \mathbf{V}, \mathbf{F}, P(\mathbf{U}) \rangle$ be the unique true Structural Causal Model of the data-generating process. Under standard assumptions (acyclicity, no unmeasured confounders, well-defined exogenous variables), having full access to $\mathcal{M}$ is sufficient to provide accurate and complete answers to the XAI queries Q1-6 (Definition 4.1).
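The theorem turns on the fact that an SCM answers all three rungs of Pearl's Ladder of Causation. A minimal sketch of this, using a toy two-variable SCM whose structural equations are hypothetical examples (not taken from the paper), might look as follows:

```python
import random

def sample_scm(n, do_x=None):
    """Draw n samples from a toy SCM with exogenous noise U_x, U_y and
    structural equations X := U_x, Y := 2*X + U_y.
    If do_x is given, the intervention do(X = do_x) replaces X's equation."""
    data = []
    for _ in range(n):
        u_x = random.gauss(0, 1)           # exogenous noise for X
        u_y = random.gauss(0, 1)           # exogenous noise for Y
        x = u_x if do_x is None else do_x  # structural equation (or intervention)
        y = 2 * x + u_y                    # structural equation for Y
        data.append((x, y))
    return data

def counterfactual_y(x_obs, y_obs, x_cf):
    """Counterfactual query via abduction-action-prediction:
    recover U_y from the observed (x, y), then re-evaluate Y under X = x_cf."""
    u_y = y_obs - 2 * x_obs  # abduction: infer the exogenous term
    return 2 * x_cf + u_y    # action + prediction

# Rung 1 (observational): samples from P(X, Y).
obs = sample_scm(1000)
# Rung 2 (interventional): samples from P(Y | do(X = 1)).
intv = sample_scm(1000, do_x=1.0)
# Rung 3 (counterfactual): what Y would have been had X been 0,
# given the factual observation (x, y) = (1, 2.5).
print(counterfactual_y(1.0, 2.5, 0.0))  # -> 0.5
```

Only the SCM's structural equations and noise terms make the third query answerable: observational or even interventional data alone cannot recover the unit-level exogenous term that abduction pins down.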

Figures (1)

  • Figure 1: Core methods in XAI for explaining an ML model ($f : X \rightarrow Y$) are categorized by purpose into data-based, model-based, and decision-based questions. By mapping these directly onto Pearl's Ladder of Causation, we reveal that solving XAI fundamentally requires answering causal inquiries.

Theorems & Definitions (7)

  • Definition 3.1: Structural Causal Model (SCM)
  • Definition 3.2: Causal Graph
  • Definition 3.3: Observational, Interventional, and Counterfactual Queries
  • Definition 3.4: Causal Discovery
  • Definition 4.1: Accurate and Complete Answers to Q1-6
  • Theorem 4.2: Sufficiency of the True SCM for XAI
  • Theorem 4.4: Necessity of the True SCM for XAI