COMPASS: Context-Modulated PID Attention Steering System for Hallucination Mitigation

Kenji Sahay, Snigdha Pandya, Rohan Nagale, Anna Lin, Shikhar Shiromani, Kevin Zhu, Dev Sunishchal

TL;DR

COMPASS tackles contextual hallucinations in LLMs by embedding a real-time feedback loop inside decoding that modulates attention heads based on a Context Reliance Score and a token-level hallucination detector. The system uses a decoding-time, pre-softmax context bias on selected heads guided by a PID controller, enabling grounding in evidence without retraining or reruns. Across multiple models and benchmarks, COMPASS yields 2.8%–5.8% absolute reductions in hallucinations and improves grounding metrics such as context overlap, with larger models benefiting more. The work demonstrates that closed-loop attention control is a promising, interpretable, and low-overhead approach to enhancing factual fidelity in generation.
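The precise bias form, controller gains, and head-selection rule are not given in this summary; the sketch below is a minimal illustration of the idea, assuming a standard discrete PID update driven by the Context Reliance Score error, whose output is added as a pre-softmax bias on context-token positions for a chosen set of heads. The names (`PIDController`, `apply_context_bias`) and all gain values are hypothetical, not the paper's implementation.

```python
import torch

class PIDController:
    """Discrete PID controller (hypothetical gains, not the paper's tuned values)."""
    def __init__(self, kp=1.0, ki=0.1, kd=0.05, setpoint=0.7):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint      # target Context Reliance Score
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, crs: float) -> float:
        error = self.setpoint - crs   # positive when generation under-relies on context
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def apply_context_bias(attn_logits, context_mask, bias, head_indices):
    """Add a pre-softmax bias toward context tokens on selected heads.

    attn_logits:  (num_heads, query_len, key_len) attention scores before softmax
    context_mask: (key_len,) bool tensor, True at evidence/context positions
    bias:         scalar output of the PID controller
    head_indices: heads chosen for steering (selection criterion assumed)
    """
    steered = attn_logits.clone()
    steered[head_indices] += bias * context_mask.float()
    return steered
```

At each decoding step one would measure the CRS, call `update` to obtain the bias, and apply it before the softmax, so the loop closes within a single generation pass rather than requiring reruns.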

Abstract

Large language models (LLMs) often generate fluent but factually incorrect statements despite having access to relevant evidence, a failure mode rooted in how they allocate attention between contextual and parametric knowledge. Understanding and steering this internal behavior is key both for trustworthy deployment and for scientific interpretability of model mechanisms. We introduce COMPASS (Context-Modulated PID Attention Steering System), a lightweight, interpretable control framework that embeds a model-based feedback loop directly within decoding. COMPASS quantifies context reliance via a transparent metric, the Context Reliance Score (CRS), which serves as an online probe of how attention heads ground generation in evidence. Using this interpretable signal, a PID controller dynamically modulates attention heads to maintain factual consistency without retraining or multi-pass decoding. Across benchmarks (HotpotQA, XSum, HaluEval, RAGTruth), COMPASS consistently reduces contextual hallucination rates (2.8 to 5.8 percent absolute) while revealing how distinct attention heads contribute to evidence alignment. These results highlight feedback-driven interpretability as a pathway toward scientific understanding of LLM behavior.
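The exact CRS definition is not reproduced in this summary; a natural proxy, and the one assumed in this sketch, is the fraction of a monitored head's post-softmax attention mass that falls on context (evidence) tokens at the current decoding step. The function name `context_reliance_score` is illustrative, not the paper's API.

```python
import torch

def context_reliance_score(attn_weights, context_mask, head_indices=None):
    """Assumed CRS proxy: share of attention mass placed on context tokens.

    attn_weights: (num_heads, key_len) post-softmax attention of the current
                  query token over all key positions
    context_mask: (key_len,) bool tensor, True at evidence/context positions
    head_indices: optional subset of heads to monitor; defaults to all heads
    """
    if head_indices is not None:
        attn_weights = attn_weights[head_indices]
    mass_on_context = (attn_weights * context_mask.float()).sum(dim=-1)
    total_mass = attn_weights.sum(dim=-1).clamp_min(1e-9)
    return (mass_on_context / total_mass).mean().item()
```

A score near 1 indicates generation that attends mostly to the provided evidence; a falling score is the signal the controller acts on.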

Paper Structure

This paper contains 27 sections, 7 equations, 1 figure, 2 tables, 2 algorithms.

Figures (1)

  • Figure 1: Single-stream control loop and inputs.