Reliable Control-Point Selection for Steering Reasoning in Large Language Models

Haomin Zhuang, Hojun Yoo, Xiaonan Luo, Kehan Guo, Xiangliang Zhang

Abstract

Steering vectors offer a training-free mechanism for controlling reasoning behaviors in large language models, but constructing effective vectors requires identifying genuine behavioral signals in the model's hidden states. For behaviors that can be toggled via prompts, this is straightforward. However, many reasoning behaviors -- such as self-reflection -- emerge spontaneously and resist prompt-level control. Current methods detect these behaviors through keyword matching in chain-of-thought traces, implicitly assuming that every detected boundary encodes a genuine behavioral signal. We show that this assumption is overwhelmingly wrong: across 541 keyword-detected boundaries, 93.3\% are behaviorally unstable, failing to reproduce the detected behavior under re-generation from the same prefix. We develop a probabilistic model that formalizes intrinsic reasoning behaviors as stochastic events with context-dependent trigger probabilities, and show that unstable boundaries dilute the steering signal. Guided by this analysis, we propose stability filtering, which retains only boundaries where the model consistently reproduces the target behavior. Combined with a content-subspace projection that removes residual question-specific noise, our method achieves 0.784 accuracy on MATH-500 (+5.0 over the strongest baseline). The resulting steering vectors transfer across models in the same architecture family without re-extraction, improving Nemotron-Research-Reasoning-1.5B (+5.0) and DeepScaleR-1.5B-Preview (+6.0). Code is available at https://github.com/zhmzm/stability-steering.
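To make the stability-filtering idea from the abstract concrete, the following is a minimal Python sketch of the re-generation probe. Everything here is illustrative: `generate`, the keyword detector, the keyword list, and the sample count are hypothetical stand-ins, not the paper's released implementation (see the linked repository for that).

```python
REFLECTION_KEYWORDS = ("wait", "let me double-check", "re-examine")  # illustrative, not the paper's list

def shows_target_behavior(continuation: str) -> bool:
    """Hypothetical keyword detector: does the continuation exhibit self-reflection?"""
    text = continuation.lower()
    return any(kw in text for kw in REFLECTION_KEYWORDS)

def stability_score(generate, prefix: str, n_samples: int = 10) -> float:
    """Re-generate n_samples continuations from a boundary's prefix and measure
    how often the target behavior reappears (the s(b) of Figure 3)."""
    hits = sum(shows_target_behavior(generate(prefix)) for _ in range(n_samples))
    return hits / n_samples

def filter_stable_boundaries(generate, boundaries, tau: float = 0.8):
    """Keep only boundaries whose prefixes reliably re-trigger the behavior.
    `boundaries` holds (prefix, hidden_state) pairs; tau = 0.8 follows Figure 3."""
    return [(p, h) for (p, h) in boundaries if stability_score(generate, p) >= tau]
```

The 0.8 threshold mirrors the $\tau{=}0.8$ operating point reported in Figure 3; boundaries below it are the unstable 93.3% that, per the abstract, dilute the steering signal.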

Figures (4)

  • Figure 1: Extrinsic behaviors are steered via prompt contrast (left). Intrinsic behaviors emerge spontaneously and resist prompting (middle-left). Keyword matching detects them from traces (middle-right), but most detections are unstable under re-generation (right; data from §\ref{sec:stability_results}).
  • Figure 2: Overview of our method. Left: SEAL mixes all keyword-detected boundaries, including unstable ones that dilute the signal. Middle: Our content-aware extraction computes per-question steering directions and projects out question-specific information via SVD (a sketch of this projection follows the list). Right: Stability probing re-generates from each boundary's prefix and retains only those that reliably reproduce the target behavior.
  • Figure 3: (a) Distribution of stability scores across 541 R+T boundaries. 93.3% are unstable ($s(b) < 0.8$); only 6.7% pass the threshold. (b) Accuracy peaks at $\tau{=}0.8$ (+7.0 over SEAL), then drops at $\tau{=}0.9$ as sample size shrinks. Shaded band shows random baseline $\pm 1\sigma$.
  • Figure 4: Behavior probe confidence by stability bin (balanced R+T vs. E, GroupKFold by question). Stable boundaries ($s \geq 0.8$) yield the highest confidence (0.942).
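The content-subspace projection described for Figure 2 can be read as removing the dominant directions of question-to-question variation from the averaged steering direction. The NumPy sketch below shows one plausible construction under that reading; the matrix layout and the rank parameter `k` are assumptions, and the paper's exact SVD construction may differ.

```python
import numpy as np

def content_projected_steering_vector(directions: np.ndarray, k: int = 1) -> np.ndarray:
    """Plausible sketch of the content-subspace projection in Figure 2.

    directions: (n_questions, d) matrix of per-question steering directions.
    k: assumed number of question-specific components to project out.
    """
    mean_dir = directions.mean(axis=0)
    centered = directions - mean_dir  # question-specific variation around the mean
    # Top-k right singular vectors span the dominant content (question) subspace.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    content_basis = vt[:k]  # shape (k, d)
    # Remove the component of the mean direction lying in the content subspace.
    steer = mean_dir - content_basis.T @ (content_basis @ mean_dir)
    return steer / np.linalg.norm(steer)
```

Averaging first and then projecting off the top singular directions of the centered per-question matrix keeps the shared behavioral component while discarding the residual question-specific noise that the abstract says the projection is meant to remove.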