Why Does Self-Distillation (Sometimes) Degrade the Reasoning Capability of LLMs?

Jeonghye Kim, Xufang Luo, Minbeom Kim, Sangmook Lee, Dohyung Kim, Jiwon Jeon, Dongsheng Li, Yuqing Yang

Abstract

Self-distillation has emerged as an effective post-training paradigm for LLMs, often improving performance while shortening reasoning traces. In mathematical reasoning, however, we find that it can reduce response length while degrading performance. We trace this degradation to the suppression of epistemic verbalization: the model's expression of uncertainty during reasoning. Through controlled experiments varying conditioning-context richness and task coverage, we show that conditioning the teacher on rich information suppresses uncertainty expression, enabling rapid in-domain optimization with limited task coverage but harming OOD performance, where unseen problems benefit from the model expressing uncertainty and adjusting accordingly. Across Qwen3-8B, DeepSeek-R1-Distill-Qwen-7B, and Olmo3-7B-Instruct, we observe performance drops of up to 40%. Our findings highlight that exposing an appropriate level of uncertainty is crucial for robust reasoning and underscore the importance of optimizing reasoning behavior beyond merely reinforcing correct answer traces.

Paper Structure

This paper contains 41 sections, 5 equations, 16 figures, and 7 tables.

Figures (16)

  • Figure 1: (a) Training score and response length changes for GRPO and Self-Distillation (SDPO) in Chemistry, using results from the SDPO Wandb logs (https://wandb.ai/jonhue/SDPO?nw=mgotcx6kk7). (b) Training score and response length changes on DAPO-Math-17k with GRPO and SDPO.
  • Figure 2: On-policy self-distillation results for DeepSeek-R1-Distill-Qwen-7B. GRPO yields modest OOD gains with a slight increase in epistemic verbalization, whereas SDPO degrades both performance and epistemic token usage, particularly with $c = s$.
  • Figure 3: On-policy self-distillation results for Qwen3-8B (Thinking Mode: ON). Both GRPO and SDPO reduce response length and epistemic verbalization, but SDPO's more aggressive suppression leads to greater OOD performance degradation, particularly on AIME24.
  • Figure 4: On-policy self-distillation results for Qwen3-8B (Thinking Mode: OFF). GRPO rapidly increases response length via epistemic verbalization and achieves strong training gains, while SDPO reduces response length and struggles to improve, with slight OOD degradation on AIME24.
  • Figure 5: Fixed vs. moving target teacher for DeepSeek-R1-Distill-Qwen-7B. Even slow EMA updates (rate 0.05) amplify epistemic suppression via a feedback loop, causing greater performance degradation than a fixed teacher; see the sketch after this list.
  • ...and 11 more figures
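
The moving-target teacher in Figure 5 is an exponential-moving-average (EMA) copy of the student. As a rough illustration of the mechanism, here is a minimal PyTorch sketch of such an update; the function and argument names (`ema_update`, `rate`) are illustrative assumptions rather than the paper's code, and only the update rate of 0.05 comes from the figure caption.

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module,
               student: torch.nn.Module,
               rate: float = 0.05) -> None:
    """Move every teacher parameter a small step toward the student.

    rate = 0 recovers the fixed-teacher baseline; per Figure 5, even
    rate = 0.05 suffices to create the suppression feedback loop.
    """
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        # In-place linear interpolation: p_t <- (1 - rate) * p_t + rate * p_s
        p_t.lerp_(p_s, rate)
```

Called after each optimizer step, this lets the distillation target drift toward the student, so any suppression of epistemic tokens in the student is gradually echoed back by the teacher.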