
TED: Training-Free Experience Distillation for Multimodal Reasoning

Shuozhi Yuan, Jinqing Wang, Zihao Liu, Miaomiao Yuan, Haoran Peng, Jin Zhao, Bingwen Wang, Haoyi Wang

Abstract

Knowledge distillation is typically realized by transferring a teacher model's knowledge into a student's parameters through supervised or reinforcement-based optimization. While effective, such approaches require repeated parameter updates and large-scale training data, limiting their applicability in resource-constrained environments. In this work, we propose TED, a training-free, context-based distillation framework that shifts the update target of distillation from model parameters to an in-context experience injected into the student's prompt. For each input, the student generates multiple reasoning trajectories, while a teacher independently produces its own solution. The teacher then compares the student trajectories with its own reasoning and the ground-truth answer, extracting generalized experiences that capture effective reasoning patterns. These experiences are continuously refined and updated over time. A key challenge of context-based distillation is unbounded experience growth and noise accumulation. TED addresses this with an experience compression mechanism that tracks usage statistics and selectively merges, rewrites, or removes low-utility experiences. Experiments on the multimodal reasoning benchmarks MathVision and VisualPuzzles show that TED consistently improves performance. With just 100 training samples, TED raises the performance of Qwen3-VL-8B from 0.627 to 0.702 on MathVision and from 0.517 to 0.561 on VisualPuzzles. Under this low-data, no-update setting, TED achieves performance competitive with fully trained parameter-based distillation while reducing training cost by over 5x, demonstrating that meaningful knowledge transfer can be achieved through contextual experience.
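
To make the per-example loop described above concrete, here is a minimal sketch of one TED iteration, assuming generic `call_student` and `call_teacher` chat wrappers; the function names, prompt wording, experience-pool representation, and the default of four sampled trajectories are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of one TED iteration: the student's parameters are never
# updated; only the list of textual "experiences" injected into its prompt is.
# call_student / call_teacher are assumed chat-completion wrappers.

def ted_step(question, image, ground_truth, experiences,
             call_student, call_teacher, num_trajectories=4):
    # Inject the current experience pool into the student's system prompt.
    system_prompt = "Reasoning experiences to follow:\n" + "\n".join(
        f"- {e}" for e in experiences)

    # Stage 1: trajectory generation -- the student samples several
    # reasoning trajectories for the same input.
    trajectories = [call_student(system_prompt, question, image)
                    for _ in range(num_trajectories)]

    # The teacher independently produces its own solution.
    teacher_solution = call_teacher("Solve step by step.", question, image)

    # Stage 2: experience generation -- the teacher contrasts the student's
    # trajectories with its own reasoning and the ground-truth answer, and
    # distills a generalized, reusable reasoning experience.
    critique = (
        "Student attempts:\n" + "\n---\n".join(trajectories) +
        f"\n\nYour solution:\n{teacher_solution}" +
        f"\n\nGround-truth answer: {ground_truth}\n" +
        "State one general reasoning experience the student should apply "
        "to similar problems.")
    new_experience = call_teacher(critique, question, image)

    experiences.append(new_experience)  # the only "update" TED performs
    return experiences
```

On this reading of the abstract, iterating the step over the training examples, combined with the compression mechanism sketched later, yields the final experience prompt used by the student at inference time.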

Paper Structure

This paper contains 43 sections, 19 equations, 4 figures, 8 tables, and 3 algorithms.

Figures (4)

  • Figure 1: TED reformulates knowledge distillation from parameter updates to contextual experience reuse.
  • Figure 2: Overview of TED. Our proposed method includes three stages: trajectory generation, experience generation, and experience compression. The student first samples multiple reasoning trajectories, which the teacher critiques against its own reasoning and the ground truth to distill generalized experiences. These experiences are then compressed and injected into the system prompt for iterative, parameter-free improvement.
  • Figure 3: Overview of the Experience Compression module in TED. When the experience pool exceeds the context budget, TED estimates each experience's utility and tracks its usage frequency. The teacher then compresses the pool by merging, rewriting, deleting, or retaining experiences, producing a compact system prompt that preserves high-utility knowledge for efficient, parameter-free iterative improvement. (A hedged code sketch of this step appears after this list.)
  • Figure 4: Hyperparameter ablation of TED on MathVision. Performance is affected by the number of experience items, the number of sampled trajectories, and the choice of teacher model. TED performs best with a moderate experience size, more diverse trajectories, and stronger teachers.
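
As a companion to Figure 3, the following is a hedged sketch of the compression step, assuming a fixed item budget and a simple success-rate utility; the `Experience` fields, scoring rule, and teacher prompt are assumptions rather than the paper's exact design.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    text: str
    uses: int = 0       # how often the item was injected into a prompt
    successes: int = 0  # how often the student answered correctly with it

    @property
    def utility(self) -> float:
        # Simple usage-based utility estimate (an assumption; the paper only
        # states that usage statistics are tracked).
        return self.successes / self.uses if self.uses else 0.0

def compress(pool, budget, call_teacher):
    """Keep the experience pool within `budget` items: retain everything if it
    already fits, otherwise delete low-utility items and ask the teacher to
    merge/rewrite the survivors into a compact system prompt."""
    if len(pool) <= budget:
        return pool  # retain: still within the context budget

    # Delete: drop the lowest-utility experiences first.
    survivors = sorted(pool, key=lambda e: e.utility, reverse=True)[:budget]

    # Merge / rewrite: consolidate overlapping items into concise entries.
    prompt = ("Merge overlapping items and rewrite each concisely, "
              "one per line:\n" +
              "\n".join(f"- {e.text}" for e in survivors))
    rewritten = call_teacher(prompt)
    # Rewritten items restart with fresh usage statistics (an assumption).
    return [Experience(text=line.lstrip("- ").strip())
            for line in rewritten.splitlines() if line.strip()]
```

Scoring by success rate and pruning before merging is just one plausible instantiation; per the figure caption, the teacher decides for each item whether to merge, rewrite, delete, or retain it.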