Reliability-Gated Multi-Teacher Distillation for Low-Resource Abstractive Summarization

Dipto Sumit, Ankan Kumar Roy, Sadia Khair Rodela, Atia Haque Asha, Mourchona Afrin, Niloy Farhan, Farig Yousuf Sadeque

Abstract

We study multi-teacher knowledge distillation for low-resource abstractive summarization from a reliability-aware perspective. We introduce EWAD (Entropy-Weighted Agreement-Aware Distillation), a token-level mechanism that routes supervision between teacher distillation and gold supervision based on inter-teacher agreement, and CPDP (Capacity-Proportional Divergence Preservation), a geometric constraint on the student's position relative to heterogeneous teachers. Across two Bangla datasets, 13 BanglaT5 ablations, and eight Qwen2.5 experiments, we find that logit-level KD provides the most reliable gains, while more complex distillation improves semantic similarity for short summaries but degrades longer outputs. Cross-lingual pseudo-label KD across ten languages retains 71-122% of teacher ROUGE-L at 3.2x compression. A human-validated multi-judge LLM evaluation further reveals calibration bias in single-judge pipelines. Overall, our results show that reliability-aware distillation helps characterize when multi-teacher supervision improves summarization and when data scaling outweighs loss engineering.
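
To make the gating idea concrete, the sketch below shows one plausible token-level form of EWAD: per-token inter-teacher agreement (measured here with a symmetric KL divergence and a sigmoid gate, both assumptions of this sketch) blends a softened-KL distillation term with gold cross-entropy. The exact agreement measure, gate, and weighting used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def ewad_token_loss(student_logits, teacher_logits_a, teacher_logits_b,
                    gold_ids, temperature=2.0):
    # All logits: [batch, seq_len, vocab]; gold_ids: [batch, seq_len].
    p_a = F.softmax(teacher_logits_a / temperature, dim=-1)
    p_b = F.softmax(teacher_logits_b / temperature, dim=-1)

    # Inter-teacher agreement via symmetric KL (an assumption of this sketch):
    # low divergence -> gate near 1 -> trust the teachers on this token.
    log_a, log_b = p_a.clamp_min(1e-9).log(), p_b.clamp_min(1e-9).log()
    sym_kl = 0.5 * ((p_a * (log_a - log_b)).sum(-1) + (p_b * (log_b - log_a)).sum(-1))
    gate = torch.sigmoid(-sym_kl)  # [batch, seq_len], near 1 when teachers agree

    # Softened-KL distillation against the averaged teacher distribution.
    p_teacher = 0.5 * (p_a + p_b)
    log_q = F.log_softmax(student_logits / temperature, dim=-1)
    kd_tok = (p_teacher * (p_teacher.clamp_min(1e-9).log() - log_q)).sum(-1) * temperature ** 2

    # Gold cross-entropy per token.
    ce_tok = F.cross_entropy(student_logits.transpose(1, 2), gold_ids, reduction="none")

    # Route supervision token by token: teachers where they agree, gold where they do not.
    return (gate * kd_tok + (1.0 - gate) * ce_tok).mean()
```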

Paper Structure

This paper contains 47 sections, 17 equations, 3 figures, 8 tables.

Figures (3)

  • Figure 1: End-to-end framework. Documents are length-routed to the multi-teacher KD branch or MapReduce module. Three teachers provide logit and pseudo-label supervision across five ablation stages.
  • Figure 2: Standard distillation loss (the base KD objective): $\mathcal{L}_{\text{KD}}$ (softened KL), $\mathcal{L}_{\text{inter}}$ (projected MSE), and $\mathcal{L}_{\text{CE}}$ (gold cross-entropy); a sketch of how these terms combine follows this list.
  • Figure 3: Dual-teacher EWAD+CPDP with Qwen-2.5 (32B + 14B $\to$ 3B + LoRA). Eight ablation experiments isolate each component.
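
As a complement to the Figure 2 caption, here is a minimal sketch of how the three loss terms could be combined. The projection layer, temperature, and mixing weights (alpha, beta, gamma) are illustrative assumptions, not the configuration reported in the paper.

```python
import torch.nn as nn
import torch.nn.functional as F

class StandardKDLoss(nn.Module):
    """Three-term distillation objective sketched from the Figure 2 caption."""

    def __init__(self, student_dim, teacher_dim, temperature=2.0,
                 alpha=0.5, beta=0.1, gamma=0.4):
        super().__init__()
        # Projects student hidden states to the teacher's hidden size for the MSE term.
        self.proj = nn.Linear(student_dim, teacher_dim)
        self.t = temperature
        self.alpha, self.beta, self.gamma = alpha, beta, gamma

    def forward(self, student_logits, teacher_logits,
                student_hidden, teacher_hidden, gold_ids):
        # L_KD: softened KL between student and teacher token distributions.
        log_q = F.log_softmax(student_logits / self.t, dim=-1)
        p = F.softmax(teacher_logits / self.t, dim=-1)
        l_kd = F.kl_div(log_q, p, reduction="batchmean") * self.t ** 2

        # L_inter: MSE between projected student and teacher hidden states.
        l_inter = F.mse_loss(self.proj(student_hidden), teacher_hidden)

        # L_CE: cross-entropy against the gold summary tokens.
        l_ce = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                               gold_ids.view(-1))

        return self.alpha * l_kd + self.beta * l_inter + self.gamma * l_ce
```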