Multimodal Dataset Distillation via Phased Teacher Models

Shengbin Guo, Hang Zhao, Senqiao Yang, Chenyang Jiang, Yuhang Cheng, Xiangru Peng, Rui Shao, Zhuotao Tian

Abstract

Multimodal dataset distillation aims to construct compact synthetic datasets that enable efficient compression and knowledge transfer from large-scale image-text data. However, existing approaches often fail to capture the complex, dynamically evolving knowledge embedded in the later training stages of teacher models. This limitation leads to degraded student performance and compromises the quality of the distilled data. To address critical challenges such as pronounced cross-stage performance gaps and unstable teacher trajectories, we propose Phased Teacher Model with Shortcut Trajectory (PTM-ST) -- a novel phased distillation framework. PTM-ST leverages stage-aware teacher modeling and a shortcut-based trajectory construction strategy to accurately fit the teacher's learning dynamics across distinct training phases. This enhances both the stability and expressiveness of the distillation process. Through theoretical analysis and comprehensive experiments, we show that PTM-ST significantly mitigates optimization oscillations and inter-phase knowledge gaps, while also reducing storage overhead. Our method consistently surpasses state-of-the-art baselines on Flickr30k and COCO, achieving up to 13.5% absolute improvement and an average gain of 9.53% on Flickr30k. Code: https://github.com/Previsior/PTM-ST.

Paper Structure

This paper contains 58 sections, 2 theorems, 28 equations, 8 figures, 19 tables.

Key Result

Proposition 1

Let $\ell(\tilde{\mathcal{D}},\theta)$ denote the contrastive learning loss on the distilled dataset $\tilde{\mathcal{D}}$ when the model parameters are $\theta$. Let $\mathcal{L}_1(\tilde{\mathcal{D}})$ and $\mathcal{L}_2(\tilde{\mathcal{D}})$ denote the matching losses over two different matching ranges. Then the following conclusion holds:
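For context, a matching loss of this kind is typically instantiated as the normalized parameter-space distance between a student trained on $\tilde{\mathcal{D}}$ and a later teacher checkpoint; the display below follows the standard trajectory-matching formulation and is a sketch of the general form rather than the exact definition used in the paper:

$$\mathcal{L}(\tilde{\mathcal{D}}) \;=\; \frac{\left\lVert \hat{\theta}_{t+N} - \theta^{*}_{t+M} \right\rVert_2^2}{\left\lVert \theta^{*}_{t} - \theta^{*}_{t+M} \right\rVert_2^2},$$

where $\theta^{*}$ denotes teacher checkpoints, $\hat{\theta}_{t+N}$ is obtained by $N$ gradient steps on $\ell(\tilde{\mathcal{D}}, \theta)$ starting from $\theta^{*}_{t}$, and $\mathcal{L}_1$, $\mathcal{L}_2$ correspond to two choices of the matching range $(t, M)$.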

Figures (8)

  • Figure 1: PTM-ST achieves far superior performance to other selection or distillation methods across different metrics on Flickr30k.
  • Figure 2: (a) shows that using a late-stage, high-performing teacher during distillation leads to a decline in student performance. (b) shows the directions of different matching range gradients (after PCA dimensionality reduction).
  • Figure 3: (a) shows the conventional single-stage training with a fixed teacher model and uniform data use. In contrast, (b) depicts our Phased Teacher Model (PTM), which employs different teacher models across multiple training stages to distill knowledge to specific data subsets. (c) illustrates the aggregation of all distilled subsets for final student evaluation. Additionally, (d) presents our Shortcut Trajectory (ST) strategy that dynamically generates stage-adaptive teacher models, improving distillation effectiveness and robustness; a minimal code sketch of this phased scheme is given after the figure list.
  • Figure 4: Gradient cosine similarity on the synthetic dataset across different epochs for the original and shortcut trajectories.
  • Figure 5: Examples of initial (left) and synthetic (right) image-text pairs.
  • ...and 3 more figures
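To make the phased scheme of Figure 3(b)/(d) and the diagnostic of Figure 4 concrete, the following is a minimal, hedged PyTorch sketch of stage-wise trajectory matching. It assumes teacher checkpoints saved per stage, a learnable synthetic subset per stage, and a user-supplied functional contrastive loss; the names (`distill_phased`, `inner_train`, `matching_loss`, `grad_cosine`, `contrastive_loss`, `stage_trajectories`, `syn_subsets`) and the round-robin staging policy are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of phased teacher trajectory matching (not the authors' code).
# Assumptions: each stage has a list of teacher checkpoints (lists of parameter
# tensors); `syn_subsets` holds learnable synthetic image-text tensors per stage,
# optimized by `syn_opt`; `contrastive_loss(params, batch)` is a functional loss.
import torch
import torch.nn.functional as F


def flat(params):
    """Flatten a list of parameter tensors into a single vector."""
    return torch.cat([p.reshape(-1) for p in params])


def grad_cosine(grads_a, grads_b):
    """Cosine similarity between two gradient sets (the diagnostic in Figure 4)."""
    return F.cosine_similarity(flat(grads_a), flat(grads_b), dim=0)


def matching_loss(student_params, teacher_start, teacher_end):
    """Normalized parameter-distance loss over one matching range."""
    num = (flat(student_params) - flat(teacher_end)).pow(2).sum()
    den = (flat(teacher_start) - flat(teacher_end)).pow(2).sum().clamp_min(1e-12)
    return num / den


def inner_train(params, syn_batch, contrastive_loss, steps, lr):
    """Unrolled inner loop: train a student on the synthetic subset."""
    for _ in range(steps):
        loss = contrastive_loss(params, syn_batch)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params


def distill_phased(stage_trajectories, syn_subsets, syn_opt, contrastive_loss,
                   inner_steps=10, inner_lr=0.01, outer_iters=1000, match_span=5):
    """
    stage_trajectories: list over stages; each entry is a list of checkpoints
        (in PTM this would be the stage-specific or shortcut teacher trajectory).
    syn_subsets: one learnable synthetic subset per stage (Figure 3b).
    """
    for it in range(outer_iters):
        stage = it % len(stage_trajectories)          # round-robin over stages
        traj, syn_batch = stage_trajectories[stage], syn_subsets[stage]

        start = torch.randint(0, len(traj) - match_span, (1,)).item()
        teacher_start, teacher_end = traj[start], traj[start + match_span]

        # Start the student from the stage teacher and unroll on synthetic data.
        student = [p.clone().requires_grad_(True) for p in teacher_start]
        student = inner_train(student, syn_batch, contrastive_loss,
                              inner_steps, inner_lr)

        # Match the unrolled student to the later teacher checkpoint and
        # backpropagate through the inner loop into the synthetic data.
        loss = matching_loss(student, teacher_start, teacher_end)
        syn_opt.zero_grad()
        loss.backward()
        syn_opt.step()
```

In this sketch, each stage draws its matching ranges only from its own (shortcut) teacher trajectory, which is the mechanism the paper credits for reducing optimization oscillations; `grad_cosine` can be evaluated on synthetic-data gradients from the original versus shortcut trajectories to reproduce the comparison in Figure 4.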

Theorems & Definitions (3)

  • Proposition 1
  • Proposition 2
  • Proof