Multi-Aspect Knowledge Distillation for Language Model with Low-rank Factorization

Zihe Liu, Yulong Mao, Jinan Xu, Xinrui Peng, Kaiyu Huang

Abstract

Knowledge distillation is an effective technique for compressing pre-trained language models. However, existing methods focus only on the knowledge distribution among layers, which may cause the loss of fine-grained information during the alignment process. To address this issue, we introduce the Multi-aspect Knowledge Distillation (MaKD) method, which mimics the self-attention and feed-forward modules in greater depth to capture rich linguistic knowledge from different aspects. Experimental results demonstrate that MaKD achieves competitive performance against various strong baselines under the same parameter storage budget. In addition, our method also performs well when distilling auto-regressive models.
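
The idea of mimicking the self-attention and feed-forward modules can be made concrete with a small sketch. Below is a minimal, hypothetical PyTorch rendering of an intra-layer, multi-aspect loss that aligns the student's MHA mappings (queries, keys, values) and FFN projections with the teacher's. The attribute names (`q_proj`, `k_proj`, `v_proj`, `up_proj`, `down_proj`) and the unweighted MSE objective are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from types import SimpleNamespace

def multi_aspect_layer_loss(teacher_layer, student_layer, hidden_states):
    """MSE between teacher and student intra-layer projections of the same
    hidden states: the MHA mappings (queries, keys, values) and the FFN
    vectors (up and down projections)."""
    loss = hidden_states.new_zeros(())
    # Self-attention aspect: align the query/key/value projections.
    for name in ("q_proj", "k_proj", "v_proj"):
        t = getattr(teacher_layer, name)(hidden_states)
        s = getattr(student_layer, name)(hidden_states)
        loss = loss + F.mse_loss(s, t.detach())
    # Feed-forward aspect: align the up and down projections.
    t_up = teacher_layer.up_proj(hidden_states)
    s_up = student_layer.up_proj(hidden_states)
    loss = loss + F.mse_loss(s_up, t_up.detach())
    loss = loss + F.mse_loss(student_layer.down_proj(s_up),
                             teacher_layer.down_proj(t_up).detach())
    return loss

# Toy usage with dummy layers of matching hidden size (the student keeps
# the teacher's hidden dimensions, per Figure 2).
def make_layer(d_model=64, d_ff=256):
    return SimpleNamespace(
        q_proj=nn.Linear(d_model, d_model),
        k_proj=nn.Linear(d_model, d_model),
        v_proj=nn.Linear(d_model, d_model),
        up_proj=nn.Linear(d_model, d_ff),
        down_proj=nn.Linear(d_ff, d_model),
    )

teacher, student = make_layer(), make_layer()
h = torch.randn(2, 8, 64)  # (batch, sequence, hidden)
loss = multi_aspect_layer_loss(teacher, student, h)
```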


Figures (2)

  • Figure 1: A comparison of logits-based (a), feature-based (b), and our multi-aspect (c) learning methods.
  • Figure 2: Overview of multi-aspect knowledge distillation. We obtain a student model with the same hidden dimensions as the teacher model by low-rank matrix decomposition. We introduce matrix distillation computed over intra-layer linear mappings, which consist of MHA mappings (queries, keys, and values) and FFN vectors (up projection and down projection). We adopt multi-aspect hierarchical distillation to balance performance and training speed.
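
The caption of Figure 2 mentions obtaining the student by low-rank matrix decomposition. A common way to realize this, shown as a hedged sketch below, is truncated SVD: each teacher weight matrix W is replaced by two factors A and B with A @ B ≈ W, so a rank-r factorization of a d×d matrix stores 2dr parameters instead of d². The function name and the choice of plain SVD are assumptions for illustration; the paper's exact decomposition scheme may differ.

```python
import torch

def low_rank_factorize(weight: torch.Tensor, rank: int):
    """Factor a (d_out, d_in) weight into A (d_out, r) and B (r, d_in),
    with A @ B the best rank-r approximation (truncated SVD)."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    sqrt_s = torch.sqrt(S[:rank])
    A = U[:, :rank] * sqrt_s             # scale the kept columns of U
    B = sqrt_s.unsqueeze(1) * Vh[:rank]  # scale the kept rows of Vh
    return A, B

# Example: a 768x768 projection at rank 128 stores
# 2 * 768 * 128 parameters instead of 768 * 768.
W = torch.randn(768, 768)
A, B = low_rank_factorize(W, rank=128)
rel_err = torch.linalg.norm(W - A @ B) / torch.linalg.norm(W)
print(f"relative approximation error: {rel_err:.3f}")
```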