Demystifying When Pruning Works via Representation Hierarchies

Shwai He, Guoheng Sun, Haichao Zhang, Yun Fu, Ang Li

Abstract

Network pruning, which removes less important parameters or architectures, is often expected to improve efficiency while preserving performance. However, this expectation does not consistently hold across language tasks: pruned models can perform well on non-generative tasks but frequently fail in generative settings. To understand this discrepancy, we analyze network pruning from a representation-hierarchy perspective, decomposing the internal computation of language models into three sequential spaces: embedding (hidden representations), logit (pre-softmax outputs), and probability (post-softmax distributions). We find that representations in the embedding and logit spaces are largely robust to pruning-induced perturbations. However, the nonlinear transformation from logits to probabilities amplifies these deviations, which accumulate across time steps and lead to substantial degradation during generation. In contrast, the stability of the categorical-token probability subspace, together with the robustness of the embedding space, supports the effectiveness of pruning for non-generative tasks such as retrieval and multiple-choice selection. Our analysis disentangles the effects of pruning across tasks and provides practical guidance for its application. Code is available at https://github.com/CASE-Lab-UMD/Pruning-on-Representations
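The abstract's central claim, that small logit perturbations can be amplified by the softmax into large probability shifts, can be illustrated with a minimal numeric sketch. The logits and the perturbation below are hypothetical values chosen for illustration, not taken from the paper's experiments; the point is only that when two logits are close, a tiny additive perturbation can flip the argmax token and produce a relative probability shift much larger than the relative logit shift.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax.
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical logits over a 5-token vocabulary; the top two tokens are close.
z = np.array([4.0, 3.8, 0.0, -1.0, -2.0])
# A small pruning-induced perturbation (assumed values, for illustration only).
dz = np.array([-0.15, 0.15, 0.0, 0.0, 0.0])

p, p_pert = softmax(z), softmax(z + dz)

rel_logit_shift = np.linalg.norm(dz) / np.linalg.norm(z)       # ~0.04
rel_prob_shift = np.linalg.norm(p_pert - p) / np.linalg.norm(p)  # ~0.15

# The relative deviation grows after the softmax, and the argmax token flips,
# which is what compounds across time steps in autoregressive generation.
print(rel_logit_shift, rel_prob_shift, p.argmax(), p_pert.argmax())
```

In this toy setting the probability-space deviation is several times larger than the logit-space deviation, and the greedy next token changes, mirroring the paper's account of why generative tasks degrade while argmax-insensitive, non-generative tasks remain stable.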

Paper Structure

This paper contains 34 sections, 72 equations, 21 figures, 3 tables.

Figures (21)

  • Figure 1: Effect of inter-layer pruning on generative and non-generative tasks. Inter-layer pruning is implemented by removing entire transformer blocks (ShortGPT [men2024shortgptlayerslargelanguage]) or attention/MLP layers (Attn/MLP Drop [he2026uncovering]).
  • Figure 2: Propagation of pruning-induced perturbations across representation spaces in LLMs. Small embedding perturbations $\Delta h$ introduced by pruning remain stable in the logit space (i.e., small $\Delta z$), but are amplified by the softmax nonlinearity in the high-dimensional probability space, resulting in large probability shifts $\Delta p$ and degraded autoregressive generation.
  • Figure 3: Impact of intra-layer pruning on non-generative and generative tasks, i.e., HellaSwag [zellers2019hellaswag] and GSM8K [cobbe2021gsm8k]. Both unstructured pruning and semi-structured sparsity patterns ($4{:}8$ and $2{:}4$ [zhou2021]) are considered.
  • Figure 4: Representation similarity between the input and output of each layer, with mean values shown as curves and the corresponding min–max ranges visualized as shaded areas.
  • Figure 5: Relative orthogonal magnitude in the embedding space ($h$) and the logit space ($z$).
  • ...and 16 more figures