REAM: Merging Improves Pruning of Experts in LLMs

Saurav Jha, Maryam Hashemzadeh, Ali Saheb Pasand, Ali Parviz, Min-Joong Lee, Boris Knyazev

Abstract

Mixture-of-Experts (MoE) large language models (LLMs) are among the top-performing architectures. The largest models, often with hundreds of billions of parameters, pose significant memory challenges for deployment. Traditional approaches to reducing memory requirements include weight pruning and quantization. Motivated by Router-weighted Expert Activation Pruning (REAP), which prunes experts, we propose a novel method, Router-weighted Expert Activation Merging (REAM). Instead of removing experts, REAM groups them and merges their weights, better preserving the original model's performance. We evaluate REAM against REAP and other baselines across multiple MoE LLMs on diverse multiple-choice (MC) question-answering and generative (GEN) benchmarks. Our results reveal a trade-off between MC and GEN performance that depends on the mix of calibration data. By controlling the mix of general, math, and coding data, we examine the Pareto frontier of this trade-off and show that REAM often outperforms the baselines and in many cases is comparable to the original uncompressed models.

Paper Structure

This paper contains 36 sections, 8 equations, 7 figures, 6 tables.

Figures (7)

  • Figure 1: Illustration of REAM components: a) Comparison of expert compression strategies reducing $N{=}9$ experts to $N'{=}4$. HC-SMoE merging (chen2025retrainingfree) clusters all experts by output similarity regardless of saliency (e.g., E1 and E7 grouped together). Pruning retains the top-4 salient experts unchanged and discards the rest. REAM's pseudo-pruning selects the top-4 experts as protected centroids and absorbs the remaining experts into their nearest centroid via saliency-weighted merging, leaving other groups as singletons (a minimal code sketch appears after this list). b) Compared to baseline pruning and merging methods ① that collect the activations from the original uncompressed model for all layers at once, REAM ② recomputes the per-layer activations after merging each MoE layer, before processing the next layer.
  • Figure 2: Discriminative (MC) vs. Generative (GEN) trade-off depending on the calibration data mixture: benchmark scores with 64 (left) and 96 (right) experts for REAP, HC-SMoE, and REAM across ten mixing ratios of the calibration data with Qwen3-30B-A3B-Instruct-2507. The marker sizes are proportional to The-Stack-Smol's share of the mixture.
  • Figure 3: Additional analyses for 96 experts: a) Pearson correlation $r$ between calibration datasets (C4, Math, Code) and MC/GEN scores, and between the MC and GEN scores themselves, for each merging method. b) Pareto frontiers, where each point is one of 10 calibration mixtures. Filled markers denote Pareto-optimal configurations, i.e., those not simultaneously dominated on both MC and GEN by any other mixture of the same method; hollow markers denote dominated ones. The hypervolume (HV) measures the area of the MC$\times$GEN plane dominated by each method's frontier relative to a shared reference point, quantifying its overall performance ceiling (a minimal computation of this metric is sketched after this list). Per-method offsets are applied for better visibility.
  • Figure 4: Ablation of REAM components with 96 experts: (a) MC and GEN scores for each ablation variant; (b) per-task score drop ($\Delta$) relative to the full REAM performance.
  • Figure 5: Correlation between avg. pre-logit ranks and AVG benchmark scores across 10 calibration ratios for 96 experts.
  • ...and 2 more figures
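As referenced in the Figure 1 caption, the following is a minimal single-layer sketch of REAM-style pseudo-pruning: the top-$k$ salient experts are kept as protected centroids, and each remaining expert is absorbed into its nearest centroid by saliency-weighted averaging of expert weights. The tensor layout, the use of cosine similarity over mean expert outputs as the similarity measure, and all names here are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def ream_merge_layer(expert_weights, saliency, expert_outputs, k):
    """Sketch of saliency-based pseudo-pruning for one MoE layer.

    expert_weights : (N, *w) stacked expert weight tensors
    saliency       : (N,)    router-weighted activation saliency per expert
    expert_outputs : (N, d)  mean expert outputs on calibration data
                             (illustrative proxy for expert similarity)
    Returns (k, *w) merged expert weights.
    """
    N = saliency.numel()
    centroids = torch.topk(saliency, k).indices.tolist()   # protected experts
    rest = [i for i in range(N) if i not in set(centroids)]

    groups = [[c] for c in centroids]                      # start as singletons
    if rest:
        # assign each remaining expert to its most similar centroid
        sims = F.cosine_similarity(expert_outputs[rest].unsqueeze(1),
                                   expert_outputs[centroids].unsqueeze(0),
                                   dim=-1)                 # (|rest|, k)
        for i, j in zip(rest, sims.argmax(dim=1).tolist()):
            groups[j].append(i)

    merged = []
    for group in groups:
        w = saliency[group] / saliency[group].sum()        # saliency weights
        w = w.view(-1, *([1] * (expert_weights.dim() - 1)))
        merged.append((w * expert_weights[group]).sum(dim=0))  # weighted avg.
    return torch.stack(merged)

# Example with the Figure 1 setting, N=9 -> N'=4 (random placeholder tensors):
merged = ream_merge_layer(torch.randn(9, 128, 64), torch.rand(9),
                          torch.randn(9, 64), k=4)
```

Note that this sketch covers one layer in isolation; per Figure 1b, REAM recomputes the calibration activations after merging each MoE layer before moving on to the next.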
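The hypervolume metric described in the Figure 3 caption reduces, in two dimensions, to the area between a method's Pareto frontier and a shared reference point. Below is a minimal sketch assuming higher MC and GEN scores are better; the example scores and reference point are hypothetical.

```python
def pareto_frontier(points):
    """Points (mc, gen) not dominated by any other point; higher is better."""
    return [p for p in points
            if not any(q[0] >= p[0] and q[1] >= p[1] and q != p
                       for q in points)]

def hypervolume_2d(points, ref):
    """Area of the MC x GEN plane dominated by the frontier of `points`,
    measured relative to the reference point ref = (ref_mc, ref_gen)."""
    front = sorted(pareto_frontier(points), key=lambda p: -p[0])  # MC desc.
    hv, prev_gen = 0.0, ref[1]
    for mc, gen in front:                       # GEN rises as MC falls
        hv += (mc - ref[0]) * (gen - prev_gen)  # one horizontal slab
        prev_gen = gen
    return hv

# Hypothetical (MC, GEN) scores for three of the ten calibration mixtures:
mixtures = [(62.0, 41.3), (61.2, 44.0), (59.8, 47.5)]
print(hypervolume_2d(mixtures, ref=(50.0, 30.0)))
```

A larger hypervolume means the method's set of calibration mixtures dominates more of the MC$\times$GEN plane, which is why the caption reads it as an overall performance ceiling.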