A Hierarchical Importance-Guided Multi-objective Evolutionary Framework for Deep Neural Network Pruning

Zak Khan, Azam Asilian Bidgoli

Abstract

The optimization of over-parameterized deep neural networks represents a large-scale, high-dimensional, and strongly non-convex decision problem that challenges existing optimization frameworks. Current evolutionary and gradient-based pruning methods often struggle to scale to such dimensionalities, as they rely on flat search spaces, scalarized objectives, or repeated retraining, leading to premature convergence and prohibitive computational cost. This paper introduces a hierarchical importance-guided evolutionary framework that reformulates convolutional network pruning as a tractable large-scale multi-objective optimization problem. In the first phase, a continuous evolutionary search performs coarse exploration of weight-wise pruning thresholds to shrink the search space and identify promising regions of the Pareto set. The second phase applies a fine-grained binary evolutionary optimization constrained to the surviving weights, where importance-aware sampling and adaptive variation operators refine local search in the sparse region of the Pareto set. This hierarchical design combines global exploration and localized exploitation to achieve a well-distributed Pareto set of networks balancing compactness and accuracy. Empirical results on CIFAR-10 and CIFAR-100 using ResNet-56 and ResNet-110 confirm the method's effectiveness: pruning achieves up to 51.9% and 38.9% parameter reductions with almost no accuracy loss, outperforming state-of-the-art evolutionary DNN pruning methods. The proposed method contributes a scalable evolutionary approach for solving very-large-scale multi-objective optimization problems, offering a general paradigm extendable to other domains where the decision space is exponentially large, objective functions are conflicting, and efficient trade-off discovery is essential.
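The two-phase idea in the abstract can be illustrated with a minimal sketch: a coarse magnitude-threshold pass that yields a binary keep-mask (the Phase 1 role), followed by importance-biased removal restricted to the surviving weights (the Phase 2 role). This is a hypothetical simplification for intuition only; the function names, the inverse-magnitude sampling rule, and the use of plain Python lists are assumptions, not the paper's actual operators, which run inside a multi-objective evolutionary loop.

```python
import random

def threshold_prune(weights, tau):
    # Phase-1 style coarse step: keep only weights whose magnitude
    # meets the threshold tau; the result is a binary keep-mask.
    return [1 if abs(w) >= tau else 0 for w in weights]

def importance_biased_prune(weights, mask, n_flips, rng):
    # Phase-2 style refinement sketch: among surviving weights, prune
    # a few more positions, sampling low-magnitude ("low-importance")
    # weights with higher probability (assumed proxy for importance).
    alive = [i for i, m in enumerate(mask) if m]
    if not alive:
        return mask[:]
    max_mag = max(abs(weights[i]) for i in alive)
    # Inverse-importance sampling weights (+ epsilon so all are positive).
    probs = [max_mag - abs(weights[i]) + 1e-12 for i in alive]
    out = mask[:]
    for _ in range(min(n_flips, len(alive))):
        i = rng.choices(alive, weights=probs, k=1)[0]
        j = alive.index(i)
        del alive[j]
        del probs[j]
        out[i] = 0
    return out

rng = random.Random(0)
weights = [rng.gauss(0, 1) for _ in range(1000)]
mask1 = threshold_prune(weights, tau=0.5)
mask2 = importance_biased_prune(weights, mask1, n_flips=50, rng=rng)
print(sum(mask1) - sum(mask2))  # Phase 2 removes 50 more weights
```

In the actual framework, candidates such as `mask2` would be evaluated on the two conflicting objectives (accuracy and compactness) and selected via Pareto dominance rather than pruned deterministically as above.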

Paper Structure

This paper contains 22 sections, 12 equations, 3 figures, 10 tables, and 3 algorithms.

Figures (3)

  • Figure 1: Two-Phase Evolutionary Pruning Flow Diagram. Phase 1 performs coarse-grained evolutionary pruning in continuous space, while Phase 2 refines the sparse region using binary importance-guided search to produce the final Pareto-optimal models.
  • Figure 2: Overview of the two-phase pruning framework. Phase 1 constructs the Pareto front $\mathcal{P}_1$ via coarse threshold pruning, while Phase 2 performs localized binary refinement between heavy and light anchor models to produce the refined front $\mathcal{P}_2$.
  • Figure 3: Pareto fronts for all ResNet architectures on CIFAR-10 (a) and CIFAR-100 (b). In each dataset, the left column shows Phase 1 solutions, while the right column shows Phase 2 solutions generated by the multi-objective evolutionary search (NSGA-II / MOEA/D). Each plot illustrates the accuracy–sparsity trade-off, highlighting how Phase 2 expands and refines the Pareto front beyond Phase 1.