
Amped: Adaptive Multi-stage Non-edge Pruning for Edge Detection

Yuhan Gao, Xinqing Li, Xin He, Bing Li, Xinzhong Zhu, Ming-Ming Cheng, Yun Liu

Abstract

Edge detection is a fundamental image analysis task that underpins numerous high-level vision applications. Recent advances in Transformer architectures have significantly improved edge quality by capturing long-range dependencies, but this often comes with computational overhead. Achieving higher pixel-level accuracy requires increased input resolution, further escalating computational cost and limiting practical deployment. Building on the strong representational capacity of recent Transformer-based edge detectors, we propose an Adaptive Multi-stage non-edge Pruning framework for Edge Detection (Amped). Amped identifies high-confidence non-edge tokens and removes them as early as possible to substantially reduce computation, thus retaining high accuracy while cutting GFLOPs and accelerating inference with minimal performance loss. Moreover, to mitigate the structural complexity of existing edge detection networks and facilitate their integration into real-world systems, we introduce a simple yet high-performance Transformer-based model, termed the Streamline Edge Detector (SED). Applied to both existing detectors and our SED, the proposed pruning strategy provides a favorable balance between accuracy and efficiency, reducing GFLOPs by up to 40% with only a 0.4% drop in ODS F-measure. In addition, despite its simplicity, SED achieves a state-of-the-art ODS F-measure of 86.5%. The code will be released.

Paper Structure

This paper contains 21 sections, 11 equations, 5 figures, and 5 tables.

Figures (5)

  • Figure 1: Non-edge pruning flowchart. As the backbone network extracts features, our method adaptively generates a binary decision mask by thresholding the intermediate edge score maps to determine which tokens should be pruned.
  • Figure 2: Overview of the proposed Amped framework. Amped trims high-confidence non-edge tokens by computing the edge score map and comparing it against a stage-specific threshold. The proposed SED utilizes a simple linear decoder, a design that effectively reduces model complexity while maintaining high precision. $\mathbf{Z}^{(l)}$ and $\mathbf{M}^{(l)}$ denote the feature map and binary decision mask of pruning stage $l$, and $\tilde{\mathbf{Z}}^{(l)}$ represents the recovered feature map of stage $l$.
  • Figure 3: Qualitative comparisons between our method and baselines on three challenging samples from the BSDS500 test set [arbelaez2010contour].
  • Figure 4: Precision-recall curves on the BSDS500 test set [arbelaez2010contour].
  • Figure 5: Visualization of progressive non-edge pruning for SED-SViT (top) and SED-ViT (bottom). Most pruned tokens lie in smooth background regions, while tokens near object boundaries are largely preserved, especially for SED-SViT.
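The pruning mechanism sketched in Figures 1 and 2 can be illustrated in a few lines: per-token edge scores are thresholded into a binary decision mask $\mathbf{M}^{(l)}$, high-confidence non-edge tokens are dropped before the next stage, and the kept tokens are later scattered back to form the recovered map $\tilde{\mathbf{Z}}^{(l)}$. The snippet below is a minimal NumPy sketch of that idea only; the function names, the per-stage threshold value, and the zero-fill recovery of pruned positions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def prune_non_edge_tokens(tokens, edge_scores, threshold):
    """Drop tokens whose edge score falls below the stage threshold.

    tokens:      (N, C) token features Z^(l) at pruning stage l
    edge_scores: (N,) per-token edge confidence in [0, 1]
    threshold:   stage-specific cutoff (assumed scalar here)
    Returns kept tokens, their original indices, and the mask M^(l).
    """
    mask = edge_scores >= threshold        # binary decision mask M^(l)
    kept = tokens[mask]                    # only likely-edge tokens proceed
    kept_idx = np.nonzero(mask)[0]         # remember original positions
    return kept, kept_idx, mask

def recover_feature_map(kept, kept_idx, n_tokens, fill=0.0):
    """Scatter kept tokens back to a full map Z~^(l); pruned
    positions are filled with a constant (zeros, an assumption)."""
    full = np.full((n_tokens, kept.shape[1]), fill, dtype=kept.dtype)
    full[kept_idx] = kept
    return full

# Toy example: 6 tokens, 2 channels; threshold 0.5 keeps 3 tokens.
tokens = np.arange(12, dtype=float).reshape(6, 2)
scores = np.array([0.9, 0.1, 0.6, 0.2, 0.8, 0.3])
kept, idx, mask = prune_non_edge_tokens(tokens, scores, 0.5)
full = recover_feature_map(kept, idx, n_tokens=6)
```

Applied stage by stage with progressively chosen thresholds, this yields the multi-stage behavior visualized in Figure 5, where smooth background tokens disappear early while boundary tokens survive.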