Adaptive Threshold-Driven Continuous Greedy Method for Scalable Submodular Optimization

Mohammadreza Rostami, Solmaz S. Kia

Abstract

Submodular maximization under matroid constraints is a fundamental problem in combinatorial optimization with applications in sensing, data summarization, active learning, and resource allocation. While the Sequential Greedy (SG) algorithm achieves only a $\frac{1}{2}$-approximation due to irrevocable selections, Continuous Greedy (CG) attains the optimal $\bigl(1-\frac{1}{e}\bigr)$-approximation via the multilinear relaxation, at the cost of a progressively dense decision vector that forces agents to exchange feature embeddings for nearly every ground-set element. We propose \textit{ATCG} (\underline{A}daptive \underline{T}hresholded \underline{C}ontinuous \underline{G}reedy), which gates gradient evaluations behind a per-partition progress ratio $\eta_i$, expanding each agent's active set only when current candidates fail to capture sufficient marginal gain, thereby directly bounding which feature embeddings are ever transmitted. Theoretical analysis establishes a curvature-aware approximation guarantee with effective factor $\tau_{\mathrm{eff}}=\max\{\tau,1-c\}$, interpolating between the threshold-based guarantee and the low-curvature regime where \textit{ATCG} recovers the performance of CG. Experiments on a class-balanced prototype selection problem over a subset of the CIFAR-10 animal dataset show that \textit{ATCG} achieves objective values comparable to those of the full CG method while substantially reducing communication overhead through adaptive active-set expansion.
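The gating idea in the abstract can be illustrated on a toy instance. The sketch below is an illustration only, not the paper's pseudocode: the coverage objective `COVER`, the partition blocks, the sampled-gradient estimator, the interpretation of the progress ratio $\eta_i$ as "current best gain over previous best gain", and the per-block rounding are all assumptions made for the example.

```python
import random

# Hypothetical toy instance: a monotone submodular coverage objective under a
# partition-matroid constraint (pick at most one element per block). The
# expansion rule below is an assumption modeled on the abstract's description
# of the progress ratio eta_i; the paper's exact rule may differ.
COVER = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}, 4: {5, 6}, 5: {0, 6}}
PARTITIONS = [[0, 1, 2], [3, 4, 5]]  # one block per agent

def f(S):
    """Monotone submodular coverage objective: number of items covered."""
    covered = set()
    for e in S:
        covered |= COVER[e]
    return len(covered)

def grad_estimate(x, e, rng, samples=200):
    """Sampled partial derivative of the multilinear extension F at x w.r.t. e:
    E[f(R + e) - f(R)], with R including each j != e independently w.p. x[j]."""
    total = 0.0
    for _ in range(samples):
        R = {j for j, p in x.items() if j != e and rng.random() < p}
        total += f(R | {e}) - f(R)
    return total / samples

def atcg(tau=0.3, steps=25, seed=0):
    rng = random.Random(seed)
    x = {e: 0.0 for e in COVER}
    active = [blk[:1] for blk in PARTITIONS]   # start with one candidate per agent
    prev_best = [None] * len(PARTITIONS)
    for _ in range(steps):
        step_dir = []
        for i, blk in enumerate(PARTITIONS):
            grads = {e: grad_estimate(x, e, rng) for e in active[i]}
            best = max(grads.values())
            # Progress check: expand the active set only when the current
            # candidates fail to capture a tau-fraction of the previous gain,
            # so gradients (and embeddings) for inactive elements stay local.
            if (prev_best[i] is not None and best < tau * prev_best[i]
                    and len(active[i]) < len(blk)):
                nxt = blk[len(active[i])]
                active[i].append(nxt)
                grads[nxt] = grad_estimate(x, nxt, rng)
                best = max(grads.values())
            prev_best[i] = best
            step_dir.append(max(grads, key=grads.get))
        for e in step_dir:                      # continuous-greedy step of size 1/steps
            x[e] = min(1.0, x[e] + 1.0 / steps)
    # Simple per-block rounding for the demo (not the paper's rounding scheme).
    rounded = [max(blk, key=lambda e: x[e]) for blk in PARTITIONS]
    return x, active, rounded
```

The fractional point stays in the matroid polytope because each agent adds mass $1/\text{steps}$ to a single element of its block per iteration; elements outside the active sets never have their gradients evaluated.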

Paper Structure

This paper contains 11 sections, 2 theorems, 40 equations, 5 figures, and 2 algorithms.

Key Result

Theorem III.1

Consider the continuous-time version of ATCG for a monotone submodular function $f:2^{\mathcal{P}}\to\mathbb{R}_{+}$, and let $F$ denote its multilinear extension over the matroid polytope $\mathcal{M}$. By construction of ATCG, the active-set update rule ensures that, at each iteration, the chosen ascent direction captures at least a $\tau$-fraction of the attainable marginal gain, for some $\tau\in(0,1]$ and all $t\in[0,1]$. Let $\mathbf{x}^\star\in\mathcal{M}$ be an optimal solution …
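For context, the multilinear extension referenced in the theorem is the standard one, and a $\tau$-fraction ascent condition of the kind ATCG enforces typically yields a $(1-e^{-\tau})$ factor. The statement above is truncated, so the display below is an inference about the form of the bound, not a quotation from the paper:

```latex
F(\mathbf{x}) \;=\; \mathbb{E}_{R\sim \mathbf{x}}\bigl[f(R)\bigr]
\;=\; \sum_{S\subseteq\mathcal{P}} f(S)\prod_{e\in S}x_e\prod_{e\notin S}(1-x_e),
```

```latex
\frac{\mathrm{d}}{\mathrm{d}t}F(\mathbf{x}(t)) \;\ge\; \tau\,\bigl(F(\mathbf{x}^\star)-F(\mathbf{x}(t))\bigr)
\quad\Longrightarrow\quad
F(\mathbf{x}(1)) \;\ge\; \bigl(1-e^{-\tau}\bigr)\,F(\mathbf{x}^\star),
```

with $\tau=1$ recovering the classical $(1-\tfrac{1}{e})$ guarantee of CG, consistent with Remark 2.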

Figures (5)

  • Figure 1: Illustration of the early-commitment effect of SG (left) and its mitigation via CG (right) in a 2D sensor placement task. Deployment points (solid circles) are selected from candidates (red $\times$) to cover a clustered data distribution (blue dots) under partition-matroid constraints, where each colored region represents the candidate set of the corresponding agent.
  • Figure 2: Inter-class mean RBF similarity matrix for the CIFAR animal subset, illustrating the coupling effect between the animal images.
  • Figure 3: Objective $f$ vs. iteration for CG and ATCG ($\tau{=}0.30$) on the CIFAR animal subset.
  • Figure 4: Cumulative bytes uploaded to the server for CG and ATCG.
  • Figure 5: Total active-set size $\sum_i|\mathcal{A}_i|$ vs. iteration for ATCG.

Theorems & Definitions (7)

  • Remark 1: Exact-gradient and continuous-time idealization
  • Theorem III.1: Performance guarantee of ATCG
  • Remark 2: Comparison with classical CG
  • Theorem III.2: Curvature-aware guarantee for ATCG
  • Remark 3: Communication efficiency under low curvature
  • Proof of Theorem III.1
  • Proof of Theorem III.2
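The curvature $c$ appearing in Theorem III.2 and in $\tau_{\mathrm{eff}}=\max\{\tau,1-c\}$ is, presumably, the standard total curvature of a monotone submodular function; the definition below is the conventional one and is stated here as background, not quoted from the paper:

```latex
c \;=\; 1 \;-\; \min_{e\in\mathcal{P}}
\frac{f\bigl(\mathcal{P}\bigr)-f\bigl(\mathcal{P}\setminus\{e\}\bigr)}{f\bigl(\{e\}\bigr)}
\;\in\;[0,1],
```

so that $c=0$ for modular (additive) objectives, where $\tau_{\mathrm{eff}}=1$ and ATCG recovers the performance of CG regardless of the threshold $\tau$, matching Remark 3.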