Beyond Convexity: Proximal-Perturbed Lagrangian Methods for Efficient Functional Constrained Optimization
Sang Bin Moon, Jong Gwang Kim, Ashish Chandra, Christopher Brinton, Abolfazl Hashemi
TL;DR
This paper develops a primal-dual algorithmic framework built upon a novel form of the Lagrangian function, termed the Proximal-Perturbed Augmented Lagrangian, which enables the development of simple first-order algorithms that converge to a stationary solution under mild conditions.
Abstract
Non-convex functional constrained optimization problems have gained substantial attention in machine learning and data science, as they capture broad requirements that go beyond purely performance-centric objectives. An influential class of algorithms for functional constrained problems is the class of primal-dual methods, which have been extensively analyzed in the convex setting. Nonetheless, their efficacy for non-convex problems remains under-explored. This paper develops a primal-dual algorithmic framework for solving such non-convex problems. The framework is built upon a novel form of the Lagrangian function, termed the Proximal-Perturbed Augmented Lagrangian, which enables the development of simple first-order algorithms that converge to a stationary solution under mild conditions. Notably, we study this framework under both non-smoothness and smoothness of the constraint function and provide three key contributions: (i) a simple algorithm that does not require continuous adjustment of the penalty parameter; (ii) a non-asymptotic iteration complexity of $\widetilde{\mathcal{O}}(1/\epsilon^2)$; and (iii) extensive experimental results demonstrating the effectiveness of the proposed framework in terms of computational cost and performance, outperforming related approaches that use regularization (penalization) techniques and/or standard Lagrangian relaxation across diverse non-convex problems.
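To make the primal-dual template concrete, the following is a minimal sketch of a first-order primal-dual loop on an augmented Lagrangian with a fixed penalty parameter, in the spirit of contribution (i). It illustrates the generic update structure only; it is a standard gradient descent-ascent scheme on a convex toy problem, not the paper's Proximal-Perturbed Augmented Lagrangian, and the problem instance, step sizes, and penalty value are all illustrative assumptions.

```python
import numpy as np

# Toy instance (assumed for illustration; the paper targets non-convex f and g):
# minimize f(x) = (x0 - 2)^2 + (x1 - 1)^2
# subject to g(x) = x0 + x1 - 1 = 0.

def f_grad(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])

def g(x):
    return x[0] + x[1] - 1.0

def g_grad(x):
    return np.array([1.0, 1.0])

def primal_dual_al(x0, lam0=0.0, rho=1.0, eta=0.05, tau=0.05, iters=2000):
    """Gradient descent-ascent on the augmented Lagrangian
        L(x, lam) = f(x) + lam * g(x) + (rho / 2) * g(x)^2,
    keeping the penalty rho FIXED throughout (no continual penalty
    adjustment), which mirrors contribution (i) at a high level."""
    x, lam = np.array(x0, dtype=float), lam0
    for _ in range(iters):
        # Primal step: descend on L(., lam) at the current multiplier.
        grad_x = f_grad(x) + (lam + rho * g(x)) * g_grad(x)
        x = x - eta * grad_x
        # Dual step: ascend on the multiplier using the constraint residual.
        lam = lam + tau * g(x)
    return x, lam

x_star, lam_star = primal_dual_al([0.0, 0.0])
# Converges to the KKT point x = (1, 0) with multiplier lam = 2.
```

The fixed-`rho` design choice is the point of the sketch: classical penalty and augmented Lagrangian schemes often drive the penalty upward across iterations, whereas the framework described in the abstract avoids that continual adjustment.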
