Constraint Identification and Algorithm Stabilization for Degenerate Nonlinear Programs
Stephen J. Wright
TL;DR
This work tackles degeneracy in nonlinear programming, where the active constraint gradients may be linearly dependent and strict complementarity can fail. It develops a constraint-identification approach that partitions the active set into a weakly active part ${\mathcal B}_0$ and a strongly active part ${\mathcal B}_+$, using a multiplier-based estimator $\eta(z,\lambda)$ together with a sequence of linear programs that yield robust multiplier estimates; this information then feeds a multiplier-adjustment scheme within a stabilized SQP framework. The main result is local superlinear convergence under weakened conditions (no strict complementarity, no interior starting point, and no strengthened second-order conditions), achieved by combining active-set identification with stabilization. The findings suggest improved robustness and convergence rates for degenerate nonlinear programs, and the technique can be integrated into global-convergence frameworks such as merit-function, filter, or augmented Lagrangian methods, with potential extensions to interior-point and dual methods. These results build on Hager’s stabilized SQP analysis and related active-set identification work, offering practical pathways for handling degeneracy in large-scale applications.
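The partitioning step described above can be sketched in code. This is a minimal illustration only: it assumes a simple threshold rule that places constraint $i$ in ${\mathcal B}_+$ when its multiplier estimate satisfies $\lambda_i \ge \eta^{\sigma}$ for some exponent $\sigma \in (0,1)$; the function name, the exponent, and the rule itself are assumptions for exposition, not the paper's exact scheme.

```python
def partition_active_set(active, lam, eta, sigma=0.5):
    """Hypothetical sketch of active-set partitioning under degeneracy.

    Splits an estimated active set into weakly active (B_0) and strongly
    active (B_+) index sets by comparing each multiplier estimate against
    a power of eta(z, lambda), an estimate of the distance to the solution.

    active : list of indices of constraints estimated active at z
    lam    : multiplier estimates, one per index in `active`
    eta    : scalar optimality-residual / distance estimate (small near z*)
    sigma  : exponent in (0, 1) controlling the identification threshold
    """
    threshold = eta ** sigma
    # Strongly active: multiplier estimate is safely bounded away from zero
    # relative to the distance estimate.
    b_plus = [i for i, l in zip(active, lam) if l >= threshold]
    # Weakly active: active constraint whose multiplier estimate is tiny,
    # i.e. strict complementarity may fail there.
    b_zero = [i for i in active if i not in b_plus]
    return b_zero, b_plus


# Example: with eta = 1e-4 and sigma = 0.5, the threshold is 1e-2, so the
# constraint with multiplier 1e-6 is classified as weakly active.
b0, bp = partition_active_set([0, 1, 2], [0.9, 1e-6, 0.3], 1e-4)
# b0 == [1], bp == [0, 2]
```

In the paper, robust multiplier estimates for this test come from solving a sequence of linear programs rather than being given directly; the sketch only shows the classification once such estimates are in hand.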
Abstract
In the vicinity of a solution of a nonlinear programming problem at which both strict complementarity and linear independence of the active constraint gradients may fail to hold, we describe a technique for distinguishing weakly active from strongly active constraints. We show that this information can be used to modify the sequential quadratic programming algorithm so that it exhibits superlinear convergence to the solution under assumptions weaker than those made in previous analyses.
