
Implicit Primal-Dual Interior-Point Methods for Quadratic Programming

Jon Arrizabalaga, Zachary Manchester

Abstract

This paper introduces a new method for solving quadratic programs using primal-dual interior-point methods. Instead of handling complementarity as an explicit equation in the Karush-Kuhn-Tucker (KKT) conditions, we ensure that complementarity is implicitly satisfied by construction. This is achieved by introducing an auxiliary variable and relating it to the duals and slacks via a retraction map. Specifically, we prove that the softplus function has favorable numerical properties compared to the commonly used exponential map. The resulting KKT system is guaranteed to be spectrally bounded, thereby eliminating the most pressing limitation of primal-dual methods: ill-conditioning near the solution. These attributes facilitate the solution of the underlying linear system, either by removing the need to compute factorizations at every iteration, enabling factorization-free approaches like indirect solvers, or allowing the solver to achieve high accuracy in low-precision arithmetic. Consequently, this novel perspective opens new opportunities for interior-point methods, especially for solving large-scale problems to high precision.

Paper Structure

This paper contains 17 sections, 2 theorems, 34 equations, and 6 figures.

Key Result

Theorem 1

Let $b_\mu : \mathbb{R} \rightarrow \mathbb{R}_+$ be a retraction map satisfying the properties in Definition 1 (Retraction map). Then, for any $\mu > 0$, the unique function that fulfills these conditions is the softplus function, given by $b_\mu(v) = \mu \log\!\left(1 + e^{v/\mu}\right)$.

Figures (6)

  • Figure 3: We present an implicit method for primal-dual interior-point optimization. Unlike the standard approach that enforces complementarity as an explicit algebraic constraint (red), our method satisfies it by construction (orange) using an auxiliary variable $v$ and a softplus retraction map $b_\mu(v)$ (green, blue). This structural shift guarantees a spectrally bounded KKT system, even in the vicinity of a solution. By eliminating the eigenvalue divergence of traditional methods, our approach enables reusing matrix factorizations, adopting factorization-free indirect solvers, and reaching high accuracy with low-precision arithmetic.
  • Figure 4: The softplus retraction map (left) and its derivative (right) for varying values of the barrier parameter $\mu$. As proven in Theorem 1, the softplus is specifically chosen because it is the unique retraction map that fulfills the criteria outlined in Definition 1. Notably, its derivative is strictly bounded between 0 and 1 (right). Such boundedness guarantees that the eigenvalue spectrum of the linear system stays within a compact set as the solution is approached. See Fig. 3 for a spatial illustration of how two softplus retraction maps implicitly enforce complementarity.
  • Figure 5: Fundamental numerical properties of the implicit representation evaluated on the synthetic toy problem. Column 1: Evolution of the primal iterates $x$ within the optimization landscape for our implicit approach (red circles) and the optimal solution (magenta). The bottom plot shows the corresponding dual-slack variables $(\lambda, s)$ for the four constraints. Columns 2--4, Top row: Eigenvalue spectrum, condition number, and iteration-to-iteration KKT matrix changes. Here, we compare the standard explicit matrix (blue), which diverges and experiences drastic changes near the solution, against our implicit matrix (red), which remains strictly bounded and smoothly varying. Columns 2--4, Bottom row: Evolution of the retraction map's derivative across iterations, illustrating how the auxiliary variables (color-coded by constraint) smoothly settle into their respective active or inactive states.
  • Figure 6: Evaluation of inexact Newton steps, reducing required factorizations by freezing the KKT matrix. Column 1: Synthetic problem iterates (top) and a zoom-in on the final steps (bottom). Red markers highlight explicitly computed factorizations (only 4 of 18 iterations), while the red, green, and yellow regions indicate the validity of the frozen factorizations at those steps. Columns 2--4: Performance on the Maros-Meszaros dataset [maros1999repository]. Top row: Duality gap reduction over time for different forcing factors $\theta$. Bottom row: Total number of matrix factorizations versus $\theta$. Higher forcing factors enable more aggressive matrix freezing, significantly accelerating overall convergence.
  • Figure 7: Evaluation of the MINRES iterative solver on the Maros-Meszaros dataset [maros1999repository] for the explicit (blue) and implicit (red) formulations. Top row: Eigenvalue spectrum across interior-point iterations. Middle row: Number of inner Krylov (MINRES) iterations required per interior-point step. Bottom row: Duality gap reduction versus total solve time. The implicit method's spectral boundedness prevents the iteration explosion and severe slowdowns observed in the explicit method.
  • ...and 1 more figure

Theorems & Definitions (5)

  • Definition 1: Retraction map
  • Theorem 1
  • Proof of Theorem 1
  • Theorem 2
  • Proof of Theorem 2