A Fully Sparse Implementation of a Primal-Dual Interior-Point Potential Reduction Method for Semidefinite Programming

Gun Srijuntongsiri, Stephen A. Vavasis

TL;DR

This work presents a fully sparse solver for semidefinite programs by combining Fukuda et al.'s partial primal matrix approach with Nesterov–Nemirovskii's primal–dual potential reduction. The method uses a maximum-determinant positive definite completion to preserve self-concordance and employs reverse-mode-style automatic differentiation together with conjugate gradients to compute gradients and Hessian–vector products in time comparable to evaluating the barrier functions themselves, avoiding dense primal variables. For planar sparsity patterns, the approach achieves per-iteration time $O(n^{5/2})$ and space $O(n \log n)$, and empirical MAX-CUT experiments validate the efficiency gains and show a benefit from using four search directions. These results enable scalable SDP solving for large, sparse, and especially planar problems common in graph optimization and related domains.
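As a concrete illustration of the matrix-free ingredient, the sketch below (not the paper's code; the dense inverse and all names are illustrative stand-ins for the sparse factorizations the paper actually uses) shows why conjugate gradients suffices once Hessian–vector products are cheap: for the log-det barrier $f(S) = -\ln\det S$, the Hessian maps a direction $D$ to $S^{-1} D S^{-1}$, so CG can solve Hessian systems from products alone, without ever forming the $n^2 \times n^2$ Hessian.

```python
import numpy as np

def cg(apply_H, b, tol=1e-12, max_iter=1000):
    """Plain conjugate gradients for H x = b, given only x -> H x products."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Hp = apply_H(p)
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
n = 30
A = rng.standard_normal((n, n))
S = A @ A.T / n + np.eye(n)      # a well-conditioned SPD dual iterate
S_inv = np.linalg.inv(S)         # dense stand-in for the paper's sparse solves

def hess_vec(d):
    # Hessian of f(S) = -ln det S applied to D = unvec(d): H[D] = S^{-1} D S^{-1}
    D = d.reshape(n, n)
    return (S_inv @ D @ S_inv).ravel()

R = rng.standard_normal((n, n))
R = R + R.T                      # a symmetric right-hand side
D = cg(hess_vec, R.ravel()).reshape(n, n)
assert np.allclose(D, S @ R @ S)  # closed form: S^{-1} D S^{-1} = R  =>  D = S R S
```

Note that the operator $D \mapsto S^{-1} D S^{-1}$ is symmetric positive definite whenever $S$ is, which is exactly what CG requires.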

Abstract

In this paper, we show a way to exploit sparsity in the problem data in a primal-dual potential reduction method for solving a class of semidefinite programs. When the problem data is sparse, the dual variable is also sparse, but the primal one is not. To avoid working with the dense primal variable, we apply Fukuda et al.'s theory of partial matrix completion and work with partial matrices instead. The other place in the algorithm where sparsity should be exploited is in the computation of the search direction, where the gradient and the Hessian-matrix product of the primal and dual barrier functions must be computed in every iteration. By using an idea from automatic differentiation in backward mode, both the gradient and the Hessian-matrix product can be computed in time proportional to the time needed to compute the barrier functions of the sparse variables themselves. Moreover, the high space complexity that is normally associated with the use of automatic differentiation in backward mode can be avoided in this case. In addition, we suggest a technique to efficiently compute the determinant of the positive definite matrix completion that is required to compute primal search directions. A method for obtaining one of the primal search directions that minimizes the number of evaluations of the determinant of the positive definite completion is also proposed. We then implement the algorithm and test it on the problem of finding the maximum cut of a graph.
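One standard identity behind evaluating this determinant cheaply (shown here as a minimal sketch under the assumption of a chordal pattern with known maximal cliques $C_r$ and clique-tree separators $S_r$; it is not lifted from the paper's implementation) is that the maximum-determinant completion $\hat{X}$ satisfies $\ln\det\hat{X} = \sum_r \ln\det \bar{X}_{C_r C_r} - \sum_r \ln\det \bar{X}_{S_r S_r}$, so only specified entries are ever touched:

```python
import numpy as np

def logdet_spd(A):
    """ln det of a symmetric positive definite matrix via Cholesky."""
    return 2.0 * np.log(np.diag(np.linalg.cholesky(A))).sum()

def logdet_completion(X, cliques, separators):
    """ln det of the maximum-determinant PD completion of a chordal
    partial matrix, computed from its specified clique submatrices only."""
    sub = lambda idx: X[np.ix_(idx, idx)]
    return (sum(logdet_spd(sub(C)) for C in cliques)
            - sum(logdet_spd(sub(S)) for S in separators))

# Tridiagonal (bandwidth-1) pattern on 3 vertices: maximal cliques {0, 1}
# and {1, 2} with clique-tree separator {1}.  The unspecified (0, 2)
# entry never enters the computation; the zeros below are placeholders.
X = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
print(logdet_completion(X, cliques=[[0, 1], [1, 2]], separators=[[1]]))
# prints ln(3 * 3 / 2) = ln 4.5 ~ 1.5041
```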

Paper Structure

This paper contains 16 sections, 1 theorem, 32 equations, 2 figures, and 2 tables.

Key Result

Theorem 4.1

Let $G'=(V,F)$ be a chordal graph. Any partial symmetric matrix $\bar{X} \in \mathcal{S}^n(F,?)$ satisfying the property that $\bar{X}_{C_r C_r}$ is symmetric positive definite for each $r=1,2,\ldots,l$, where $\{ C_r \subseteq V : r = 1,2,\ldots,l\}$ denotes the family of maximal cliques of $G'$, can be completed to a symmetric positive definite matrix $\hat{X}$.
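To make the theorem concrete, the following minimal sketch (an illustrative example, not the paper's code) uses a tridiagonal, hence chordal, $3 \times 3$ pattern whose two maximal-clique submatrices are positive definite; a positive definite completion therefore exists, and for this pattern the maximum-determinant completion has a closed form that zeroes the corresponding entry of the inverse.

```python
import numpy as np

# Hypothetical 3x3 partial symmetric matrix with a tridiagonal (chordal)
# pattern: entry (0, 2) is unspecified.  The maximal cliques are {0, 1}
# and {1, 2}, and both clique submatrices [[2, 1], [1, 2]] are PD, so
# Theorem 4.1 guarantees a positive definite completion exists.
x02 = 1.0 * 1.0 / 2.0            # Schur-type fill X02 = X01 * X11^{-1} * X12,
                                 # the maximum-determinant choice for this pattern
X_hat = np.array([[2.0, 1.0, x02],
                  [1.0, 2.0, 1.0],
                  [x02, 1.0, 2.0]])

assert np.all(np.linalg.eigvalsh(X_hat) > 0)      # completion is PD
assert abs(np.linalg.inv(X_hat)[0, 2]) < 1e-12    # inverse vanishes at the
                                                  # unspecified position
```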

Figures (2)

  • Figure 1: Average CPU time to compute $\ln\det\hat{X}$ in the case that $\bar{X}$ is a banded matrix. Bandwidth is fixed to 3 while varying the number of vertices.
  • Figure 2: Average CPU time to compute $\ln\det\hat{X}$ in the case that $\bar{X}$ is a banded matrix. The quantity $n-p$ is fixed to 10 while varying the bandwidth.

Theorems & Definitions (1)

  • Theorem 4.1 (Grone et al.)