Low-Complexity Algorithm for Stackelberg Prediction Games with Global Optimality

Tong Wei, Yangjie Xu, Xinlin Wang, Pin-Han Ho, Bhavani Shankar M. R., Radu State, Björn Ottersten

Abstract

Stackelberg prediction games (SPGs) model strategic data manipulation in adversarial learning via a leader--follower interaction between a learner and a self-interested data provider, leading to challenging bilevel optimization problems. Focusing on the least-squares setting (SPG-LS), recent work shows that the bilevel program admits an equivalent spherically constrained least-squares (SCLS) reformulation, which avoids costly conic programming and enables scalable algorithms. In this paper, we develop a simple and efficient alternating direction method of multipliers (ADMM) based solver for the SCLS problem. By introducing a consensus splitting that separates the quadratic objective from the spherical constraint, we obtain an augmented Lagrangian formulation with closed-form updates: the primal quadratic step reduces to solving a fixed shifted linear system, the constraint step is a projection onto the unit sphere, and the dual step is a lightweight scaled ascent. The resulting method has low per-iteration complexity and allows pre-factorization of the constant system matrix for substantial speedups. Experiments demonstrate that the proposed ADMM approach achieves competitive solution quality with significantly improved computational efficiency compared with existing global solvers for SCLS, particularly in sparse and high-dimensional regimes.
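The three closed-form updates described in the abstract can be sketched as follows. This is a minimal illustration of a consensus-splitting ADMM for a spherically constrained least-squares problem $\min_{\|\mathbf{r}\|=1} \|\mathbf{A}\mathbf{r}-\mathbf{b}\|^2$, not the paper's implementation; the function name, variable names, and default parameters are our own choices.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def scls_admm(A, b, rho=1.0, iters=300):
    """Illustrative ADMM sketch for  min ||A r - b||^2  s.t. ||r|| = 1.

    Consensus splitting r = z: the quadratic objective keeps r,
    the spherical constraint (indicator of the unit sphere) keeps z.
    """
    n = A.shape[1]
    # Pre-factorize the constant shifted system A^T A + rho I once;
    # every primal step then reuses this Cholesky factorization.
    chol = cho_factor(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    z = np.zeros(n)
    z[0] = 1.0                 # feasible start on the unit sphere
    u = np.zeros(n)            # scaled dual variable
    for _ in range(iters):
        # Quadratic step: solve the fixed shifted linear system.
        r = cho_solve(chol, Atb + rho * (z - u))
        # Constraint step: project r + u onto the unit sphere.
        v = r + u
        z = v / np.linalg.norm(v)
        # Dual step: lightweight scaled ascent.
        u = u + r - z
    return z
```

The pre-factorization is what keeps the per-iteration cost low: the Cholesky factor is computed once, so each iteration costs only two triangular solves, a normalization, and a vector update.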

Paper Structure

This paper contains 31 sections, 2 theorems, 43 equations, 6 figures, 10 tables, 2 algorithms.

Key Result

Theorem 3.1

The function $f(\mathbf{r})$ is closed, smooth, bounded, and convex. Then, strong duality holds for the non-convex problem problem_ADMM. $\blacktriangleleft$
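For orientation, the closed-form updates described in the abstract can be written out as below; the notation ($\mathbf{A}$, $\mathbf{b}$, penalty $\rho$, scaled dual $\mathbf{u}$, consensus copy $\mathbf{z}$) is ours and may differ from the paper's.

```latex
\begin{aligned}
\mathbf{r}^{k+1} &= \bigl(\mathbf{A}^{\top}\mathbf{A} + \rho \mathbf{I}\bigr)^{-1}
                   \bigl(\mathbf{A}^{\top}\mathbf{b} + \rho(\mathbf{z}^{k} - \mathbf{u}^{k})\bigr)
                   && \text{(fixed shifted linear system)} \\
\mathbf{z}^{k+1} &= \frac{\mathbf{r}^{k+1} + \mathbf{u}^{k}}{\bigl\|\mathbf{r}^{k+1} + \mathbf{u}^{k}\bigr\|}
                   && \text{(projection onto the unit sphere)} \\
\mathbf{u}^{k+1} &= \mathbf{u}^{k} + \mathbf{r}^{k+1} - \mathbf{z}^{k+1}
                   && \text{(scaled dual ascent)}
\end{aligned}
```

Since $\mathbf{A}^{\top}\mathbf{A} + \rho \mathbf{I}$ is constant across iterations, it can be factorized once up front, which is the source of the speedups the abstract mentions.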

Figures (6)

  • Figure 1: Illustration of Leader–Follower Interaction in Stackelberg Prediction Games Framework.
  • Figure 2: Comparison of different algorithms on the building dataset. The left and right plots correspond to $\mathcal{A}_{\mathrm{modest}}$ and $\mathcal{A}_{\mathrm{severe}}$, respectively.
  • Figure 3: Convergence curves on the wine dataset under $\mathcal{A}_{\mathrm{modest}}$ for different values of $\rho$.
  • Figure 4: Comparison of different algorithms on the wine dataset. The left and right plots correspond to $\mathcal{A}_{\mathrm{modest}}$ and $\mathcal{A}_{\mathrm{severe}}$, respectively.
  • Figure 5: Comparison of different algorithms on the insurance dataset. The left and right plots correspond to $\mathcal{A}_{\mathrm{modest}}$ and $\mathcal{A}_{\mathrm{severe}}$, respectively.
  • ...and 1 more figure

Theorems & Definitions (10)

  • Theorem 3.1
  • proof
  • Remark 3.2
  • Proposition 3.3
  • proof
  • Remark 3.4
  • Remark 3.5: Cholesky versus explicit matrix inversion
  • Remark 3.6: Decomposition-based implementation
  • proof
  • proof