Smoothed Analysis of Algorithms: Why the Simplex Algorithm Usually Takes Polynomial Time

Daniel A. Spielman, Shang-Hua Teng

TL;DR

The paper develops and applies smoothed analysis to explain why the simplex method for linear programming performs well in practice. By modeling inputs as Gaussian perturbations of arbitrary instances and analyzing a two-phase shadow-vertex simplex algorithm, the authors prove polynomial smoothed complexity in the input dimension $d$, the number of constraints $n$, and the inverse perturbation magnitude $1/\sigma$, via a foundational bound on the shadow size of perturbed polytopes. The analysis hinges on a geometric treatment of polytopes through shadow projections, with two-phase reductions ($LP'$ and $LP^{+}$) that manage feasibility and perturbation effects, and leverages condition-number arguments and a detailed probabilistic concentration framework. The results provide a principled explanation for the practical efficiency of the simplex method and establish a pathway for tighter smoothed-time bounds and potential extensions to broader perturbation models.

Abstract

We introduce the smoothed analysis of algorithms, which is a hybrid of the worst-case and average-case analysis of algorithms. In smoothed analysis, we measure the maximum over inputs of the expected performance of an algorithm under small random perturbations of that input. We measure this performance in terms of both the input size and the magnitude of the perturbations. We show that the simplex algorithm has polynomial smoothed complexity.
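The smoothed measure defined above — the maximum over inputs of the expected cost under small Gaussian perturbations — can be illustrated with a minimal Monte Carlo sketch. The cost function below is a hypothetical stand-in for an algorithm with a single pathological worst-case instance, not anything from the paper; it only shows how perturbation of magnitude $\sigma$ tames an adversarial input in expectation.

```python
import numpy as np

def cost(x):
    # Toy cost function (hypothetical): blows up near x = 0, standing in
    # for an adversarial instance on which an algorithm is slow.
    # Capped at 1e6 so the expectation is finite.
    return min(1.0 / max(abs(x), 1e-12), 1e6)

def smoothed_cost(x_bar, sigma, trials=10_000, seed=0):
    # Smoothed measure at the instance x_bar: the expected cost under
    # Gaussian perturbations of standard deviation sigma, estimated
    # by Monte Carlo sampling.
    rng = np.random.default_rng(seed)
    samples = x_bar + sigma * rng.standard_normal(trials)
    return float(np.mean([cost(x) for x in samples]))

worst = cost(0.0)                          # adversarial instance: hits the cap
smoothed = smoothed_cost(0.0, sigma=0.1)   # far smaller in expectation
```

The point of the sketch is the gap between `worst` and `smoothed`: the adversary can place the instance at the bad point, but the perturbation moves it away with high probability, so the expected cost collapses. The paper's theorem is the analogous (and much harder) statement for the shadow-vertex simplex method.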


Paper Structure

This paper contains 32 sections, 80 theorems, 373 equations, and 5 figures.

Key Result

Proposition 2.2.2

For a vector $\boldsymbol{\mathit{x}} \in \mathbb{R}^{d}$,
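The statement of the proposition is truncated in this extract. Since it is titled "Vector norms," it plausibly concerns the standard comparisons between the $\ell_1$, $\ell_2$, and $\ell_\infty$ norms on $\mathbb{R}^d$; the following generic check illustrates those relations (this is a standard fact, not necessarily the paper's exact claim):

```python
import numpy as np

# Standard norm comparisons on R^d (illustrative, not the paper's statement):
#   ||x||_inf <= ||x||_2 <= ||x||_1 <= sqrt(d) * ||x||_2 <= d * ||x||_inf
rng = np.random.default_rng(1)
d = 8
x = rng.standard_normal(d)

n1 = np.linalg.norm(x, 1)      # sum of absolute values
n2 = np.linalg.norm(x, 2)      # Euclidean norm
ninf = np.linalg.norm(x, np.inf)  # largest absolute entry

chain_holds = ninf <= n2 <= n1 <= np.sqrt(d) * n2 <= d * ninf
```

The middle inequality $\|x\|_1 \le \sqrt{d}\,\|x\|_2$ is Cauchy-Schwarz applied to $x$ and the all-ones vector; such norm comparisons are routine tools in the paper's probabilistic estimates.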

Figures (5)

  • Figure 1: A shadow of a polytope
  • Figure 2: In example (a), $\mathrm{optSimp} = \left\{\left\{\pmb{\mathit{a}}_{1}, \pmb{\mathit{a}}_{2}, \pmb{\mathit{a}}_{3} \right\} \right\}$. In example (b), $\mathrm{optSimp} = \left\{\left\{\pmb{\mathit{a}}_{1}, \pmb{\mathit{a}}_{2}, \pmb{\mathit{a}}_{3} \right\}, \left\{\pmb{\mathit{a}}_{2}, \pmb{\mathit{a}}_{3}, \pmb{\mathit{a}}_{4} \right\} \right\}$. In example (c), $\mathrm{optSimp} = \emptyset$.
  • Figure 3: The change of variables in Lemma \ref{lem:distanceInPlane}.
  • Figure 4: The change of variables in Lemma \ref{lem:aspectRatio}.
  • Figure 5: $\Gamma_{\boldsymbol{\mathit{u}} , \boldsymbol{\mathit{v}}}$ can be understood as the projection through the origin from one plane onto the other.

Theorems & Definitions (93)

  • Definition 2.2.1: Vector Norms
  • Proposition 2.2.2: Vector norms
  • Definition 2.2.3: Matrix norm
  • Proposition 2.2.4: Properties of matrix norm
  • Definition 2.2.5: $s_{\min}(\cdot)$
  • Proposition 2.2.6: Properties of $s_{\min}(\cdot)$
  • Proposition 2.3.1: Average $\leq$ maximum
  • Proposition 2.3.2: Expectation on sub-domain
  • Lemma 2.3.3: Comparing expectations
  • Lemma 2.3.4: Similar distributions
  • ...and 83 more