Smoothed Analysis of Algorithms: Why the Simplex Algorithm Usually Takes Polynomial Time
Daniel A. Spielman, Shang-Hua Teng
TL;DR
The paper introduces smoothed analysis and applies it to explain why the simplex method for linear programming performs well in practice. Inputs are modeled as Gaussian perturbations of arbitrary instances, and the authors analyze a two-phase shadow-vertex simplex algorithm, proving that its smoothed complexity is polynomial in the input dimension $d$, the number of constraints $n$, and the inverse perturbation magnitude $1/\sigma$. The core technical result is a bound on the expected size of the shadow (a two-dimensional projection) of a perturbed polytope. The analysis combines this geometric treatment with two-phase reductions ($LP'$ and $LP^{+}$) that handle feasibility and the effects of perturbation, and it leverages condition-number arguments and a detailed probabilistic concentration framework. The results give a principled explanation for the practical efficiency of the simplex method and open a path toward tighter smoothed-time bounds and extensions to broader perturbation models.
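In symbols, smoothed complexity is a max over inputs of an expectation over perturbations. One common formalization (the notation here is ours, with inputs normalized to unit norm and $T_{\mathcal{A}}$ denoting the algorithm's running time) is

$$
C_{\mathcal{A}}(n,\sigma) \;=\; \max_{\bar{x}\,:\,\|\bar{x}\|\le 1}\; \mathbb{E}_{g\sim\mathcal{N}(0,\sigma^{2}I)}\big[\,T_{\mathcal{A}}(\bar{x}+g)\,\big],
$$

and the paper's main theorem states that, for the shadow-vertex simplex method, this quantity is bounded by a polynomial in $n$, $d$, and $1/\sigma$.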
Abstract
We introduce the smoothed analysis of algorithms, which is a hybrid of the worst-case and average-case analysis of algorithms. In smoothed analysis, we measure the maximum over inputs of the expected performance of an algorithm under small random perturbations of that input. We measure this performance in terms of both the input size and the magnitude of the perturbations. We show that the simplex algorithm has polynomial smoothed complexity.
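The definition above suggests a simple numerical experiment. The sketch below is a hypothetical illustration, not the paper's construction: it uses SciPy's HiGHS dual-simplex solver and treats its reported iteration count as a rough proxy for simplex pivots, rather than the shadow-vertex pivot rule the paper analyzes. It perturbs a fixed constraint matrix with Gaussian noise of standard deviation $\sigma$ and averages the iteration count over trials.

```python
import numpy as np
from scipy.optimize import linprog

def smoothed_iterations(A_bar, b, c, sigma, trials=20, seed=None):
    """Estimate E[iterations] for max c.x s.t. (A_bar + sigma*G) x <= b,
    where G has i.i.d. standard Gaussian entries (the smoothed-analysis model)."""
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(trials):
        G = rng.standard_normal(A_bar.shape)   # i.i.d. N(0, 1) perturbation
        A = A_bar + sigma * G                  # perturbed instance
        # linprog minimizes, so negate c; 'highs-ds' selects a dual-simplex solver
        res = linprog(-c, A_ub=A, b_ub=b, bounds=(None, None), method="highs-ds")
        if res.success:
            counts.append(res.nit)             # solver iteration count
    return np.mean(counts) if counts else float("nan")

# A small random base instance with n constraints in d dimensions.
# b = 1 makes x = 0 feasible, and with n >> d random constraints the
# LP is bounded with high probability.
rng = np.random.default_rng(0)
n, d = 60, 10
A_bar = rng.standard_normal((n, d))
b = np.ones(n)
c = rng.standard_normal(d)

for sigma in (0.5, 0.1, 0.01):
    mean_nit = smoothed_iterations(A_bar, b, c, sigma, seed=1)
    print(f"sigma={sigma:5.2f}  mean iterations ~ {mean_nit:.1f}")
```

Taking the maximum of this estimate over a family of base instances $\bar{A}$ would approximate the smoothed complexity at a given $\sigma$; the paper's contribution is to bound that maximum analytically, by a polynomial in $n$, $d$, and $1/\sigma$.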
