The complexity of accurate floating point computation
James Demmel
TL;DR
The paper addresses the problem of accurately and efficiently evaluating multivariate rational expressions, and of computing matrix factorizations (such as LU and the SVD) of matrices whose entries are given by such expressions. It introduces the notion of CAE (accurate and efficient) evaluability under three arithmetic models: the traditional model TM, in which $fl(a\otimes b)=(a\otimes b)(1+\delta)$ with $|\delta|\le\epsilon$; the long exponent model LEM; and the short exponent model SEM. The paper develops a framework linking CAE to factorizability and to minors, shows how determinant and minor computations underpin LU, SVD, and eigenvalue algorithms, and analyzes how these results differ across the three models. It also invokes relative perturbation theory and states conjectures characterizing when CAE is possible in the TM, extending the discussion to the LEM and SEM via rational functions in factored form and sparse arithmetic. Key contributions include a classification of CAE feasibility in the TM, conditional results for totally positive (TP) matrices and generalized Vandermonde structures, and conjectures about relative perturbation theory and the role of minors in CAE. The work clarifies fundamental limits of accurate floating point computation and informs the design of reliable numerical linear algebra algorithms under realistic floating point models, with implications for both theory and practice.
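As a quick sanity check of the TM hypothesis $fl(a\otimes b)=(a\otimes b)(1+\delta)$, $|\delta|\le\epsilon$, the sketch below verifies empirically that IEEE binary64 round-to-nearest multiplication satisfies the bound with $\epsilon=2^{-53}$, away from over/underflow. This is an illustration for intuition, not part of the paper; the helper name `tm_delta` is ours.

```python
# Empirical check of the Traditional Model (TM):
#   fl(a * b) = (a * b)(1 + delta),  |delta| <= eps = 2^-53,
# for IEEE binary64 round-to-nearest, absent over/underflow.
from fractions import Fraction
import random

EPS = Fraction(1, 2**53)  # unit roundoff for IEEE binary64

def tm_delta(a: float, b: float) -> Fraction:
    """Exact relative rounding error delta of one floating point multiply."""
    exact = Fraction(a) * Fraction(b)   # floats are rationals, so this is exact
    return Fraction(a * b) / exact - 1  # delta such that fl(a*b) = exact*(1+delta)

random.seed(0)
for _ in range(1000):
    a = random.uniform(1.0, 2.0)        # stay far from over/underflow
    b = random.uniform(1.0, 2.0)
    assert abs(tm_delta(a, b)) <= EPS
print("TM bound |delta| <= 2^-53 held for all samples")
```

Note that the LEM and SEM refine this model by accounting for the exponent range, which the check above deliberately avoids by sampling from $[1,2)$.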
Abstract
Our goal is to find accurate and efficient algorithms, when they exist, for evaluating rational expressions containing floating point numbers, and for computing matrix factorizations (like LU and the SVD) of matrices with rational expressions as entries. More precisely, {\em accuracy} means the relative error in the output must be less than one (no matter how tiny the output is), and {\em efficiency} means that the algorithm runs in polynomial time. Our goal is challenging because our accuracy demand is much stricter than usual.
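To see how strict the "relative error less than one" demand is, consider the rational expression $(1+x)-1$ evaluated at a tiny floating point number $x$: naive rounded evaluation returns 0, a relative error of exactly one, while the algebraically simplified form is exact. This sketch is ours, not an example from the paper.

```python
# Why "relative error < 1" is strict: naive evaluation of (1 + x) - 1
# loses ALL relative accuracy for tiny x, even though the expression is
# rational in the float input x.
from fractions import Fraction

def naive(x: float) -> float:
    return (1.0 + x) - 1.0          # 1 + x rounds to 1 when x is tiny

def accurate(x: float) -> float:
    return x                        # algebraic simplification is exact here

x = 1e-20                            # exact value of the expression is x itself
exact = Fraction(x)                  # x as an exact rational
rel_err = abs(Fraction(naive(x)) - exact) / abs(exact)
print(rel_err)                       # 1, i.e. 100% relative error
```

The point is that the tinier the true output, the more cancellation the algorithm must survive, which is why guaranteeing accuracy in this sense is so much harder than the usual backward-error goal.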
