
Faster Inversion and Other Black Box Matrix Computations Using Efficient Block Projections

Wayne Eberly, Mark Giesbrecht, Pascal Giorgi, Arne Storjohann, Gilles Villard

TL;DR

This work resolves the conjecture that efficient block projections exist for sparse linear systems by proving their existence over sufficiently large fields and leveraging a structured block Krylov framework. It introduces a fast inverse-factorization based on a block-Krylov decomposition and a block-Hankel matrix, enabling sub-cubic inversion costs and applicability to black-box matrices via Las Vegas randomized methods. The authors extend these techniques to compute nullspaces, ranks, and inverse-multiplications $A^{-1}M$, and apply them to sparse rational systems to obtain unconditional Las Vegas solvers for $A^{-1}b$ with favorable bit- and matrix-vector product complexities. They also discuss practical consequences for determinant and Smith form computations in the sparse regime. Overall, the paper delivers new theoretical guarantees and algorithmic tools that significantly improve asymptotic efficiency for exact sparse linear algebra and black-box matrix computations.

Abstract

Block projections have been used, in [Eberly et al. 2006], to obtain an efficient algorithm to find solutions for sparse systems of linear equations. A bound of $\tilde O(n^{2.5})$ machine operations is obtained, assuming that the input matrix can be multiplied by a vector with constant-sized entries in $\tilde O(n)$ machine operations. Unfortunately, the correctness of this algorithm depends on the existence of efficient block projections, which had only been conjectured. In this paper we establish the correctness of the algorithm from [Eberly et al. 2006] by proving the existence of efficient block projections over sufficiently large fields. We demonstrate the usefulness of these projections by deriving improved bounds for the cost of several matrix problems, considering, in particular, "sparse" matrices that can be multiplied by a vector using $\tilde O(n)$ field operations. We show how to compute the inverse of a sparse matrix over a field F using an expected number of $\tilde O(n^{2.27})$ operations in F. A basis for the null space of a sparse matrix, and a certification of its rank, are obtained at the same cost. An application to Kaltofen and Villard's baby-steps/giant-steps algorithms for the determinant and Smith form of an integer matrix yields algorithms requiring $\tilde O(n^{2.66})$ machine operations. The derived algorithms are all probabilistic of the Las Vegas type.

Paper Structure

This paper contains 7 sections, 11 theorems, 23 equations, 1 table.

Key Result

Theorem 2.1

If the leading $ks \times ks$ minor of $A$ is non-zero for $1\leq k\leq m$, then ${\mathcal{K}}_m({\mathcal{D}} A {\mathcal{D}},u) \in {\sf F} ^{n \times n}$ is invertible.
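The statement can be exercised numerically. The following Python sketch is a toy illustration of the construction in Theorem 2.1, not the paper's algorithm: the prime $p$, the dimensions $n$ and $s$, and all helper names are assumptions chosen for the demo. It builds the block Krylov matrix ${\mathcal{K}}_m({\mathcal{D}} A {\mathcal{D}}, u)$ over a small prime field, with ${\mathcal{D}}$ a random diagonal matrix and $u$ the first $s$ columns of the identity, and checks invertibility by Gaussian elimination mod $p$.

```python
# Toy illustration of Theorem 2.1 over F_p (illustrative names and sizes).
import random

P = 10007          # small prime; the theorem requires a sufficiently large field
n, s = 8, 2        # matrix dimension and block size
m = n // s         # number of Krylov blocks, so K_m(DAD, u) is n x n

def rank_mod_p(M, p=P):
    """Rank of a matrix over F_p via Gaussian elimination."""
    M = [[x % p for x in row] for row in M]
    rank, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        piv = next((r for r in range(rank, rows) if M[r][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], p - 2, p)          # modular inverse of the pivot
        M[rank] = [x * inv % p for x in M[rank]]
        for r in range(rows):
            if r != rank and M[r][c]:
                f = M[r][c]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

def matmul(X, Y, p=P):
    Yt = list(zip(*Y))
    return [[sum(a * b for a, b in zip(row, col)) % p for col in Yt]
            for row in X]

def leading_minors_ok(A):
    # Hypothesis of Theorem 2.1: the leading ks x ks minor of A is
    # non-zero for 1 <= k <= m, i.e. each leading submatrix has full rank.
    return all(rank_mod_p([row[:k * s] for row in A[:k * s]]) == k * s
               for k in range(1, m + 1))

def krylov_matrix(A, d, u):
    # B = D A D with D = diag(d); K = [u | Bu | ... | B^{m-1} u], n x n.
    D = [[d[i] if i == j else 0 for j in range(n)] for i in range(n)]
    B = matmul(matmul(D, A), D)
    cols, blk = [], u
    for _ in range(m):
        cols.extend(list(col) for col in zip(*blk))
        blk = matmul(B, blk)
    return [list(row) for row in zip(*cols)]

random.seed(0)
while True:  # sample A until the leading-minor hypothesis holds
    A = [[random.randrange(P) for _ in range(n)] for _ in range(n)]
    if leading_minors_ok(A):
        break

u = [[1 if j == i else 0 for j in range(s)] for i in range(n)]  # n x s block

# A random diagonal makes K_m(DAD, u) invertible with probability at least
# 1 - deg/|F| (Schwartz-Zippel); retry a few choices of d to be safe.
ok = False
for _ in range(5):
    d = [random.randrange(1, P) for _ in range(n)]
    if rank_mod_p(krylov_matrix(A, d, u)) == n:
        ok = True
        break
print(ok)
```

Since the theorem only guarantees invertibility for a generic diagonal, the sketch draws a few random diagonals rather than relying on a single sample.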

Theorems & Definitions (13)

  • Theorem 2.1
  • proof
  • Corollary 2.2
  • Corollary 2.3
  • Corollary 3.1
  • Theorem 4.1
  • Theorem 4.2
  • Remark 4.3
  • Corollary 4.4
  • Theorem 5.1
  • ...and 3 more