A Monte Carlo algorithm for efficient large matrix inversion
L. A. Garcia-Cortes, C. Cabrillo
TL;DR
The paper tackles large, potentially non-Hermitian matrix inversion by introducing the Correlated Chains (CC) Monte Carlo algorithm, which updates two coupled vectors $\mathbf{z}$ and $\mathbf{w}$ so that $E(\mathbf{z}\mathbf{w}^{\dagger})=\mathbf{C}^{-1}$. It formalizes update rules with a Gauss-Seidel–type partition and proves convergence under $sp(\mathbf{T})<1$ and $sp(\mathbf{S})<1$, with burn-in determined via a coupling method and error quantified by an effective sample length. Numerical tests on real asymmetric and complex non-Hermitian matrices from genetics and lattice QCD show CC is roughly eight times faster than stochastic estimation while maintaining accuracy. The work highlights CC as a practical, scalable alternative for inverting very large matrices under memory constraints and non-Hermitian settings, with broad applicability in physics and statistics.
Abstract
This paper introduces a new Monte Carlo algorithm to invert large matrices. It is based on simultaneous coupled draws of two random vectors whose covariance is the required inverse. The method can be regarded as a generalization of a previously reported algorithm for Hermitian matrix inversion based on a single draw; using two draws extends the inversion to non-Hermitian matrices. Both the conditions for convergence and the rate of convergence are similar to those of the Gauss-Seidel algorithm. Results are presented for two examples: a real non-symmetric matrix arising in quantitative genetics and a complex non-Hermitian matrix relevant to lattice QCD. Compared with other Monte Carlo algorithms, the method achieves a large reduction in processing time, running eight times faster in the examples studied.
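The core identity behind this family of methods is that the sample covariance of a suitably constructed Markov chain estimates $\mathbf{C}^{-1}$. The sketch below illustrates only the earlier single-draw special case the paper generalizes: for a symmetric positive definite $\mathbf{C}$, a Gibbs sampler targeting the Gaussian with precision matrix $\mathbf{C}$ produces draws $\mathbf{x}$ with $E(\mathbf{x}\mathbf{x}^{\top})=\mathbf{C}^{-1}$, and each sweep touches the matrix row by row in a Gauss-Seidel-like fashion. The $4\times 4$ test matrix, the sweep count, and the burn-in length are illustrative choices, not values from the paper, and this is not the two-chain CC algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small symmetric positive definite test matrix.
C = np.array([[4.0, 1.0, 0.0, 0.5],
              [1.0, 3.0, 0.5, 0.0],
              [0.0, 0.5, 5.0, 1.0],
              [0.5, 0.0, 1.0, 4.0]])

def gibbs_inverse(C, sweeps=50000, burn_in=1000):
    """Estimate C^{-1} as the sample second moment of a Gibbs chain
    targeting the zero-mean Gaussian with precision matrix C."""
    n = C.shape[0]
    x = np.zeros(n)
    acc = np.zeros((n, n))
    count = 0
    for k in range(sweeps):
        # One Gauss-Seidel-like sweep: update each coordinate in turn
        # from its full conditional N(mean_i, 1 / C[i, i]).
        for i in range(n):
            cond_mean = -(C[i] @ x - C[i, i] * x[i]) / C[i, i]
            x[i] = cond_mean + rng.standard_normal() / np.sqrt(C[i, i])
        if k >= burn_in:
            acc += np.outer(x, x)
            count += 1
    return acc / count

approx = gibbs_inverse(C)
exact = np.linalg.inv(C)
print(np.max(np.abs(approx - exact)))  # Monte Carlo error, small for long chains
```

Note that the sampler only ever reads single rows of `C`, which is what makes this approach attractive under the memory constraints mentioned in the TL;DR; the paper's contribution is the coupled two-vector construction that removes the Hermitian restriction.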
