Computing sharp and scalable bounds on errors in approximate zeros of univariate polynomials
P. H. D. Ramakrishna, Sudebkumar Prasant Pal, Samir Bhalla, Hironmay Basu, Sudhir Kumar Singh
TL;DR
The paper addresses the problem of bounding errors in approximate zeros of univariate polynomials by developing an a posteriori bound based on Rouché's theorem, computed via nonlinear optimization. It introduces two algorithms: Algorithm I performs a fixed-point-style search starting from $q(0)$ to find a radius $r$ with $r > q(r)$, and Algorithm II accelerates convergence with Newton-Raphson iterations on the auxiliary function $f(r) = r - q(r)$; both rely on high-precision LEDA computations. The resulting bounds are sharp and scalable: they tighten as the approximations improve and the precision increases, handle polynomials with closely spaced zeros, and fit naturally into iterative zero-finding workflows. The approach provides a robust framework for certifying approximate zeros and could extend to more complex root structures and high-degree polynomials in computational mathematics and geometric applications.
Abstract
There are several numerical methods for computing approximate zeros of a given univariate polynomial. In this paper, we develop a simple and novel method for determining sharp upper bounds on errors in approximate zeros of a given polynomial using Rouché's theorem from complex analysis. We compute the error bounds using nonlinear optimization. Our bounds are scalable in the sense that we compute sharper error bounds for better approximations of zeros. We compute our bounds robustly using high-precision arithmetic based on the LEDA/real floating-point filter.
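The radius search described in the TL;DR can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formulation: the specific choice of $q(r)$ below is a standard Taylor-remainder dominance bound derived from Rouché's theorem (a true zero lies within distance $r$ of the approximation $z_0$ once $|p'(z_0)|\,r$ exceeds $|p(z_0)| + \sum_{k\ge 2} |p^{(k)}(z_0)/k!|\,r^k$, i.e. once $r > q(r)$), and the tolerances and inflation factor are assumptions. It also assumes $z_0$ approximates a simple zero, so $p'(z_0) \ne 0$, and uses ordinary double precision in place of the paper's LEDA/real arithmetic.

```python
# Sketch of the Rouché-based radius certification (assumed q, not the paper's exact one).
import math

def taylor_coeffs(coeffs, z0):
    """Taylor coefficients b_k = p^(k)(z0)/k! via repeated synthetic division.
    coeffs is ascending: p(z) = sum coeffs[k] * z**k."""
    c = [complex(x) for x in coeffs]
    b = []
    while c:
        m = len(c) - 1
        s = [0j] * m
        for k in range(m, 0, -1):
            s[k - 1] = c[k] + z0 * (s[k] if k < m else 0j)
        b.append(c[0] + z0 * (s[0] if s else 0j))  # remainder of division by (z - z0)
        c = s
    return b

def make_q(coeffs, z0):
    """Build q(r) = (|p(z0)| + sum_{k>=2} |b_k| r^k) / |p'(z0)| and its derivative.
    Assumes z0 approximates a simple zero, so p'(z0) != 0."""
    b = taylor_coeffs(coeffs, z0)
    a0, a1 = abs(b[0]), abs(b[1])
    tail = [abs(x) for x in b[2:]]
    q = lambda r: (a0 + sum(c * r**k for k, c in enumerate(tail, 2))) / a1
    dq = lambda r: sum(k * c * r**(k - 1) for k, c in enumerate(tail, 2)) / a1
    return q, dq

def _inflate(r, q):
    # Nudge r just past the fixed point so the strict condition r > q(r) holds.
    for _ in range(60):
        if r > q(r):
            return r
        r *= 1.000001
    return None  # could not certify (e.g. zeros too close for this q)

def radius_fixed_point(q, iters=100):
    """Algorithm I style: iterate r <- q(r) starting from q(0)."""
    r = q(0.0)
    for _ in range(iters):
        nxt = q(r)
        if abs(nxt - r) <= 1e-16 * (1.0 + r):
            r = nxt
            break
        r = nxt
    return _inflate(r, q)

def radius_newton(q, dq, iters=50):
    """Algorithm II style: Newton-Raphson on f(r) = r - q(r) from r = q(0)."""
    r = q(0.0)
    for _ in range(iters):
        r -= (r - q(r)) / (1.0 - dq(r))
    return _inflate(r, q)
```

For example, with $p(z) = z^2 - 2$ and the approximate zero $z_0 = 1.41421$, both searches return a radius near $3.6\times10^{-6}$ that indeed covers the true error $|\sqrt{2} - z_0|$; a better approximation $z_0$ yields a proportionally smaller certified radius, which is the scalability the abstract refers to.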
