Computing sharp and scalable bounds on errors in approximate zeros of univariate polynomials

P. H. D. Ramakrishna, Sudebkumar Prasant Pal, Samir Bhalla, Hironmay Basu, Sudhir Kumar Singh

TL;DR

The paper addresses the problem of bounding errors in approximate zeros of univariate polynomials by developing an a posteriori bound based on Rouché's theorem, computable via nonlinear optimization. It introduces two algorithms: Algorithm I uses a fixed-point-style search starting from $q(0)$ to find a radius $r$ with $r>q(r)$, and Algorithm II augments this with Newton–Raphson iterations on $p(r)=r-q(r)$ to accelerate convergence, all aided by high-precision LEDA computations. The results show sharp, scalable bounds that improve with better approximations and higher precision, effectively handling polynomials with closely spaced zeros and enabling integration into iterative zero-finding workflows. The approach provides a robust framework for certifying approximate zeros and could extend to more complex root structures and high-degree polynomials in computational mathematics and geometric applications.

Abstract

There are several numerical methods for computing approximate zeros of a given univariate polynomial. In this paper, we develop a simple and novel method for determining sharp upper bounds on errors in approximate zeros of a given polynomial using Rouché's theorem from complex analysis. We compute the error bounds using nonlinear optimization. Our bounds are scalable in the sense that we compute sharper error bounds for better approximations of zeros. We use high-precision computations with the LEDA/real floating-point filter to compute our bounds robustly.
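The fixed-point search described in the TL;DR can be sketched numerically. This is a minimal illustration, not the paper's implementation: it assumes the standard Rouché-style bound in which, for the shifted polynomial $g(z)=f(z+z_0)=\sum_k a_k z^k$, one takes $q(r)=(|a_0|+\sum_{k\ge 2}|a_k|r^k)/|a_1|$, so that $r>q(r)$ forces $|g(z)-a_1 z|<|a_1 z|$ on $|z|=r$ and hence a zero of $f$ within distance $r$ of the approximation $z_0$. The function names and the stopping rule are illustrative assumptions; the paper additionally accelerates the search with Newton–Raphson on $p(r)=r-q(r)$ (Algorithm II) and uses LEDA high-precision arithmetic rather than floating point.

```python
import numpy as np
from numpy.polynomial import Polynomial

def q_factory(coeffs, z0):
    """Build q(r) = (|a0| + sum_{k>=2} |a_k| r^k) / |a1| for g(z) = f(z + z0).

    `coeffs[k]` is the coefficient of z**k in f. (Illustrative helper,
    not from the paper.)
    """
    f = Polynomial(coeffs)
    a = f(Polynomial([z0, 1.0])).coef        # Taylor shift: coefficients of f(z + z0)
    a0, a1, tail = abs(a[0]), abs(a[1]), np.abs(a[2:])
    def q(r):
        powers = r ** np.arange(2, 2 + len(tail))
        return (a0 + tail @ powers) / a1
    return q

def error_bound(coeffs, z0, tol=1e-16, max_iter=200):
    """Fixed-point search r_{k+1} = q(r_k) starting from q(0) (Algorithm I sketch)."""
    q = q_factory(coeffs, z0)
    r = q(0.0)
    for _ in range(max_iter):
        r_next = q(r)
        if abs(r_next - r) < tol:
            # r is (numerically) a fixed point of q; any radius just above it
            # satisfies r > q(r), so a zero of f lies within r of z0.
            return r_next
        r = r_next
    raise RuntimeError("no certifying radius found (approximation too coarse?)")

# f(z) = z^2 - 2 with approximate zero z0 = 1.414
bound = error_bound([-2.0, 0.0, 1.0], 1.414)
true_error = abs(2 ** 0.5 - 1.414)
print(bound, true_error)   # the bound slightly exceeds the true error
```

In floating point the iteration stalls near the fixed point, which is why a stagnation tolerance stands in for the exact test $r>q(r)$; the paper's high-precision LEDA computations avoid this compromise.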

Paper Structure

This paper contains 12 sections, 5 theorems, and 8 equations.

Key Result

Theorem 1.1

(Rouché's theorem; see Ahlfors and Henrici, Vol. 1.) Suppose the functions $f(z)$ and $g(z)$ are analytic inside and on a simple closed curve $C$. If $f$ and $g$ have no zeros on $C$ and $|f(z)-g(z)|<|f(z)|$ for all $z$ on $C$, then the functions $f(z)$ and $g(z)$ have the same number of zeros inside $C$.
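A quick numerical sanity check of the theorem, using a textbook-style example that is not from the paper: take $g(z)=z^5+5z^3+1$ and $f(z)=5z^3$ on the unit circle $C$. There $|f(z)-g(z)|=|z^5+1|\le 2<5=|f(z)|$, so $g$ must have the same number of zeros as $f$ inside $C$, namely three.

```python
import numpy as np

# f(z) = 5 z^3, g(z) = z^5 + 5 z^3 + 1, C the unit circle.
theta = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
z = np.exp(1j * theta)
f = 5 * z**3
g = z**5 + 5 * z**3 + 1

# Rouche hypothesis |f - g| < |f| holds at every sample point on C
assert np.all(np.abs(f - g) < np.abs(f))

# Count zeros of g inside the unit circle; f = 5 z^3 has three there.
inside = int(np.sum(np.abs(np.roots([1, 0, 5, 0, 0, 1])) < 1))
print(inside)  # 3
```

The paper applies the theorem in this spirit: comparing the shifted polynomial against its dominant linear term on a small circle around an approximate zero certifies that a true zero lies inside.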

Theorems & Definitions (5)

  • Theorem 1.1
  • Theorem 2.1
  • Lemma 3.1
  • Theorem 3.2
  • Theorem 3.3