
On the Geometry of Graeffe Iteration

Gregorio Malajovich, Jorge P. Zubelli

TL;DR

The paper tackles robust root-modulus extraction for univariate polynomials by addressing instability in Graeffe iterations through Renormalized Graeffe Iteration, which operates in scaled polar coordinates to keep all quantities bounded. It develops a formal renormalization framework for iterative algorithms, proves probabilistic convergence with $O(d^2)$ time and $O(d)$ memory per iteration, and provides complexity bounds under random polynomial models, notably Kostlan distributions. The approach yields significant stability and scalability, with numerical evidence solving random polynomials up to degree $1000$ and insights from Newton diagrams into factor structure. Together, these results offer a practical, probabilistically sound alternative to classical Graeffe methods for estimating root moduli and pave the way for full root recovery via subsequent steps.

Abstract

A new version of the Graeffe algorithm for finding all the roots of univariate complex polynomials is proposed. It is obtained from the classical algorithm by a process analogous to renormalization of dynamical systems. This iteration is called Renormalized Graeffe Iteration. It is globally convergent, with probability 1. All quantities involved in the computation are bounded, once the initial polynomial is given (with probability 1). This implies remarkable stability properties for the new algorithm, thus overcoming known limitations of the classical Graeffe algorithm. If we start with a degree-$d$ polynomial, each renormalized Graeffe iteration costs $O(d^2)$ arithmetic operations, with memory $O(d)$. A probabilistic global complexity bound is given. The case of univariate real polynomials is briefly discussed. A numerical implementation of the algorithm presented herein allowed us to solve random polynomials of degree up to 1000.
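To make the object of the renormalization concrete: a classical Graeffe (Dandelin–Lobachevsky) step maps $p$ to a polynomial whose roots are the squares of the roots of $p$, via $q(x) = (-1)^d\, p(\sqrt{x})\, p(-\sqrt{x})$. The following minimal Python sketch of that classical step is ours, not code from the paper:

```python
def graeffe_step(a):
    """One classical Graeffe step.

    a: coefficients a[0..d] of p, where a[k] multiplies x^k.
    Returns the coefficients of q(x) = (-1)^d * p(sqrt(x)) * p(-sqrt(x)),
    whose roots are the squares of the roots of p.
    """
    d = len(a) - 1
    b = [0] * (d + 1)
    for k in range(d + 1):
        # Coefficient of x^k in q: (-1)^d * sum over i+j=2k of (-1)^j a_i a_j
        s = 0
        for i in range(max(0, 2 * k - d), min(2 * k, d) + 1):
            j = 2 * k - i
            s += (-1) ** j * a[i] * a[j]
        b[k] = (-1) ** d * s
    return b
```

For example, $p(x) = (x-2)(x-3)$ with coefficients `[6, -5, 1]` maps to `[36, -13, 1]`, i.e. $(x-4)(x-9)$. After $N$ steps, ratios of consecutive coefficients raised to the power $2^{-N}$ approximate root moduli; but iterating this step directly squares coefficient magnitudes each time, and that doubly exponential growth (which quickly overflows floating point) is precisely the instability the renormalized iteration removes.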

Paper Structure

This paper contains 7 sections, 13 theorems, 81 equations, and 6 figures.

Key Result

Theorem 1

There is a renormalization of the Graeffe iteration such that, if $f$ is a degree-$d$ polynomial, then with probability 1 (in a measure-theoretical sense) this renormalized Graeffe iteration produces $d+1$ sequences, each one converging to some $h_i$, such that $\log |\zeta_{i}| = h_i - h_{i+1}$, where the $\zeta_i$ are the roots of $f$.
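One way to see how such bounded sequences can arise is to keep each coefficient in a log-polar form and renormalize the log-magnitude by $2^{-N}$ at every step, so that only bounded quantities are ever stored. The sketch below mimics this idea in the spirit of the paper's scaled polar coordinates; it is an illustration under our own conventions, not the authors' exact iteration. Coefficient $k$ after $N$ steps is represented as $a_k^{(N)} = \exp(2^N r_k + i\theta_k)$, and the differences of consecutive limits $r_k$ recover the log-moduli of the roots (up to indexing conventions):

```python
import cmath
import math

def renormalized_graeffe(coeffs, iters):
    """Illustrative renormalized Graeffe iteration (a sketch, not the
    paper's exact formulation).  After N steps, coefficient k is stored
    as a_k^(N) = exp(2^N * r[k] + i*th[k]); the stored r[k], th[k] stay
    bounded even though the true coefficients grow doubly exponentially.
    Returns the list r, whose consecutive differences approximate the
    log-moduli of the roots."""
    d = len(coeffs) - 1
    r = [math.log(abs(c)) if c else -math.inf for c in coeffs]
    th = [cmath.phase(c) for c in coeffs]
    scale = 1.0                      # current value of 2^N
    for _ in range(iters):
        new_r, new_th = [0.0] * (d + 1), [0.0] * (d + 1)
        for k in range(d + 1):
            # q_k = (-1)^d * sum over i+j=2k of (-1)^j a_i a_j,
            # evaluated in log-polar form with the dominant term factored out
            pairs = [(i, 2 * k - i)
                     for i in range(max(0, 2 * k - d), min(2 * k, d) + 1)]
            m = max(r[i] + r[j] for i, j in pairs)   # dominant log-magnitude
            if m == -math.inf:
                new_r[k], new_th[k] = -math.inf, 0.0
                continue
            s = 0j
            for i, j in pairs:
                if r[i] + r[j] == -math.inf:
                    continue
                phase = th[i] + th[j] + (math.pi if j % 2 else 0.0)
                s += cmath.exp(scale * (r[i] + r[j] - m) + 1j * phase)
            if d % 2:
                s = -s               # the (-1)^d sign only affects the phase
            # renormalize: express the new log-magnitude at scale 2^(N+1)
            new_r[k] = m / 2 + (math.log(abs(s)) / (2 * scale)
                                if s else -math.inf)
            new_th[k] = cmath.phase(s)
        r, th, scale = new_r, new_th, 2 * scale
    return r
```

For $p(x) = (x-2)(x-3)$, i.e. `renormalized_graeffe([6, -5, 1], 6)`, the differences `r[1] - r[2]` and `r[0] - r[1]` converge rapidly to $\log 3$ and $\log 2$, the log-moduli of the two roots, while every stored quantity remains of moderate size, which is the stability property the theorem guarantees.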

Figures (6)

  • Figure 1: Examples of Newton diagrams
  • Figure 2: Zeros of a random polynomial
  • Figure 3: Timing for 100 random real polynomials
  • Figure 4: Timing for 100 random complex polynomials
  • Figure 5: Separation for 100 random real polynomials
  • ...and 1 more figure

Theorems & Definitions (30)

  • Theorem 1
  • Theorem 2
  • Example 1
  • Definition 1
  • Remark 1
  • Remark 2
  • Definition 2
  • Definition 3
  • Lemma 1
  • Proof of Lemma \ref{lemE}
  • ...and 20 more