
Finite-Time Convergence Guarantees for Time-Parallel Methods

Giancarlo Antonino Antonucci, Raphael Andreas Hauser, Debasmita Samaddar, James Buchanan

Abstract

Time-parallel algorithms, such as Parareal, are well-understood for linear problems, but their convergence analysis for nonlinear, chaotic systems remains limited. This paper introduces a new theoretical framework for analysing time-decomposition methods as contraction mappings that converge in a finite number of iterations. We derive a finite-time guarantee linking the initial error, convergence rate, and iteration count, defined via a geometric outer--inner-ball condition. We apply this framework to Parareal, deriving explicit estimates for the convergence factor $β$ on nonlinear problems and showing it scales as $\mathcal{O}(h^2)$ when the macroscopic time grid is uniformly refined. Further, we address the failure of standard convergence criteria in chaotic regimes by introducing a proximity function. This chaos-aware criterion weighs solution discontinuities by the system's Lyapunov exponent (or the solver's Lipschitz constant), allowing the algorithm to converge to the correct statistical attractor without enforcing futile pointwise accuracy on divergent trajectories. Numerical experiments on the Logistic, Lorenz, and Lorenz-96 systems demonstrate that this approach decouples the iteration count from the total simulation time. By isolating the intrinsic mathematical bounds from hardware-dependent overheads, we establish that the method is strictly algorithmically scalable.
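To make the abstract's two ingredients concrete, here is a minimal sketch of a Parareal predictor-corrector sweep with a Lyapunov-weighted stopping check. Everything in it is an illustrative assumption rather than the paper's exact setup: the test equation $u' = -u$, the Euler-based `coarse`/`fine` solvers, and the weight $e^{-\lambda t}$ standing in for the paper's proximity function. With `lam = 0` the check reduces to the standard unweighted criterion.

```python
import numpy as np

def coarse(u, t0, t1):
    """Coarse solver G: one forward-Euler step for u' = -u (illustrative)."""
    return u + (t1 - t0) * (-u)

def fine(u, t0, t1, m=100):
    """Fine solver F: m forward-Euler substeps over [t0, t1]."""
    h = (t1 - t0) / m
    for _ in range(m):
        u = u + h * (-u)
    return u

def parareal(u0, T, N, lam=0.0, tol=1e-8, kmax=50):
    """Parareal with an illustrative Lyapunov-weighted stopping check."""
    t = np.linspace(0.0, T, N + 1)
    U = np.empty(N + 1)
    U[0] = u0
    for n in range(N):                      # initial sequential coarse sweep
        U[n + 1] = coarse(U[n], t[n], t[n + 1])
    for k in range(1, kmax + 1):
        # These two loops are independent across n: the parallelisable part.
        Fo = np.array([fine(U[n], t[n], t[n + 1]) for n in range(N)])
        Go = np.array([coarse(U[n], t[n], t[n + 1]) for n in range(N)])
        Un = np.empty_like(U)
        Un[0] = u0
        for n in range(N):                  # predictor-corrector update
            Un[n + 1] = coarse(Un[n], t[n], t[n + 1]) + Fo[n] - Go[n]
        # Weighted proximity check (assumed form): down-weight late-time
        # discrepancies by the accumulated Lyapunov growth exp(lam * t),
        # since pointwise accuracy there is unattainable in a chaotic flow.
        err = np.max(np.exp(-lam * t[1:]) * np.abs(Un[1:] - U[1:]))
        U = Un
        if err < tol:
            return U, k
    return U, kmax

U, k = parareal(u0=1.0, T=1.0, N=10)
print(k, U[-1])   # converges after a few sweeps; U[-1] ≈ exp(-1)
```

On a dissipative problem like this, the successive-iterate difference shrinks rapidly and the loop exits long before the worst-case bound of $N$ sweeps; the Lyapunov weight only changes behaviour for genuinely chaotic dynamics.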

Paper Structure

This paper contains 22 sections, 5 theorems, 31 equations, 3 figures, and 3 tables.

Key Result

Proposition 2.1

Assume that a time-parallel algorithm given by the iterative scheme in eq:iteration has the property that, starting from any initial guess $U^0$, it recovers the fine solution exactly after at most $N$ iterations. Then $U^*$ is the unique fixed point of $v$.
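The finite-termination property this proposition relies on can be checked numerically in a few lines: on $N$ sub-intervals, Parareal reproduces the sequential fine solution after at most $N$ correction sweeps, regardless of the coarse solver. The scalar test problem $u' = -u$ and the Euler-based solvers below are assumptions made purely for the demonstration.

```python
# Minimal check of Parareal's finite-termination property: after N
# correction sweeps on N sub-intervals, the iterate matches the
# sequential fine solution to round-off. Solvers are illustrative.

def G(u, h):               # coarse solver: one Euler step for u' = -u
    return u + h * (-u)

def F(u, h, m=50):         # fine solver: m Euler substeps
    for _ in range(m):
        u = u + (h / m) * (-u)
    return u

N, T, u0 = 4, 2.0, 1.0
h = T / N

# Sequential fine reference solution.
ref = [u0]
for _ in range(N):
    ref.append(F(ref[-1], h))

# Parareal: coarse initial guess, then exactly N correction sweeps.
U = [u0]
for _ in range(N):
    U.append(G(U[-1], h))
for _ in range(N):
    Fo = [F(U[n], h) for n in range(N)]
    Go = [G(U[n], h) for n in range(N)]
    Un = [u0]
    for n in range(N):
        Un.append(G(Un[n], h) + Fo[n] - Go[n])
    U = Un

max_err = max(abs(U[n] - ref[n]) for n in range(N + 1))
print(max_err)   # agrees with the fine solution to round-off
```

The mechanism is inductive: $U^k_0 = u_0$ always, and once $U^k_{n-1}$ equals the fine solution the coarse terms in the correction cancel, making $U^k_n = F(U^{k-1}_{n-1})$ exact as well; hence exactness propagates one sub-interval per sweep.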

Figures (3)

  • Figure 1: Structure of the basin of attraction for the Lorenz-63 system. The plot shows a 2D cross-section of the 3D phase space (the first coordinate of $U_n$, here). The colour map denotes the value of $\|U_n - \mathcal{F}(U_{n-1})\|$. As the integration time step increases, the basin develops a complex, fractal-like structure, highlighting the practical difficulty of guaranteeing convergence for arbitrary initial guesses.
  • Figure 2: Number of Parareal iterations $K$ to convergence for the Lorenz system, comparing the standard check (blue) with the weighted proximity functions. Here, (left) $\xi = 10$, (right) $\xi = 100$. (top) $T$ increasing, $N$ fixed; (middle) $T$ fixed, $N$ increasing; (bottom) $T$ increasing linearly with $N$. In all scenarios, the weighted proximity functions maintain a small and nearly constant number of iterations, effectively decoupling the algorithmic cost from the problem scale. Note that the weighted curves occasionally snap back to the standard check (visible on the right); this numerical artefact occurs when a coarse update serendipitously lands close enough to the fine trajectory to briefly satisfy the strict unweighted tolerance before chaotic divergence resumes.
  • Figure 3: Effective serial work $K \Delta T$, which represents the sequential bottleneck of the Parareal algorithm. Here, (left) $\xi = 10$, (right) $\xi = 100$. (top) $T$ increasing, $N$ fixed; (middle) $T$ fixed, $N$ increasing; (bottom) $T$ increasing linearly with $N$. Provided the coarse solver is sufficiently accurate (left), the weighted methods ensure the serial portion of the algorithm does not grow with the overall problem size. However, as seen on the right, aggressive coarsening ($\xi=100$) triggers the numerical snap-back artefact, causing the sequential bottleneck to revert to the linear growth of the standard check.

Theorems & Definitions (12)

  • Proposition 2.1
  • Proof 1
  • Definition 3.1: Q-convergence
  • Theorem 3.2: Sufficient Conditions for Convergence
  • Theorem 4.1
  • Proof 2
  • Theorem 4.2
  • Proof 3
  • Remark 4.3
  • Remark 4.4
  • ...and 2 more