Achieving double-logarithmic precision dependence in optimization-based quantum unstructured search

Zhijian Lai, Dong An, Jiang Hu, Zaiwen Wen

Abstract

Grover's algorithm is a fundamental quantum algorithm that achieves a quadratic speedup for unstructured search problems of size $N$. Recent studies have reformulated this task as a maximization problem on the unitary manifold and solved it via linearly convergent Riemannian gradient ascent (RGA) methods, resulting in a complexity of $O(\sqrt{N}\log (1/\varepsilon))$. In this work, we adopt the Riemannian modified Newton (RMN) method to solve the quantum search problem. We show that, in the setting of quantum search, the Riemannian Newton direction is collinear with the Riemannian gradient in the sense that the Riemannian gradient is always an eigenvector of the corresponding Riemannian Hessian. As a result, without additional overhead, the proposed RMN method numerically achieves a quadratic convergence rate with respect to error $\varepsilon$, implying a complexity of $O(\sqrt{N}\log\log (1/\varepsilon))$, which is double-logarithmic in precision. Furthermore, our approach remains Grover-compatible, namely, it relies exclusively on the standard Grover oracle and diffusion operators to ensure algorithmic implementability, and its parameter update process can be efficiently precomputed on classical computers.
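
Concretely, the maximization described here can be written as $f(U) = \langle s | U^\dagger H U | s \rangle$ over $U \in \mathrm{U}(N)$, where $|s\rangle$ is the uniform superposition and $H = |t\rangle\langle t|$ projects onto the marked item $t$. The following is a minimal sketch of plain Riemannian gradient ascent on this cost with a generic exponential retraction; the cost and oracle conventions, the fixed unit step, and the dense-matrix retraction are illustrative assumptions, not the paper's Grover-compatible implementation.

```python
# Sketch: plain RGA for f(U) = <s| U^dag H U |s>, the probability of
# measuring the marked item t after applying U to |s>.
import numpy as np
from scipy.linalg import expm

n = 4                       # qubits (illustrative); N = 2^n
N = 2 ** n
t = 3                       # hypothetical marked index
H = np.zeros((N, N)); H[t, t] = 1.0              # oracle projector |t><t|
s = np.full(N, 1 / np.sqrt(N), dtype=complex)    # uniform superposition |s>
psi_s = np.outer(s, s.conj())                    # |s><s|

def cost(U):                 # f(U) = |<t|U|s>|^2
    return float(np.real(s.conj() @ U.conj().T @ H @ U @ s))

def riemannian_grad(U):      # project Euclidean gradient 2*H*U*psi_s onto T_U U(N)
    A = U.conj().T @ (2 * H @ U @ psi_s)
    return U @ (A - A.conj().T) / 2

def retract(U, xi):          # exponential retraction: U exp(U^dag xi)
    return U @ expm(U.conj().T @ xi)

U = np.eye(N, dtype=complex)
for _ in range(50):          # fixed unit step, enough for this small N
    U = retract(U, riemannian_grad(U))
print(cost(U))               # -> approaches 1
```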

Paper Structure

This paper contains 24 sections, 6 theorems, 63 equations, 5 figures, and 2 algorithms.

Key Result

Lemma 1

Let $\psi = |\psi\rangle\langle\psi|$ be any pure state and define $q:= \langle \psi | H | \psi \rangle$. Define two skew-Hermitian operators by $X:= [H, \psi]$ and $Y:= i[H, X]$. Then $\|X\|_F = \|Y\|_F = \sqrt{2q(1-q)}$ and $\langle X, Y \rangle = 0$.
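
This identity is easy to check numerically. The sketch below assumes, as in the quantum search setting, that $H$ is a projector (so $H^2 = H$), here the rank-one projector onto a hypothetical marked basis state; $\langle X, Y \rangle$ denotes the trace (Frobenius) inner product.

```python
# Numerical check of Lemma 1 under the stated assumption H^2 = H.
import numpy as np

rng = np.random.default_rng(0)
N = 8
H = np.zeros((N, N), dtype=complex)
H[2, 2] = 1.0                                  # projector onto basis state |2>

v = rng.normal(size=N) + 1j * rng.normal(size=N)
v /= np.linalg.norm(v)                         # random pure state |psi>
psi = np.outer(v, v.conj())                    # |psi><psi|

q = float(np.real(v.conj() @ H @ v))           # q = <psi|H|psi>
X = H @ psi - psi @ H                          # X = [H, psi]
Y = 1j * (H @ X - X @ H)                       # Y = i[H, X]

target = np.sqrt(2 * q * (1 - q))
print(np.linalg.norm(X) - target)              # ~1e-16
print(np.linalg.norm(Y) - target)              # ~1e-16
print(abs(np.trace(X.conj().T @ Y)))           # ~1e-16: <X, Y> = 0
```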

Figures (5)

  • Figure 1: Schematic illustration of a manifold optimization iteration on the unitary manifold $\mathrm{U}(N)$. Starting from the current point $U_k$, a tangent direction $\eta_k \in T_{U_k}\mathrm{U}(N)$ (e.g., the Riemannian gradient) is chosen in the tangent space. The retraction $\mathrm{R}_{U_k}$ then maps the scaled tangent vector $t_k \eta_k$ back onto the manifold, producing the next iterate $U_{k+1} = \mathrm{R}_{U_k}(t_k \eta_k)$.
  • Figure 2: Absolute errors of the cost value $q_k$ and expansion coefficients $x_k, y_k$ between the classical simulation and the explicit full matrix implementation. (a)--(c) display the results for RGA, and (d)--(f) for RMN. All errors remain around machine precision ($10^{-16}$), verifying that the classical procedures accurately simulate the algorithms.
  • Figure 3: Convergence comparison between the RGA and RMN methods for problem sizes of $n=5$, $10$, and $15$ qubits. (a)--(c) illustrate the gradient norm, and (d)--(f) show the function value error. The results demonstrate the linear convergence of RGA and the significantly faster, quadratic convergence of RMN.
  • Figure 4: Iteration complexity of RMN versus the square root of the problem size. For a fixed tolerance $\varepsilon$ and varying qubit counts $n \in [2, 28]$ ($N=2^n$), the required iterations for RMN scale linearly with $\sqrt{N}$. This confirms that the second-order RMN method successfully retains the $\mathcal{O}(\sqrt{N})$ quantum speedup.
  • Figure 5: Riemannian gradient on the manifold. Viewing $\mathrm{U}(N)$ as a sphere for illustration, the Riemannian gradient $\operatorname{grad} f(U)$ is the orthogonal projection of the Euclidean gradient $\nabla f(U)$ onto the tangent space at $U$. It is perpendicular to the contour and points in the direction of the fastest increase of $f$. (A numerical sketch of this projection, together with the retraction step of Figure 1, follows this list.)
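
To make Figures 1 and 5 concrete, the generic sketch below (complementing the RGA loop sketched after the abstract, and using a plain exponential retraction rather than the paper's $5$-factor construction) verifies that the projection $P_U(Z) = U\,\mathrm{skew}(U^\dagger Z)$ lands in the tangent space $T_U\mathrm{U}(N) = \{\xi : U^\dagger \xi \text{ skew-Hermitian}\}$, that the discarded component is orthogonal to every tangent vector, and that the retraction returns to the manifold.

```python
# Tangency, orthogonality, and retraction checks on U(N).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
N = 6
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
Z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))  # stand-in for nabla f(U)

skew = lambda A: (A - A.conj().T) / 2
xi = U @ skew(U.conj().T @ Z)            # P_U(Z): the Riemannian gradient (Fig. 5)

A = U.conj().T @ xi
print(np.linalg.norm(A + A.conj().T))    # ~0: xi lies in T_U U(N)

eta = U @ skew(U.conj().T @ rng.normal(size=(N, N)))   # any other tangent vector
print(abs(np.trace((Z - xi).conj().T @ eta).real))     # ~0: projection is orthogonal

V = U @ expm(U.conj().T @ (0.1 * xi))    # retraction R_U(0.1 * xi)      (Fig. 1)
print(np.linalg.norm(V.conj().T @ V - np.eye(N)))      # ~0: V is unitary again
```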

Theorems & Definitions (12)

  • Remark 1
  • Lemma 1: lai2025grover
  • Theorem 1: Invariant 2D gradient subspace lai2025grover
  • Definition 1
  • Example 1: $5$-factor retraction
  • Remark 2
  • Theorem 2: Classical simulability of Algorithm \ref{alg-grover-ret} lai2025grover
  • Lemma 2: lai2026quantum
  • Theorem 3
  • Remark 3
  • ...and 2 more