LieTrunc-QNN: Lie Algebra Truncation and Quantum Expressivity Phase Transition from LiePrune to Provably Stable Quantum Neural Networks

Haijian Shao, Dalong Zhao, Xing Deng, Wenzheng Zhu, Yingtao Jiang

Abstract

Quantum Machine Learning (QML) is fundamentally limited by two challenges: barren plateaus (exponentially vanishing gradients) and the fragility of parameterized quantum circuits under noise. Despite extensive empirical studies, a unified theoretical framework remains lacking. We introduce LieTrunc-QNN, an algebraic-geometric framework that characterizes trainability via Lie-generated dynamics. Parameterized quantum circuits are modeled as Lie subalgebras of u(2^n), whose action induces a Riemannian manifold of reachable quantum states. Expressivity is reinterpreted as intrinsic manifold dimension and geometry. We establish a geometric capacity-plateau principle: increasing effective dimension leads to exponential gradient suppression due to concentration of measure. By restricting to structured Lie subalgebras (LieTrunc), the manifold is contracted, preventing concentration and preserving non-degenerate gradients. We prove two main results: (1) a trainability lower bound for LieTrunc-QNN, and (2) that the Fubini-Study metric rank is bounded by the algebraic span of generators, showing expressivity is governed by structure rather than parameter count. Compact Lie subalgebras also provide inherent robustness to perturbations. Importantly, we establish a polynomial trainability regime where gradient variance decays polynomially instead of exponentially. Experiments (n=2-6) validate the theory: LieTrunc-QNN maintains stable gradients and high effective dimension, while random truncation leads to metric rank collapse. At n=6, full metric rank is preserved (rank=16). Results support a scaling law between gradient variance and effective dimension. This work provides a unified geometric framework for QNN design, linking Lie algebra, manifold geometry, and optimization.
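The abstract's central quantities, the Fubini–Study metric rank and its eigenvalue spectrum, can be probed numerically on a small circuit. The sketch below is a minimal illustration, not the paper's LieTrunc construction: a toy 2-qubit RY/CNOT ansatz (an assumption chosen for brevity) whose pullback metric $g_{jk}=\mathrm{Re}\langle\partial_j\psi|\partial_k\psi\rangle-\langle\partial_j\psi|\psi\rangle\langle\psi|\partial_k\psi\rangle$ is estimated by central finite differences and then rank-checked.

```python
import numpy as np

# Toy 2-qubit ansatz (illustrative only; not the paper's LieTrunc circuit):
# an RY rotation on each qubit, a CNOT, then a second RY layer.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
CNOT = np.eye(4, dtype=complex)
CNOT[2:, 2:] = X

def ry(t):
    return np.cos(t / 2) * I2 - 1j * np.sin(t / 2) * Y

def state(params):
    """|psi(theta)> prepared from |00> by the toy circuit."""
    psi = np.zeros(4, dtype=complex)
    psi[0] = 1.0
    psi = np.kron(ry(params[0]), ry(params[1])) @ psi
    psi = CNOT @ psi
    psi = np.kron(ry(params[2]), ry(params[3])) @ psi
    return psi

def fs_metric(params, eps=1e-6):
    """Fubini-Study pullback metric
    g_jk = Re<d_j psi|d_k psi> - <d_j psi|psi><psi|d_k psi>,
    with derivatives estimated by central finite differences."""
    psi = state(params)
    derivs = []
    for j in range(len(params)):
        p, m = params.copy(), params.copy()
        p[j] += eps
        m[j] -= eps
        derivs.append((state(p) - state(m)) / (2 * eps))
    g = np.zeros((len(params), len(params)))
    for j, dj in enumerate(derivs):
        for k, dk in enumerate(derivs):
            g[j, k] = np.real(np.vdot(dj, dk) - np.vdot(dj, psi) * np.vdot(psi, dk))
    return g

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=4)
g = fs_metric(theta)
rank = np.linalg.matrix_rank(g, tol=1e-8)
print("FS metric rank:", rank)
```

Because this real-rotation ansatz explores only a real submanifold of state space, the metric rank can sit below the parameter count, a small-scale version of the rank-collapse diagnostic the abstract describes.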

Paper Structure

This paper contains 22 sections, 10 theorems, 18 equations, 4 figures, 1 table, 1 algorithm.

Key Result

Proposition 3.3

For any state $|\psi\rangle\in\mathcal{M}$, the tangent space of the reachable manifold at $|\psi\rangle$ is $T_{|\psi\rangle}\mathcal{M}=\{\,iG|\psi\rangle : G\in\mathfrak{g}\,\}/\mathrm{U}(1)$, where the quotient removes the global-phase direction corresponding to the $\mathrm{U}(1)$ fiber of $\mathbb{CP}^{2^n-1}$.
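This tangent-space characterization can be checked numerically for a small generator set: close the generators under commutators to get the dynamical Lie algebra, apply each basis element to a reference state, project out the global-phase direction, and measure the real dimension of the span. The generators below ($X$ on the first qubit and a $ZZ$ coupling) are an illustrative choice, not taken from the paper.

```python
import numpy as np
from itertools import product

# Numerical check of the tangent-space picture in Proposition 3.3:
# T_psi M = span{ i G |psi> : G in g }, with the global-phase direction
# removed.  Generators (X on qubit 0, a ZZ coupling) are illustrative.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
gens = [np.kron(X, I2), np.kron(Z, Z)]

def lie_closure(gens, tol=1e-9):
    """Orthonormal (Hilbert-Schmidt) basis of the dynamical Lie algebra,
    grown by repeatedly adding commutators i[A, B] (Hermitian again)."""
    basis = []
    def add(M):
        v = M.reshape(-1).astype(complex)
        for b in basis:
            v = v - np.vdot(b.reshape(-1), v) * b.reshape(-1)
        n = np.linalg.norm(v)
        if n > tol:
            basis.append(v.reshape(M.shape) / n)
            return True
        return False
    for G in gens:
        add(G)
    grew = True
    while grew:
        grew = False
        for A, B in product(list(basis), repeat=2):
            if add(1j * (A @ B - B @ A)):
                grew = True
    return basis

alg = lie_closure(gens)
print("Lie algebra dimension:", len(alg))          # 3 here: an su(2) copy

psi = np.zeros(4, dtype=complex)
psi[0] = 1.0
P = np.eye(4) - np.outer(psi, psi.conj())          # removes the U(1) phase direction
vecs = [P @ (1j * G @ psi) for G in alg]
M = np.array([np.concatenate([v.real, v.imag]) for v in vecs])  # real span
dim_T = int(np.linalg.matrix_rank(M, tol=1e-8))
print("tangent-space dimension at |00>:", dim_T)   # 2: Z(x)Z only shifts the phase
```

Note that the tangent dimension (2) is strictly below the algebra dimension (3) precisely because $Z\otimes Z$ acts on $|00\rangle$ as a pure phase, the direction the $\mathrm{U}(1)$ quotient removes.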

Figures (4)

  • Figure 1: Left: gradient variance vs. qubit number; middle: effective dimension $d_{\text{eff}}$; right: variance–effective-dimension product. LieTrunc stabilizes gradients while preserving expressivity.
  • Figure 2: Fubini–Study metric eigenvalue spectra across architectures. RandomTrunc shows spectral collapse; LieTrunc maintains full structure.
  • Figure 3: Task loss across qubit counts. LieTrunc-QNN achieves stable, competitive performance.
  • Figure 4: Fubini–Study metric spectrum at $n=6$. RandomTrunc collapses; LieTrunc remains full-rank.

Theorems & Definitions (19)

  • Definition 3.1: Reachable Manifold of PQC
  • Definition 3.2: Circuit Lie Algebra
  • Proposition 3.3: Tangent Space Characterization
  • Proof (of Proposition 3.3)
  • Definition 3.4: Fubini–Study Pullback Metric
  • Proposition 3.5: Gradient Chain Rule with SVD
  • Definition 3.6: Effective Geometric Dimension
  • Definition 3.7: Practical Effective Dimension Estimation
  • Proposition 3.8: Universal Gradient Variance Scaling
  • Theorem 3.9: Geometric Origin of Barren Plateaus
  • ...and 9 more
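Definitions 3.6–3.7 concern estimating an effective geometric dimension from the metric spectrum. The paper's exact estimator is not reproduced above, so the sketch below uses an assumed stand-in: a participation-ratio proxy $d_{\text{eff}}=(\sum_i\lambda_i)^2/\sum_i\lambda_i^2$ over the eigenvalues of the Fubini–Study metric, which equals $k$ for $k$ equal nonzero eigenvalues and collapses toward 1 when one direction dominates.

```python
import numpy as np

def effective_dimension(g, tol=1e-12):
    """Participation-ratio proxy for effective dimension:
    d_eff = (sum lam)^2 / (sum lam^2) over the eigenvalues of g.
    Equals k when g has k equal nonzero eigenvalues.
    (Stand-in estimator; the paper's Definition 3.6 may differ.)"""
    lam = np.clip(np.linalg.eigvalsh(g), 0.0, None)
    s1, s2 = lam.sum(), (lam ** 2).sum()
    return 0.0 if s2 < tol else float(s1 * s1 / s2)

# An isotropic full-rank metric uses all directions equally.
print(effective_dimension(np.eye(4)))              # 4.0
# A collapsed spectrum with one dominant direction behaves like d_eff ~ 1.
g_collapsed = np.diag([1.0, 1e-6, 1e-6, 1e-6])
print(effective_dimension(g_collapsed))            # close to 1
```

Under this proxy, the spectral collapse shown for RandomTrunc in Figures 2 and 4 would register directly as a drop in $d_{\text{eff}}$, while a full-rank LieTrunc spectrum keeps it near the parameter count.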