
Expressibility of neural quantum states: a Walsh-complexity perspective

Taige Wang

Abstract

Neural quantum states are powerful variational wavefunctions, but it remains unclear which many-body states can be represented efficiently by modern additive architectures. We introduce Walsh complexity, a basis-dependent measure of how broadly a wavefunction is spread over parity patterns. States with an almost uniform Walsh spectrum require exponentially large Walsh complexity from any good approximant. We show that shallow additive feed-forward networks cannot generate such complexity in the tame regime, e.g. polynomial activations with subexponential parameter scaling. As a concrete example, we construct a simple dimerized state prepared by a single layer of disjoint controlled-$Z$ gates. Although it has only short-range entanglement and a simple tensor-network description, its Walsh complexity is maximal. Full-cube fits across system size and depth are consistent with the complexity bound: for polynomial activations, successful fitting appears only once depth reaches a logarithmic scale in $N$, whereas activation saturation in $\tanh$ produces a sharp threshold-like jump already at depth $3$. Walsh complexity therefore provides an expressibility axis complementary to entanglement and clarifies when depth becomes an essential resource for additive neural quantum states.
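The flatness of the dimerized state's Walsh spectrum can be checked numerically. The sketch below assumes the dimer sign pattern is the inner-product ("bent") Boolean function $(-1)^{\sum_k x_{2k} x_{2k+1}}$ produced by one layer of disjoint controlled-$Z$ gates on $|+\rangle^{\otimes N}$; the pairing convention and the normalization $\hat f(S) = 2^{-N}\sum_x f(x)(-1)^{S\cdot x}$ are illustrative choices, not notation fixed by the paper.

```python
import numpy as np
from itertools import product

N = 8  # number of spins (even); dimer pairs (0,1), (2,3), ...

# All 2^N bitstrings on the full hypercube, as rows of 0/1 entries.
xs = np.array(list(product([0, 1], repeat=N)))

# Bent "dimer" sign pattern: one CZ per disjoint pair flips the sign
# whenever both bits of a pair are 1.
f = (-1.0) ** (xs[:, 0::2] * xs[:, 1::2]).sum(axis=1)

# Walsh coefficients f_hat(S) = 2^{-N} sum_x f(x) (-1)^{S.x}.
parity = (-1.0) ** (xs @ xs.T)   # (-1)^{S.x} for every pair (S, x)
f_hat = parity @ f / 2**N

# The spectrum is perfectly flat: |f_hat(S)| = 2^{-N/2} for every S.
print(np.allclose(np.abs(f_hat), 2 ** (-N / 2)))
```

Because the function is bent, every one of the $2^N$ parity patterns carries equal weight, which is the maximal-Walsh-complexity property the abstract refers to.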

Paper Structure

This paper contains 3 sections, 50 equations, and 2 figures.

Figures (2)

  • Figure 1: Two canonical examples and their Walsh spectra. (a) Benchmark states $\psi_X$ and $\psi_{XZ}$. (b) Schematic spectra: a single spike for $\psi_X$, a flat spectrum for $\psi_{XZ}$, and a generic few-body profile with weight concentrated at small $|S|$.
  • Figure 2: Exact fitting as an expressibility test. Fitting the bent dimer target $f_{XZ}(\sigma)$ on the full hypercube with hidden width $w=2N$. (a) Additive feed-forward scalar network. (b) Representative full-cube accuracy during training at $N=12$ for the degree-$2$ polynomial activation. (c,e) Final full-cube accuracy of the Boolean readout $\tilde{g}_\theta(\sigma)=\mathrm{sign}(g_\theta(\sigma))$. (d,f) Corresponding Walsh complexity $\log\|\tilde{g}_\theta\|_W$. (c,d) Degree-$2$ polynomial activation. (e,f) $\tanh$ activation. The dashed curve in (c,d) marks the predicted depth scale $D\approx \log N$, and the dashed line in (e,f) marks the threshold depth $D=3$.
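The Walsh-complexity diagnostic tracked in Figure 2(d,f) can be sketched for a network of the kind described in panel (a). The code below is a minimal illustration with untrained random weights, not the paper's fit; it assumes $\|\cdot\|_W$ denotes the 1-norm of the Walsh coefficients, and the width, depth, and weight scales are placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 6, 3          # spins and depth (illustrative values)
w = 2 * N            # hidden width w = 2N, as in the caption

# Additive feed-forward scalar network g_theta with degree-2
# polynomial activation; weights are random placeholders.
Ws = [rng.normal(size=(N, w), scale=N ** -0.5)]
Ws += [rng.normal(size=(w, w), scale=w ** -0.5) for _ in range(D - 1)]
v = rng.normal(size=w)

# Full hypercube: 0/1 bitstrings and the corresponding +/-1 spins.
xs = np.array([[(i >> k) & 1 for k in range(N)] for i in range(2**N)])
sigma = (1 - 2 * xs).astype(float)

h = sigma
for W in Ws:
    h = (h @ W) ** 2                 # degree-2 polynomial activation
g = h @ v
readout = np.sign(g)                 # Boolean readout sign(g_theta)

# Walsh complexity log ||readout||_W, with ||.||_W taken (by assumption)
# as the 1-norm of f_hat(S) = 2^{-N} sum_x f(x) (-1)^{S.x}.
parity = (-1.0) ** (xs @ xs.T)
f_hat = parity @ readout / 2**N
print("log ||readout||_W =", np.log(np.abs(f_hat).sum()))
```

For any $\pm 1$-valued readout, Parseval's identity bounds this 1-norm between $1$ and $2^{N/2}$, so the logged quantity interpolates between a single-parity function and a maximally spread (bent-like) one.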