Adaptive Tensor Network Simulation via Entropy-Feedback PID Control and GPU-Accelerated SVD

Harshni Kumaresan, Gayathri Muruganantham, Lakshmi Rajendran, Santhosh Sivasubramani

Abstract

Tensor network methods, particularly those based on Matrix Product States (MPS), provide a powerful framework for simulating quantum many-body systems. A persistent computational challenge in these methods is the selection of the bond dimension chi, which controls the trade-off between accuracy and computational cost. Fixed bond dimension strategies either waste resources in low-entanglement regions or lose fidelity in high-entanglement regions. This work introduces an adaptive bond dimension management framework that uses von Neumann entropy feedback coupled with a Proportional-Integral-Derivative (PID) controller to dynamically adjust chi at each bond during simulation. An Exponential Moving Average (EMA) filter stabilizes entropy measurements against transient fluctuations, and a predictive scheduling module anticipates future bond dimension requirements from entropy trends. The per-bond granularity of the allocation ensures that computational resources concentrate where entanglement is largest. The framework integrates GPU-accelerated Singular Value Decomposition (SVD) via CuPy and the cuSOLVER backend, achieving individual SVD speedups of 4.1x at chi=256 and 7.1x at chi=2048 relative to CPU-based NumPy for isolated matrix factorizations (measured on an NVIDIA A100-SXM4-40GB GPU with CuPy 13.4.1 and CUDA 12.8). At the system level, benchmarks on the spin-1/2 antiferromagnetic Heisenberg chain demonstrate a 2.7x reduction in total DMRG wall time compared to fixed-chi simulations, with energy accuracy within 0.1% of the Bethe ansatz solution. Integration with the Density Matrix Renormalization Group (DMRG) algorithm yields ground-state energies per site converging to E/N = -0.4432 for the isotropic Heisenberg model at chi = 128. Validation against Amazon Web Services (AWS) Braket SV1 statevector simulator confirms agreement within 2-5% for small systems.
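The entropy-feedback loop described above can be sketched in a few lines: Schmidt coefficients from each truncated SVD yield a von Neumann entropy, an EMA filter smooths it, and a PID term maps the entropy error to a bond dimension correction. This is a minimal illustration with hypothetical gains, smoothing factor, and entropy target; the paper's Ziegler-Nichols-tuned values are not reproduced here.

```python
import numpy as np

def von_neumann_entropy(singular_values):
    """Entanglement entropy S = -sum_i p_i log p_i from Schmidt coefficients."""
    p = singular_values**2
    p = p / p.sum()
    p = p[p > 1e-12]  # drop numerically zero weights to avoid log(0)
    return float(-(p * np.log(p)).sum())

class ChiPID:
    """PID controller on the EMA-smoothed entropy error, driving chi at one bond.

    Gains, alpha, and the entropy target are illustrative placeholders,
    not the tuned values from the paper.
    """
    def __init__(self, target_entropy, kp=40.0, ki=4.0, kd=10.0,
                 alpha=0.3, chi_min=16, chi_max=2048):
        self.target = target_entropy
        self.kp, self.ki, self.kd = kp, ki, kd
        self.alpha = alpha                      # EMA smoothing factor
        self.chi_min, self.chi_max = chi_min, chi_max
        self.ema = None
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, singular_values, chi_current):
        s = von_neumann_entropy(singular_values)
        # EMA filter stabilizes the entropy signal against transient fluctuations
        self.ema = s if self.ema is None else self.alpha * s + (1 - self.alpha) * self.ema
        error = self.ema - self.target          # positive: entanglement exceeds budget
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        delta_chi = self.kp * error + self.ki * self.integral + self.kd * derivative
        return int(np.clip(chi_current + round(delta_chi), self.chi_min, self.chi_max))
```

Running one controller instance per bond gives the per-bond granularity the abstract describes: bonds whose smoothed entropy sits above the target grow chi, while low-entanglement bonds shrink toward `chi_min`.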

Paper Structure

This paper contains 22 sections, 21 equations, 7 figures, 9 tables, and 2 algorithms.

Figures (7)

  • Figure 1: System architecture of the adaptive tensor network simulation framework. Singular values $\{\lambda_i\}$ from the MPS core feed into the entropy monitor, which computes EMA-smoothed entropies $\bar{S}_i$. The PID controller generates bond dimension corrections $\Delta\chi_i$, which the bond dimension allocator applies per bond. The GPU SVD engine performs truncated SVD and returns updated site tensors $A_i^{[s]}$ to the MPS core. A predictive scheduler uses entropy trends to anticipate future requirements.
  • Figure 2: Tensor network diagram of a Matrix Product State. Each circle represents a rank-three tensor $A_i^{[s_i]}$. Horizontal lines denote virtual (bond) indices with dimension $\chi_i$, and vertical dashed lines denote physical indices with dimension $d$.
  • Figure 3: PID controller response for a 60-site Heisenberg chain. The bond dimension (blue curve) starts at $\chi = 32$ and converges to a steady-state value of $\chi^* = 236$ within approximately 8 sweeps. The fixed reference $\chi = 256$ (red dashed line) over-provisions resources by $8.5\%$ relative to the adaptively determined value. PID gains tuned via Ziegler-Nichols (see text).
  • Figure 4: Per-bond bond dimension allocation $\chi_i(t)$ for a 40-site Heisenberg chain over 20 DMRG sweeps. Three representative sweeps are shown (sweeps 1, 10, and 20). The bond dimension is largest at central bonds, where entanglement is highest, and smallest near the chain boundaries. The adaptive allocation converges by approximately sweep 15.
  • Figure 5: GPU versus CPU wall-clock time for complete DMRG calculations as a function of system size $N$. The GPU-accelerated adaptive framework (blue bars) achieves consistent speedups over the CPU-only fixed-$\chi$ approach (orange bars) across all system sizes, with the advantage growing for larger systems due to increased parallelism in the SVD operations. Hardware: NVIDIA A100-SXM4-40GB (39.4 GiB HBM2e), CuPy 13.4.1, CUDA 12.8, Google Cloud Vertex AI a2-highgpu-1g.
  • ...and 2 more figures
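The GPU SVD engine referenced in Figures 1 and 5 can be sketched as a truncated SVD that runs through CuPy's cuSOLVER-backed `linalg.svd` when a GPU is present. The sketch below is an assumption-laden illustration, not the paper's implementation: it falls back to NumPy so it remains runnable without a GPU, and the `cutoff` threshold and return convention are placeholders.

```python
import numpy as np

try:
    import cupy as xp          # GPU path: SVD dispatched to cuSOLVER
    _ON_GPU = True
except ImportError:
    xp = np                    # CPU fallback keeps the sketch runnable
    _ON_GPU = False

def truncated_svd(theta, chi_max, cutoff=1e-10):
    """Factorize a two-site wavefunction matrix and keep at most chi_max
    singular values above `cutoff`.

    Returns (U, S, Vh, discarded_weight), where discarded_weight is the
    sum of squared dropped singular values (the truncation error proxy).
    """
    theta = xp.asarray(theta)
    U, S, Vh = xp.linalg.svd(theta, full_matrices=False)
    keep = max(1, min(chi_max, int((S > cutoff).sum())))
    discarded = float((S[keep:] ** 2).sum())
    U, S, Vh = U[:, :keep], S[:keep], Vh[:keep, :]
    if _ON_GPU:
        # bring results back to host memory for the (CPU-side) MPS update
        U, S, Vh = map(xp.asnumpy, (U, S, Vh))
    return np.asarray(U), np.asarray(S), np.asarray(Vh), discarded
```

In an adaptive loop, `chi_max` for each bond would come from the PID controller's output, and `discarded` provides a direct check that the truncation stays within the fidelity budget.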