
PhaseFlow4D: Physically Constrained 4D Beam Reconstruction via Feedback-Guided Latent Diffusion

Alexander Scheinker, Alexander Plastun, Peter Ostroumov

Abstract

We address the problem of recovering a time-varying 4D distribution from a sparse sequence of 2D projections - analogous to novel-view synthesis from sparse cameras, but applied to the 4D transverse phase space density $\rho(x,p_x,y,p_y)$ of charged particle beams. Direct single-shot measurement of this high-dimensional distribution is physically impossible in real particle accelerator systems; only limited 1D or 2D projections are accessible. We propose PhaseFlow4D, a feedback-guided latent diffusion model that reconstructs and tracks the full 4D phase space from incomplete 2D observations alone, with built-in hard physics constraints. Our core technical contribution is a 4D VAE whose decoder generates the full 4D phase space tensor, from which 2D projections are analytically computed and compared against 2D beam measurements. This projection-consistency constraint guarantees physical correctness by construction - not as a soft penalty, but as an architectural prior. An adaptive feedback loop then continuously tunes the conditioning vector of the latent diffusion model to track time-varying distributions online without retraining. We validate on multi-particle simulations of heavy-ion beams at the Facility for Rare Isotope Beams (FRIB), where full physics simulations require $\sim$6 hours on a 100-core HPC system. PhaseFlow4D achieves accurate 4D reconstructions 11000$\times$ faster while faithfully tracking distribution shifts under time-varying source conditions - demonstrating that principled generative reconstruction under incomplete observations transfers robustly beyond visual domains.
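The projection-consistency idea in the abstract - decode a full 4D tensor, then compute 2D marginals analytically for comparison against measurements - can be illustrated with a minimal NumPy sketch. The grid size, variable names, and the random density here are placeholders (the paper works on a $128^4$ grid); only the marginalization step reflects the described mechanism.

```python
import numpy as np
from itertools import combinations

# Hypothetical 4D phase space density on a coarse grid
# (the paper uses 128^4; we use 16^4 to keep the sketch light).
rng = np.random.default_rng(0)
rho = rng.random((16, 16, 16, 16))
rho /= rho.sum()  # normalize so the density integrates to 1

axes = ("x", "px", "y", "py")

def projection_2d(density, keep):
    """Marginalize the 4D density onto the two axes in `keep`."""
    drop = tuple(i for i in range(4) if i not in keep)
    return density.sum(axis=drop)

# All 6 unique 2D projections: (x, px), (x, y), (x, py), ...
projections = {
    (axes[i], axes[j]): projection_2d(rho, (i, j))
    for i, j in combinations(range(4), 2)
}

# Every projection integrates to the same total mass as rho by
# construction - the sense in which consistency is architectural,
# not a soft penalty.
assert all(np.isclose(p.sum(), 1.0) for p in projections.values())
```

Because each 2D projection is an exact sum over the decoded 4D tensor, any projection compared against a measurement is automatically consistent with the underlying 4D density.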


Paper Structure

This paper contains 14 sections, 9 equations, and 20 figures.

Figures (20)

  • Figure 1: A: FRIB accelerator injector beam plasma source and charge selection system. B: 4D phase space density $\rho(x,x',y,y')$ of beam initial conditions. Simulating the complex space charge-dominated beam dynamics of 13 beam species is computationally expensive ($\sim$6 hours). The 4D VAE encodes the $128^4$ density into a low-dimensional $16\times 16 \times 4$ latent representation. C: Latent diffusion conditional input based on beamline settings and non-invasive measurements. D: Latent diffusion maps beamline conditions to the full 4D phase space density, from which physically consistent 2D projections are computed. E: A single generated 2D projection is compared with measurement, and the difference (F) is minimized by adaptive tuning of the conditioning vector. G: Time-varying 4D phase space distribution tracked based on 2D measurements.
  • Figure 2: Examples of latent embeddings generated by the conditional latent diffusion model, together with all 6 unique 2D projections of the 4D phase space densities to which the VAE's decoder maps those latents.
  • Figure 3: 4D VAE architecture. Top: the encoder compresses the $128^4$ phase space tensor to a compact latent code $\mathbf{z}$. Bottom: the decoder reconstructs $\hat{X}$, from which all 2D marginal projections can be computed analytically and compared to the ground-truth projections during training.
  • Figure 4: The conditional latent diffusion architecture is a standard U-Net with 3 residual blocks at each resolution, GroupNorm, and attention, using 100 denoising steps.
  • Figure 5: The conditional latent diffusion generative process is shown at various diffusion steps for different beam conditions. In this image, we show only the first three channels, rendered as RGB, of each $16 \times 16 \times 4$ latent image.
  • ...and 15 more figures
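The adaptive feedback loop in Figure 1 (panels E-G) tunes only the conditioning vector so that generated projections match measured ones, with no retraining. A toy sketch of that update rule is below; the linear "decoder" `A`, the grid sizes, and the learning rate are invented stand-ins for the real diffusion-plus-VAE pipeline, which is not differentiated here.

```python
import numpy as np

# Toy stand-in for the generative pipeline: a fixed linear "decoder"
# maps a conditioning vector c to a flattened 8x8 projection. The real
# model is a latent diffusion model + 4D VAE decoder; this only
# illustrates the feedback update on the conditioning vector.
rng = np.random.default_rng(1)
A = rng.standard_normal((64, 8))

def generate_projection(c):
    return (A @ c).reshape(8, 8)

c_true = rng.standard_normal(8)
measured = generate_projection(c_true)  # stand-in for a 2D beam measurement

# Adaptive feedback: descend on the projection mismatch, tuning only
# the conditioning vector c (model weights stay frozen).
c = np.zeros(8)
lr = 0.005
for _ in range(1000):
    err = generate_projection(c) - measured
    grad = A.T @ err.ravel()  # gradient of 0.5 * ||A c - measured||^2
    c -= lr * grad

assert np.linalg.norm(generate_projection(c) - measured) < 1e-3
```

As the measured projection drifts over time, repeating this update each time step lets the conditioning vector track the time-varying distribution online, which is the role of panels F-G in Figure 1.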