Covariance-Domain Near-Field Channel Estimation under Hybrid Compression: USW/Fresnel Model, Curvature Learning, and KL Covariance Fitting

Rıfat Volkan Şenyuva

Abstract

Near-field propagation in extremely large aperture arrays requires joint angle-range estimation. In hybrid architectures, only $N_\mathrm{RF}\ll M$ compressed snapshots are available per slot, making the $N_\mathrm{RF}\times N_\mathrm{RF}$ compressed sample covariance the natural sufficient statistic. We propose the Curvature-Learning KL (CL-KL) estimator, which grids only the angle dimension and \emph{learns the per-angle inverse range} directly from the compressed covariance via KL divergence minimisation. CL-KL uses a $Q_\theta$-element dictionary instead of the $Q_\theta Q_r$ atoms of 2-D polar gridding, eliminating the range-dimension dictionary coherence that plagues polar codebooks in the strong near-field regime, and operates entirely on the compressed covariance for full compatibility with hybrid front-ends. At $N_\mathrm{MC}=400$ ($f_c=28$~GHz, $M=64$, $N_\mathrm{RF}=8$, $N=64$, $d=3$, $r\in[0.05,1.0]\,r_\mathrm{RD}$), CL-KL achieves the lowest channel NMSE among all six evaluated methods -- including four full-array baselines using $64\times$ more data -- at $\mathrm{SNR}\in\{-5,0,+5,+10\}$~dB. Running in approximately 70~ms per trial (vs.\ 5~ms for the compressed-domain peer P-SOMP), CL-KL's dominant cost is the $N_\mathrm{RF}{\times}N_\mathrm{RF}$ inversion rather than $M$: measured runtime stays near 70~ms across $M\in\{32,64,128,256\}$, making it aperture-scalable for XL-MIMO deployments. CL-KL is further validated against a derived compressed-domain Cramér-Rao bound and confirmed robust to non-Gaussian (QPSK) source distributions, with a maximum NMSE gap below 0.6~dB.
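To make the pipeline concrete, the sketch below illustrates the estimator's core loop as the abstract describes it: form the compressed sample covariance from hybrid snapshots, grid the angle dimension only, and for each grid angle \emph{learn} the inverse range (the Fresnel curvature) by minimising a Gaussian KL covariance-fit cost. This is a minimal illustration, not the paper's algorithm: the random phase-shifter combiner, half-wavelength spacing, single-source scene, and SciPy bounded scalar search are all assumptions introduced here (the paper's experiments use $d=3$ sources).

```python
# Illustrative sketch of the CL-KL idea: angle-only gridding with a
# per-angle learned inverse range, fitted to the compressed covariance.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

fc = 28e9                        # carrier frequency (28 GHz, per the abstract)
lam = 3e8 / fc                   # wavelength
spacing = lam / 2                # element spacing (assumed half-wavelength ULA)
M, N_RF, N = 64, 8, 64           # antennas, RF chains, snapshots (per the abstract)
D = (M - 1) * spacing            # aperture
r_RD = 2 * D**2 / lam            # Rayleigh distance (far-field boundary)

def fresnel_steering(theta, inv_r):
    """USW/Fresnel array response: linear (angle) plus quadratic (curvature)
    phase; inv_r = 0 recovers the far-field plane-wave steering vector."""
    dm = (np.arange(M) - (M - 1) / 2) * spacing
    phase = (2 * np.pi / lam) * (dm * np.sin(theta)
                                 - 0.5 * dm**2 * np.cos(theta)**2 * inv_r)
    return np.exp(1j * phase)

# Hybrid front-end: only y_n = W^H x_n is observed (N_RF << M), so the
# N_RF x N_RF compressed sample covariance is the working statistic.
# With unit-modulus columns scaled by 1/sqrt(M), W^H W ~ I, so sigma^2 I
# is used as the (approximate) compressed noise covariance below.
W = np.exp(1j * 2 * np.pi * rng.random((M, N_RF))) / np.sqrt(M)

def kl_cost(R_hat, R_model):
    """Gaussian KL divergence D(R_hat || R_model) up to R_hat-only constants:
    tr(R_model^{-1} R_hat) + log det R_model."""
    _, logdet = np.linalg.slogdet(R_model)
    return np.real(np.trace(np.linalg.solve(R_model, R_hat))) + logdet

def cl_kl_angle_cost(theta, R_hat, sigma2):
    """For one grid angle, learn the per-angle inverse range by a bounded
    1-D search -- the range dimension is never gridded, so the dictionary
    has Q_theta atoms rather than Q_theta * Q_r."""
    def cost(inv_r):
        a = W.conj().T @ fresnel_steering(theta, inv_r)   # compressed atom
        R_model = np.outer(a, a.conj()) + sigma2 * np.eye(N_RF)
        return kl_cost(R_hat, R_model)
    res = minimize_scalar(cost, bounds=(0.0, 1.0 / (0.05 * r_RD)),
                          method="bounded")               # r >= 0.05 r_RD
    return res.fun, res.x

# Toy usage: one near-field source; sweep a coarse 1-degree angle grid.
theta0, r0 = np.deg2rad(20.0), 0.2 * r_RD
h = fresnel_steering(theta0, 1.0 / r0)
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
sigma2 = 0.1
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, N))
                               + 1j * rng.standard_normal((M, N)))
Y = W.conj().T @ (np.outer(h, s) + noise)                 # compressed snapshots
R_hat = (Y @ Y.conj().T) / N

grid = np.deg2rad(np.linspace(-60.0, 60.0, 121))
costs, inv_ranges = zip(*(cl_kl_angle_cost(th, R_hat, sigma2) for th in grid))
k = int(np.argmin(costs))
print(f"angle: {np.rad2deg(grid[k]):6.1f} deg (true {np.rad2deg(theta0):.1f})")
print(f"range: {1.0 / inv_ranges[k]:6.1f} m   (true {r0:.1f})")
```

Note that every per-angle range search operates only on the $N_\mathrm{RF}\times N_\mathrm{RF}$ covariance, so the inner-loop cost is independent of $M$ -- consistent with the flat runtime across apertures reported in the abstract and in Figure 6.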

Figures (11)

  • Figure 2: Channel NMSE (dB) vs. SNR ($N_\mathrm{MC}=400$, $M=64$, $N_\mathrm{RF}=8$, $N=64$, $d=3$). Filled: compressed covariance ($N_\mathrm{RF}^2=64$ values). Open: full snapshot matrix ($M{\times}N=4096$ values). CL-KL leads all six methods at $\mathrm{SNR}\in\{-5,0,+5,+10\}$ dB despite operating on $64\times$ less data.
  • Figure 3: Channel NMSE (dB) vs. $N_\mathrm{RF}\in\{4,8,12,16\}$ ($N_\mathrm{MC}=400$, SNR$=+10$ dB, $M=64$, $N=64$, $d=3$). Filled: compressed-domain. Open: full-array. CL-KL gains $\approx5$ dB per doubling of $N_\mathrm{RF}$; DL-OMP plateaus at $-6.64$ dB due to the two-subarray bottleneck.
  • Figure 4: Channel NMSE (dB) vs. $N\in\{16,32,64,128\}$ ($N_\mathrm{MC}=400$, SNR$=+10$ dB, $M=64$, $N_\mathrm{RF}=8$, $d=3$). Filled: compressed-domain. Open: full-array. CL-KL is nearly flat ($\widehat{\bm{R}}_y$ compresses $N$ snapshots to $N_\mathrm{RF}^2=64$ entries); full-array methods improve more with $N$.
  • Figure 5: Near-to-far transition: RMSE$(r)$ (left), RMSE$(\theta)$ (centre), and NMSE (right) vs. $r_\mathrm{max}/r_\mathrm{RD}$ ($N_\mathrm{MC}=400$, SNR$=+10$ dB, $M=64$, $N_\mathrm{RF}=8$, $d=3$; $r_\mathrm{min}=0.05\,r_\mathrm{RD}$ fixed). Vertical markers: EBRD (dashed purple) and $r_\mathrm{RD}$ (dashed black). Filled: compressed-domain. Open: full-array. CL-KL NMSE varies by less than 1.2 dB over the full sweep, confirming that the compressed covariance budget governs performance.
  • Figure 6: Per-trial runtime (s, left) and channel NMSE (dB, right) vs. $M\in\{32,64,128,256\}$ ($N_\mathrm{MC}=50$, single-core, SNR$=+10$ dB, $N_\mathrm{RF}=8$, $N=64$, $d=3$). CL-KL runtime stays near 70 ms across all tested $M$, confirming that its dominant cost is the $N_\mathrm{RF}{\times}N_\mathrm{RF}$ inversion, not $M$; P-SOMP and BF-SOMP scale more steeply as their dictionaries grow with $M$.
  • ...and 6 more figures
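
The captions above all report channel NMSE in dB. For reference, a minimal computation of that metric, assuming the conventional definition (the trial-averaged ratio $\|\widehat{\bm h}-\bm h\|^2/\|\bm h\|^2$, converted to dB), which this excerpt does not spell out:

```python
import numpy as np

def channel_nmse_db(h_hats, hs):
    """Monte-Carlo channel NMSE in dB: average the per-trial error ratio
    ||h_hat - h||^2 / ||h||^2 over trials, then take 10*log10."""
    ratios = [np.linalg.norm(hh - h) ** 2 / np.linalg.norm(h) ** 2
              for hh, h in zip(h_hats, hs)]
    return 10 * np.log10(np.mean(ratios))
```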