Near-field Beam Training under Multi-path Channels: A Hybrid Learning-and-Optimization Approach

Jiapeng Li, Changsheng You, Guoliang Cheng, Haobin Sun, Chao Zhou, Linglong Dai

Abstract

For extremely large-scale arrays (XL-arrays), the discrete Fourier transform (DFT) codebook, conventionally used in the far field, has recently been employed for near-field beam training. However, most existing methods rely on the assumption of a line-of-sight (LoS) dominant channel and may thus suffer degraded communication performance in general multi-path scenarios, where the received signal power pattern at the user is considerably more complex. To address this issue, we propose in this paper a new hybrid learning-and-optimization beam training method that first leverages deep learning (DL) to obtain coarse channel parameter estimates and then refines them via a model-based optimization algorithm, thereby achieving high-accuracy estimation with low computational complexity. Specifically, in the first stage, a tailored U-Net architecture is developed to learn the non-linear mapping from the received power pattern to coarse estimates of the angles and ranges of the multi-path components. In particular, the inherent permutation ambiguity in multi-path parameter matching is effectively resolved by a permutation invariant training (PIT) strategy, while the unknown number of paths is estimated from defined path existence logits. In the second stage, we further propose an efficient particle swarm optimization method to refine the angular and range parameters within a confined search region; meanwhile, a Gerchberg-Saxton algorithm is used to retrieve the multi-path channel gains from the received power pattern. Finally, numerical results demonstrate that the proposed hybrid design significantly outperforms various benchmarks in terms of parameter estimation accuracy and achievable rate, while maintaining low computational complexity.
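
The abstract names two algorithmic ingredients that lend themselves to short illustrations: the PIT loss for matching predicted and true path parameters, and the Gerchberg-Saxton step for gain retrieval. The paper's exact loss is not given in this excerpt, but a PIT-style criterion typically evaluates the regression error under every permutation of the predicted paths and keeps the best assignment. The sketch below (PyTorch; the (angle, range) parameterization, the tensor shapes, and the MSE base loss are all assumptions for illustration) shows the idea for a small number of paths L, where enumerating the L! permutations is cheap.

```python
import itertools

import torch


def pit_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Permutation-invariant training (PIT) loss for multi-path matching.

    pred, target: (batch, L, 2) tensors of (angle, range) pairs for L
    paths. Evaluates MSE under every permutation of the predicted paths
    and keeps the minimum, so the network is not penalized for path
    ordering. Enumeration costs L! and is only practical for small L.
    """
    _, L, _ = pred.shape
    per_perm = []
    for perm in itertools.permutations(range(L)):
        permuted = pred[:, list(perm), :]
        per_perm.append(((permuted - target) ** 2).mean(dim=(1, 2)))
    # (batch, L!): pick the best-matching permutation per sample.
    return torch.stack(per_perm, dim=1).min(dim=1).values.mean()
```

The Gerchberg-Saxton step can be sketched in a similarly hedged way: once the angles and ranges are fixed, the complex gains enter the measurements only through the powers $|\mathbf{h}^H \mathbf{w}_m|^2$, so a magnitude-constrained alternating projection can recover them. Below, `A` denotes the matrix of codebook responses of the refined steering vectors (a construction assumed here, not taken from the paper); the loop alternates between imposing the measured magnitudes and a least-squares back-projection.

```python
import numpy as np


def gs_gain_retrieval(p_meas: np.ndarray, A: np.ndarray, iters: int = 100) -> np.ndarray:
    """Gerchberg-Saxton-style retrieval of path gains g from powers.

    p_meas: (M,) measured powers |A g|^2 over M codewords.
    A: (M, L) codebook responses of the L refined steering vectors.
    """
    mag = np.sqrt(p_meas)
    A_pinv = np.linalg.pinv(A)          # fixed once the geometry is fixed
    g = A_pinv @ mag.astype(complex)    # zero-phase initialization
    for _ in range(iters):
        y = mag * np.exp(1j * np.angle(A @ g))  # impose measured magnitudes
        g = A_pinv @ y                          # least-squares back-projection
    return g
```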

Paper Structure

This paper contains 32 sections, 22 equations, 8 figures, and 1 table.

Figures (8)

  • Figure 1: A narrow-band XL-array communication system.
  • Figure 2: Power pattern comparison. The left column shows the separated components defined by $\mathbf{h}^H_i = g_i \mathbf{b}^H(\theta_i,r_i)$ for $i=1,2$, while the right column shows the superposed channel $\mathbf{h}^H = \mathbf{h}^H_1 + \mathbf{h}^H_2$. Red text highlights parameter differences compared to Case 1, demonstrating the high sensitivity of the superposed pattern to slight parameter variations (see the sketch after this list).
  • Figure 3: The framework of the proposed hybrid learning-and-optimization method.
  • Figure 4: U-Net architecture and training methodology for coarse estimation.
  • Figure 5: Convergence behavior and runtime analysis of the proposed method.
  • ...and 3 more figures
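
Figure 2's caption defines each path component through the near-field steering vector, $\mathbf{h}^H_i = g_i \mathbf{b}^H(\theta_i, r_i)$. The sketch below shows how the superposed power pattern (the stage-one network input) arises from this model, assuming the standard spherical-wave response of a half-wavelength uniform linear array and purely illustrative values for $N$, $\lambda$, the gains, and the $(\theta_i, r_i)$ pairs; the paper's exact conventions may differ.

```python
import numpy as np


def near_field_steering(N: int, d: float, lam: float, theta: float, r: float) -> np.ndarray:
    """Spherical-wave array response b(theta, r) of an N-element ULA.

    theta is the spatial angle (sine of the physical angle), r the range
    from the array center, d the element spacing, lam the wavelength.
    """
    delta = (2 * np.arange(N) - N + 1) / 2  # symmetric element indices
    # Exact element-to-user distances via the law of cosines.
    r_n = np.sqrt(r**2 + (delta * d) ** 2 - 2 * r * delta * d * theta)
    return np.exp(-1j * 2 * np.pi / lam * (r_n - r))


# Two-path channel h = g_1 b(theta_1, r_1) + g_2 b(theta_2, r_2), as in Fig. 2.
N, lam = 256, 0.01                          # hypothetical XL-array setup
d = lam / 2
gains = [1.0, 0.5 * np.exp(1j * 0.7)]       # illustrative complex gains
paths = [(0.2, 10.0), (-0.3, 25.0)]         # illustrative (theta_i, r_i) pairs
h = sum(g * near_field_steering(N, d, lam, th, r) for g, (th, r) in zip(gains, paths))

# Received power pattern over the N-point DFT codebook: p_m = |h^H w_m|^2.
W = np.exp(1j * 2 * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
p = np.abs(h.conj() @ W) ** 2
```

Because the two components add coherently, small shifts in any $(\theta_i, r_i, g_i)$ reshape $p$ substantially, which is the sensitivity Figure 2 highlights and the reason a learned front end precedes the model-based refinement.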