MPTF-Net: Multi-view Pyramid Transformer Fusion Network for LiDAR-based Place Recognition

Shuyuan Li, Zihang Wang, Xieyuanli Chen, Wenkai Zhu, Xiaoteng Fang, Peizhou Ni, Junhao Yang, Dong Kong

Abstract

LiDAR-based place recognition (LPR) is essential for global localization and loop-closure detection in large-scale SLAM systems. Existing methods typically construct global descriptors from Range Images or bird's-eye view (BEV) representations for matching. BEV is widely adopted due to its explicit 2D spatial layout encoding and efficient retrieval. However, conventional BEV representations rely on simple statistical aggregation, which fails to capture fine-grained geometric structures, leading to performance degradation in complex or repetitive environments. To address this, we propose MPTF-Net, a novel multi-view, multi-scale pyramid Transformer fusion network. Our core contribution is a multi-channel NDT-based BEV encoding that explicitly models local geometric complexity and intensity distributions via the Normal Distributions Transform (NDT), providing a noise-resilient structural prior. To effectively integrate these features, we develop a customized pyramid Transformer module that captures cross-view interactive correlations between Range Image Views (RIV) and NDT-BEV at multiple spatial scales. Extensive experiments on the nuScenes, KITTI, and NCLT datasets demonstrate that MPTF-Net achieves state-of-the-art performance, attaining a Recall@1 of 96.31% on the nuScenes Boston split while maintaining an inference latency of only 10.02 ms, making it well suited for real-time autonomous unmanned systems.
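
To make the cross-view fusion concrete, below is a minimal sketch (in PyTorch) of multi-scale attention between BEV and RIV feature maps; the feature dimension, pyramid scales, pooling, and the single-direction BEV-to-RIV attention are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of cross-view, multi-scale attention fusion between BEV and RIV
# feature maps. Dimensions, scales, and the one-directional attention are
# assumptions for illustration; the paper's pyramid Transformer may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossViewFusion(nn.Module):
    def __init__(self, dim=256, heads=8, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.attn = nn.ModuleList(
            nn.MultiheadAttention(dim, heads, batch_first=True) for _ in scales
        )
        self.proj = nn.Linear(dim * len(scales), dim)

    def forward(self, bev_feat, riv_feat):
        """bev_feat: (B, C, Hb, Wb) from the BEV branch; riv_feat: (B, C, Hr, Wr) from the RIV branch."""
        fused = []
        for s, attn in zip(self.scales, self.attn):
            # Pool both views to the current pyramid scale and flatten them into token sequences.
            q = F.avg_pool2d(bev_feat, s).flatten(2).transpose(1, 2)   # (B, Nb, C) BEV queries
            kv = F.avg_pool2d(riv_feat, s).flatten(2).transpose(1, 2)  # (B, Nr, C) RIV keys/values
            out, _ = attn(q, kv, kv)                                   # cross-view attention at this scale
            fused.append(out.mean(dim=1))                              # (B, C) per-scale summary
        return self.proj(torch.cat(fused, dim=-1))                     # (B, C) fused feature for aggregation

# Toy usage with random maps standing in for the ResNet backbone outputs.
bev = torch.randn(2, 256, 32, 32)
riv = torch.randn(2, 256, 16, 64)
print(CrossViewFusion()(bev, riv).shape)  # torch.Size([2, 256])
```

In the actual pipeline, the fused features are aggregated by the context-gating enhanced NetVLAD into the final global descriptor; the mean-pooled summary above is only a stand-in for that step.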

Paper Structure

This paper contains 18 sections, 12 equations, 8 figures, and 4 tables.

Figures (8)

  • Figure 1: Overview of the proposed MPTF-Net, a novel multi-view fusion-driven global descriptor extraction network for LiDAR-based place recognition.
  • Figure 2: Overall pipeline of MPTF-Net. The network jointly exploits RIV and BEV representations containing geometric and intensity cues. RIV and BEV branches adopt ResNet-based backbones, and the multi-scale Transformer fusion module captures cross-view interactions. Finally, the context-gating enhanced NetVLAD aggregates the fused features into discriminative, viewpoint-invariant global descriptors.
  • Figure 3: Block diagram of the BEV multi-feature encoding structure. The point cloud is divided into polar-coordinate grid cells, the point clusters within each cell are selected, and NDT statistics of geometry and intensity are then computed (a minimal encoding sketch follows this figure list).
  • Figure 4: Visualization of multimodal BEV features. These maps capture complementary structural and radiometric information.
  • Figure 5: Analysis of multi-scale fusion strategies on recall performance.
  • ...and 3 more figures
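
To illustrate the BEV multi-feature encoding described in Figure 3, below is a minimal sketch of an NDT-style multi-channel BEV encoding over a polar grid; the grid resolution and the particular channels (planarity, sphericity, intensity mean and variance) are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of a multi-channel NDT-style BEV encoding over a polar grid.
# The channel set and grid resolution are illustrative assumptions.
import numpy as np

def ndt_bev_encode(points, intensities, num_rings=40, num_sectors=120, max_range=80.0):
    """points: (N, 3) xyz coordinates; intensities: (N,) reflectance values."""
    r = np.linalg.norm(points[:, :2], axis=1)
    theta = np.arctan2(points[:, 1], points[:, 0])                  # azimuth in [-pi, pi)
    ring = np.clip((r / max_range * num_rings).astype(int), 0, num_rings - 1)
    sector = ((theta + np.pi) / (2 * np.pi) * num_sectors).astype(int) % num_sectors

    bev = np.zeros((4, num_rings, num_sectors), dtype=np.float32)
    for i in range(num_rings):
        for j in range(num_sectors):
            mask = (ring == i) & (sector == j)
            if mask.sum() < 3:                                      # too few points for a stable covariance
                continue
            cell = points[mask]
            cov = np.cov(cell.T)                                    # 3x3 NDT covariance of the cell
            w = np.maximum(np.sort(np.linalg.eigvalsh(cov))[::-1], 1e-9)  # eigenvalues, descending
            bev[0, i, j] = (w[1] - w[2]) / w[0]                     # planarity: local geometric complexity
            bev[1, i, j] = w[2] / w[0]                              # sphericity / scatter
            bev[2, i, j] = intensities[mask].mean()                 # intensity distribution, mean
            bev[3, i, j] = intensities[mask].var()                  # intensity distribution, variance
    return bev                                                      # (4, num_rings, num_sectors) BEV tensor
```

In the described pipeline, channel maps of this kind would form the multi-channel BEV image consumed by the ResNet-based BEV branch.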