
Robust Multi-Robot Global Localization with Unknown Initial Pose based on Neighbor Constraints

Yaojie Zhang, Haowen Luo, Weijun Wang, Wei Feng

TL;DR

This work tackles multi-robot global localization with unknown initial poses by leveraging semantic graphs to bridge viewpoint gaps. It builds 3D semantic graphs from semantic, depth, and pose data, and uses graph descriptors for initial matching. A novel neighbor-constraints-based pre-rejection reduces outliers before RANSAC, followed by ICP-based pose estimation to recover the relative pose. Experiments on AirSim, SYNTHIA, and KITTI demonstrate improved robustness to low map overlap, enhanced accuracy, and reduced computation time compared to prior graph-based methods.
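The neighbor-constraints pre-rejection described above can be sketched as a consistency vote over candidate correspondences: a descriptor match between two nodes is kept only if enough of their graph neighbors are also matched to each other. This is an illustrative sketch, not code from the paper; the function and parameter names (`prune_by_neighbor_constraints`, `min_support`) and the exact voting rule are assumptions.

```python
def prune_by_neighbor_constraints(matches, neighbors_a, neighbors_b, min_support=2):
    """Pre-reject candidate correspondences whose neighborhoods disagree.

    matches: list of (i, j) pairs -- node i in robot A's semantic graph
             matched to node j in robot B's (e.g. by descriptor similarity).
    neighbors_a / neighbors_b: dict mapping node id -> set of neighbor ids.
    A pair survives only if at least `min_support` pairs of its neighbors
    are themselves candidate matches (a simple consistency vote).
    """
    match_set = set(matches)
    kept = []
    for i, j in matches:
        # Count neighbor pairs (ni, nj) that are also candidate matches.
        support = sum(1 for ni in neighbors_a[i]
                        for nj in neighbors_b[j]
                        if (ni, nj) in match_set)
        if support >= min_support:
            kept.append((i, j))
    return kept
```

Survivors of this cheap filter would then be passed to RANSAC and ICP, which is where the reported reduction in computation time plausibly comes from: far fewer outliers reach the expensive sampling stage.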

Abstract

Multi-robot global localization (MR-GL) with unknown initial positions in a large-scale environment is a challenging task. The key difficulty is data association across different robots' viewpoints, which also renders traditional appearance-based localization methods unusable. Recently, researchers have exploited the semantic invariance of objects to build semantic graphs that address this issue. However, previous works lack robustness and are sensitive to the overlap rate between maps, resulting in unpredictable performance in real-world environments. In this paper, we propose a data association algorithm based on neighbor constraints to improve the robustness of the system. We demonstrate the effectiveness of our method on three different datasets, showing a significant improvement in robustness over previous works.

Paper Structure

This paper contains 20 sections, 7 equations, 7 figures, 2 tables, and 1 algorithm.

Figures (7)

  • Figure 1: An example of a challenging task. Each graph has about 1300 nodes and contains many repetitive scenarios. The figure shows the worst localization condition for the previous method as well as ours. The previous method has a 9% failure rate (translation error over 20 m).
  • Figure 2: Overall architecture of our approach. The 3D semantic graph for each robot is first built from semantic frames, depth frames, and poses. Then the descriptor of each node is extracted to enable approximate graph matching, and the two graphs are matched by comparing descriptors across them. Specifically, our method uses the neighbor constraints to perform a preliminary rejection. Finally, the two robots achieve global localization using the corrected matched correspondences.
  • Figure 3: Sample images from the three datasets used in the experiments. (top) Semantic segmentation, (bottom) depth image. SYNTHIA and AirSim provide perfect semantic segmentation and depth images; KITTI uses label-relaxation (zhu2019improving) semantic segmentation and BTS (lee2019big) depth images.
  • Figure 4: Detailed translation error comparisons. Blue points/bars are the previous approach's results; orange is ours. (a) The RANSAC threshold is the accepted deviation for RANSAC; translation error is the Euclidean distance (m). Each point is an average over 100 runs. (b) Error distributions at a RANSAC threshold of 5 m.
  • Figure 5: PR curve of predicted matches, indicating the ability to find correct matches. Precision = 1 means all predicted matches are good matches; recall = 1 means all good matches are found.
  • ...and 2 more figures
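The precision and recall plotted in the Figure 5 PR curve follow their standard definitions over match sets. This small helper is an illustrative sketch, not code from the paper; the name `precision_recall` and the set-based interface are assumptions.

```python
def precision_recall(predicted, ground_truth):
    """Precision/recall of predicted graph-node matches.

    predicted / ground_truth: sets of (i, j) correspondence pairs.
    precision = 1.0 -> every predicted match is a good match;
    recall = 1.0    -> every good match was found.
    """
    true_positives = len(predicted & ground_truth)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```

Sweeping the matcher's acceptance threshold and recording one (precision, recall) point per setting traces out the curve shown in the figure.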