
Annotation-Free Detection of Drivable Areas and Curbs Leveraging LiDAR Point Cloud Maps

Fulong Ma, Daojie Peng, Jun Ma

Abstract

Drivable areas and curbs are critical traffic elements for autonomous driving, forming essential components of the vehicle visual perception system and ensuring driving safety. Deep neural networks (DNNs) have significantly improved perception performance for drivable area and curb detection, but most DNN-based methods rely on large manually labeled datasets, which are costly, time-consuming, and expert-dependent, limiting their real-world application. Thus, we developed an automated training data generation module. Our previous work generated training labels using single-frame LiDAR and RGB data, suffering from occlusion and distant point cloud sparsity. In this paper, we propose a novel map-based automatic data labeler (MADL) module, combining LiDAR mapping/localization with curb detection to automatically generate training data for both tasks. MADL avoids occlusion and point cloud sparsity issues via LiDAR mapping, creating accurate large-scale datasets for DNN training. In addition, we construct a data review agent to filter the data generated by the MADL module, eliminating low-quality samples. Experiments on the KITTI, KITTI-CARLA and 3D-Curb datasets show that MADL achieves impressive performance compared to manual labeling, and outperforms traditional and state-of-the-art self-supervised methods in robustness and accuracy.


Paper Structure

This paper contains 18 sections, 5 equations, 4 figures, 6 tables.

Figures (4)

  • Figure 2: We use traditional methods to detect curbs from single-frame point cloud, and then leverage SLAM technology to construct curb maps and drivable area maps. These maps are used in subsequent steps to retrieve curb and drivable area information through localization within the map.
  • Figure 3: Given the point cloud input at a certain moment, the positional information at that moment is obtained through point cloud localization. Then, curb points and drivable area points within a specified range are retrieved from the map. The retrieved curb points can be directly used as training data for the curb detection task. The retrieved drivable area point cloud is projected onto the image plane using camera intrinsics, and through post-processing, training data for the drivable area detection task can be obtained.
  • Figure 4: The schematic diagram of our Data Review Agent. It takes as input three images from different modalities (a point cloud top-down view, an RGB image, and an altitude difference image), each containing curb detection information. Based on the quality of the curb detection, the agent then outputs whether the detection data from that frame should be retained.
  • Figure 5: The qualitative comparison results of our MADL with TDG [mayr2018self], SSLG [wang2019self], and ADL [ma2023self] on the KITTI and KITTI-CARLA datasets. (a): Comparison on the KITTI dataset. (b): Comparison on the KITTI-CARLA dataset.
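The Figure 3 pipeline projects the drivable-area points retrieved from the map onto the image plane using the camera intrinsics. A minimal pinhole-projection sketch of that step is shown below; the function name, frame convention (points already transformed into the camera frame, z forward), and filtering details are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def project_points_to_image(points_cam, K, image_size):
    """Project 3D points onto the image plane with a pinhole model.

    points_cam: (N, 3) points already in the camera frame
                (x right, y down, z forward) -- assumed convention.
    K:          (3, 3) camera intrinsic matrix.
    image_size: (width, height) in pixels.
    Returns (M, 2) pixel coordinates of points inside the image.
    """
    # Keep only points in front of the camera (positive depth).
    pts = points_cam[points_cam[:, 2] > 0]

    # Pinhole projection: u = fx * x / z + cx, v = fy * y / z + cy.
    uv_hom = (K @ pts.T).T              # (M, 3) homogeneous pixels
    uv = uv_hom[:, :2] / uv_hom[:, 2:3]  # divide by depth

    # Discard projections that fall outside the image bounds.
    w, h = image_size
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) \
           & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[inside]
```

In the paper's pipeline the resulting pixel mask would then be post-processed (e.g. filled and denoised) to produce drivable-area training labels; that post-processing is not sketched here.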