Benchmarking Multi-View BEV Object Detection with Mixed Pinhole and Fisheye Cameras

Xiangzhong Liu, Hao Shen

Abstract

Modern autonomous driving systems increasingly rely on mixed camera configurations with pinhole and fisheye cameras for full-view perception. However, Bird's-Eye View (BEV) 3D object detection models are predominantly designed for pinhole cameras, leading to performance degradation under fisheye distortion. To bridge this gap, we introduce a multi-view BEV detection benchmark with mixed cameras by converting KITTI-360 into the nuScenes format. Our study encompasses three adaptations: rectification for zero-shot evaluation and fine-tuning of nuScenes-trained models, distortion-aware view transformation modules (VTMs) based on the MEI camera model, and polar coordinate representations that better align with radial distortion. We systematically evaluate three representative BEV architectures, BEVFormer, BEVDet, and PETR, across these strategies. We demonstrate that projection-free architectures are inherently more robust and effective against fisheye distortion than projection-based VTMs. This work establishes the first real-data 3D detection benchmark with fisheye and pinhole images and provides systematic adaptation strategies and practical guidelines for designing robust and cost-effective 3D perception systems. The code is available at https://github.com/CesarLiu/FishBEVOD.git.
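
To make the MEI-based view transformation concrete, below is a minimal sketch of the MEI (unified) camera model projection that such a distortion-aware VTM relies on: a 3D point is projected onto the unit sphere, shifted by the mirror parameter xi along the optical axis, and then mapped to pixels with the intrinsic matrix. The function name, argument layout, and the omission of the radial/tangential distortion step are illustrative choices, not taken from the released code.

```python
import numpy as np

def mei_project(points_cam, xi, K):
    """Project 3D points (camera frame, shape (N, 3)) to pixel coordinates
    with the MEI unified camera model.

    Steps: (1) project onto the unit sphere, (2) shift the sphere point by
    the mirror parameter xi along the optical axis, (3) pinhole projection
    with intrinsics K. The radial/tangential distortion step of the full
    model is omitted in this sketch.
    """
    p = np.asarray(points_cam, dtype=np.float64)
    sphere = p / np.linalg.norm(p, axis=1, keepdims=True)  # (1) unit sphere
    z = sphere[:, 2] + xi                                   # (2) shift by xi
    valid = z > 1e-6                # points that project in front of the camera
    m = sphere[:, :2] / z[:, None]                          # normalized plane
    uv = m @ K[:2, :2].T + K[:2, 2]                         # (3) intrinsics
    return uv, valid
```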

Paper Structure

This paper contains 30 sections, 6 equations, 5 figures, 3 tables.

Figures (5)

  • Figure 2: Qualitative detection results on the KITTI-360 mixed camera configuration. 3D bounding boxes and the point cloud rendering are overlaid on pinhole and fisheye images. The visualization is created with a customized nuScenes devkit. Our method successfully handles the severe radial distortion of the fisheye cameras while maintaining consistent detection accuracy across camera types.
  • Figure 3: Camera configuration and full field-of-view coverage comparison. Left: Original KITTI-360 with stereo cameras and fisheye cameras. Middle: Rectified KITTI-360 with 6 pinhole cameras. Right: nuScenes configuration with 6 pinhole cameras.
  • Figure 4: Overview of the distortion-aware BEV 3D object detection framework. Multi-view images (pinhole + fisheye) are processed by a shared backbone encoder and then fed into three distortion-aware view transformation modules based on the MEI camera model. The resulting BEV features can be represented in either Cartesian or polar coordinates to better align with fisheye geometry (see the polar-grid sketch after this list). A detection head (Transformer or CNN) processes the BEV features to produce the final 3D object detection outputs.
  • Figure 5: Class distribution in the KITTI-360 dataset (log scale). The annotation distribution exhibits significant class imbalance, heavily skewed toward static infrastructure objects, while dynamic object classes relevant to autonomous driving other than cars make up a smaller portion. Blue bars indicate training samples and orange bars validation samples, with consistent ratios maintained across splits.
  • Figure 6: Model robustness under camera failure scenarios. FL, FB, and SL denote the front-left, front-both, and side-left camera failure cases, respectively.
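
As a rough illustration of the polar BEV representation mentioned in the abstract and in Figure 4, the sketch below builds a polar BEV grid and returns the ego-frame Cartesian centers of its cells, which could be used to warp or sample Cartesian BEV features into range-azimuth bins. The bin counts, maximum range, and axis convention are placeholder assumptions, not the paper's settings.

```python
import numpy as np

def polar_bev_cell_centers(num_range_bins=128, num_azimuth_bins=256,
                           max_range=51.2):
    """Return the ego-frame (x, y) centers of a polar BEV grid, shape
    (num_range_bins, num_azimuth_bins, 2). Rings are spaced uniformly in
    range and spokes uniformly in azimuth over the full 360 degrees."""
    r = (np.arange(num_range_bins) + 0.5) * (max_range / num_range_bins)
    theta = (np.arange(num_azimuth_bins) + 0.5) * (2.0 * np.pi / num_azimuth_bins)
    rr, tt = np.meshgrid(r, theta, indexing="ij")   # (R, A) range/azimuth grids
    x = rr * np.cos(tt)                             # mapping of x/y to forward/
    y = rr * np.sin(tt)                             #   lateral depends on the ego frame
    return np.stack([x, y], axis=-1)
```

Sampling Cartesian BEV features at these centers (e.g. with bilinear interpolation) yields a grid whose rows align with radial distance from the ego vehicle, which is the alignment with radial fisheye distortion that the polar variant aims to exploit.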