Drive-Through 3D Vehicle Exterior Reconstruction via Dynamic-Scene SfM and Distortion-Aware Gaussian Splatting

Nitin Kulkarni, Akhil Devarashetti, Charlie Cluss, Livio Forte, Philip Schneider, Chunming Qiao, Alina Vereshchaka

Abstract

High-fidelity 3D reconstruction of vehicle exteriors improves buyer confidence in online automotive marketplaces, but generating these models in cluttered dealership drive-throughs presents severe technical challenges. Unlike static-scene photogrammetry, this setting features a dynamic vehicle moving against heavily cluttered, static backgrounds. This problem is further compounded by wide-angle lens distortion, specular automotive paint, and non-rigid wheel rotations that violate classical epipolar constraints. We propose an end-to-end pipeline utilizing a two-pillar camera rig. First, we resolve dynamic-scene ambiguities by coupling SAM 3 for instance segmentation with motion-gating to cleanly isolate the moving vehicle, explicitly masking out non-rigid wheels to enforce strict epipolar geometry. Second, we extract robust correspondences directly on raw, distorted 4K imagery using the RoMa v2 learned matcher guided by semantic confidence masks. Third, these matches are integrated into a rig-aware SfM optimization that utilizes CAD-derived relative pose priors to eliminate scale drift. Finally, we use a distortion-aware 3D Gaussian Splatting framework (3DGUT) coupled with a stochastic Markov Chain Monte Carlo (MCMC) densification strategy to render reflective surfaces. Evaluations on 25 real-world vehicles across 10 dealerships demonstrate that our full pipeline achieves a PSNR of 28.66 dB, an SSIM of 0.89, and an LPIPS of 0.21 on held-out views, representing a 3.85 dB improvement over standard 3D-GS, delivering inspection-grade interactive 3D models without controlled studio infrastructure.
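The rigid-vehicle isolation step described above amounts to boolean mask composition: keep pixels that are both on the segmented vehicle and moving, then subtract the non-rigid wheel regions. A minimal sketch, assuming the instance mask (e.g. from SAM 3), motion gate, and wheel masks are supplied upstream; the function name is hypothetical, not the paper's API:

```python
import numpy as np

def rigid_vehicle_mask(instance_mask, motion_mask, wheel_masks):
    """Combine masking stages: motion-gate the instance mask,
    then remove non-rigid wheel regions.

    All inputs are boolean H x W arrays.
    """
    mask = instance_mask & motion_mask      # motion-gated instance pixels
    for wheel in wheel_masks:               # drop rotating (non-rigid) wheels
        mask &= ~wheel
    return mask

# Toy 4x4 example: a 6-pixel "vehicle", everything moving, one wheel pixel
inst = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]], dtype=bool)
mot = np.ones((4, 4), dtype=bool)
wheel = np.zeros((4, 4), dtype=bool)
wheel[2, 1] = True
m = rigid_vehicle_mask(inst, mot, [wheel])
```

Only the surviving mask pixels are passed to feature matching, which is what lets the downstream SfM stage treat the vehicle as a single rigid body satisfying epipolar constraints.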

Paper Structure

This paper contains 26 sections, 17 equations, 7 figures, 2 tables, and 1 algorithm.

Figures (7)

  • Figure 1: Overview of the proposed end-to-end 3D exterior reconstruction pipeline. (1) Camera Calibration: One-time estimation of fisheye intrinsics and CAD-derived relative rig extrinsics. (2) Data Acquisition: Drive-through video capture of the moving vehicle using a 14-camera dual-pillar rig. (3) Rigid Vehicle Isolation: SAM 3 and motion-gating isolate the dynamic vehicle while explicitly subtracting non-rigid wheel rotations to satisfy epipolar constraints. (4) Feature Matching: The RoMa v2 learned matcher extracts robust dense correspondences directly on the raw, distorted frames. (5) Structure-from-Motion (SfM): Rig-aware global bundle adjustment geometrically verifies matches and estimates the camera poses using static structural hardware priors. (6) Gaussian Splatting: A distortion-aware 3D-GS architecture (3DGUT) renders the final photorealistic, interactive 3D model.
  • Figure 2: Data collection in a commercial dealership. The 14-camera dual-pillar rig captures 4K video as the vehicle traverses the highly cluttered, unconstrained capture volume. To maximize coverage in a single pass, cameras are mounted on both the driver's and passenger's sides across three vertical tiers (16, 55, and 97 inches), featuring front-, side-, and rear-facing yaw orientations.
  • Figure 3: Rigid vehicle isolation and tracking. Top: representative frames. Bottom: corresponding rigid-body masks produced by motion-gated SAM 3 tracking. The pipeline tracks the dynamic vehicle while suppressing background dealership clutter and explicitly masking out the non-rigid rotating wheels.
  • Figure 4: RoMa v2 correspondence visualization on distorted images after rigid-body masking. The dense green correspondence lines confirm that the learned matcher successfully bridges extreme viewpoint changes on the raw 4K frames, overcoming wide-FOV fisheye distortion.
  • Figure 5: Representative successful sparse point cloud produced by the full pipeline, visualized from multiple viewpoints for the same vehicle. The sparse point cloud forms a coherent vehicle shell with limited background structure, providing a stable geometric scaffold for downstream Gaussian Splatting.
  • ...and 2 more figures