Unblur-SLAM: Dense Neural SLAM for Blurry Inputs

Qi Zhang, Denis Rozumny, Francesco Girlanda, Sezer Karaoglu, Marc Pollefeys, Theo Gevers, Martin R. Oswald

Abstract

We propose Unblur-SLAM, a novel RGB SLAM pipeline for sharp 3D reconstruction from blurred image inputs. In contrast to previous work, our approach handles different types of blur and demonstrates state-of-the-art performance in the presence of both motion blur and defocus blur. Moreover, we adapt the computational effort to the amount of blur in the input image. As a first stage, our method uses a feed-forward image deblurring model, for which we propose a suitable training scheme that improves both the tracking and mapping modules. Frames that are successfully deblurred by the feed-forward network obtain refined poses and depth through local-global multi-view optimization and loop closure. Frames for which first-stage deblurring fails are instead modeled directly through the global 3DGS representation and an additional blur network that models multiple blurred sub-frames and simulates the blur formation process in 3D space, thereby learning sharp details and refined sub-frame poses. Experiments on several real-world datasets demonstrate consistent improvements in both pose estimation and in sharp reconstruction of geometry and texture.
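The abstract describes a three-way routing of frames by blur level: sharp frames skip refinement, moderately blurred frames pass through the feed-forward deblurring network before multi-view optimization, and heavily blurred frames are handled by the 3DGS representation with the blur network. The following sketch is purely illustrative: the function names, thresholds, and score range are assumptions, not the authors' actual interface.

```python
def classify_blur(score, sharp_thresh=0.2, heavy_thresh=0.7):
    """Categorize a frame's blur score (assumed in [0, 1]) into the three
    processing levels described in the paper. Thresholds are illustrative."""
    if score < sharp_thresh:
        return "sharp"    # skip costly deblurring and refinement entirely
    if score < heavy_thresh:
        return "blurry"   # feed-forward deblur, then multi-view optimization
    return "heavy"        # model blur in 3D via 3DGS + blur network


def process_frame(frame, score, deblur, track_and_map, blur_refine):
    """Route one frame through the hypothetical pipeline stages.

    `deblur`, `track_and_map`, and `blur_refine` stand in for the
    feed-forward deblurring model, the tracking/mapping modules, and the
    3DGS blur-network refinement, respectively."""
    level = classify_blur(score)
    if level == "sharp":
        return track_and_map(frame)
    if level == "blurry":
        return track_and_map(deblur(frame))
    return blur_refine(frame)
```

Routing per frame is what lets the average runtime stay close to a regular SLAM method when most of the input sequence is sharp.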

Paper Structure

This paper contains 23 sections, 17 equations, 9 figures, 8 tables, 1 algorithm.

Figures (9)

  • Figure 1: Unblur-SLAM results. Our SLAM-approach integrates both a feed-forward deblurring and rerendering-based test-time refinement effectively. The latter one estimates a local point spread function, which enables our method to handle multiple sources of blur, demonstrating excellent performance for both motion and defocus blur. While previous blur-aware SLAM approaches typically assume all input frames to be blurry and are thus significantly slower than regular SLAM methods, Unblur-SLAM detects the amount of blur in the input frame and skips the costly refinement for sharp frames.
  • Figure 2: Method overview. Unblur-SLAM robustly handles varying blur by adaptively categorizing images into sharp, blurry, and heavily blurred levels (shown in different red shades). Since typically only a subset of frames is blurred, this improves both blur handling and average runtime. Both tracking and mapping modules optionally leverage the deblurring network. The red mapping module optimizes the 3DGS reconstruction using sliding-window (Eq. \ref{eq:bundle_adjustment}) and global (Eq. \ref{eq:global_bundle_adjustment}) losses, incorporating depth (Eq. \ref{eq:update_gaussian}) and pose from the blue tracking module.
  • Figure 3: Deblurring performance of our method when compared to other state-of-the-art offline methods.
  • Figure 4: Trajectory comparison with Droid-SLAM on the indoor MCD dataset 10016760.
  • Figure 5: Qualitative comparison with MBA-SLAM [wang2025mba] on the TUM dataset [sturm12iros]. Our method yields sharper reconstruction results in many scene parts. The qualitative experimental results for the shown fr1_desk sequence were obtained through communication with the MBA-SLAM authors.
  • ...and 4 more figures
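Figure 1 and the abstract describe simulating the blur formation process by modeling multiple sub-frames in 3D: a motion-blurred image can be synthesized by rendering the scene at several sub-frame poses within the exposure window and averaging the results. The sketch below illustrates only this averaging idea; the function names and the simple uniform average are assumptions, not the paper's actual formulation.

```python
import numpy as np


def simulate_blur(render, sub_frame_poses):
    """Synthesize a blurred image by averaging renders at sub-frame poses.

    `render` is a stand-in for rendering the (e.g. 3DGS) scene at a given
    pose; uniform weighting over sub-frames is an illustrative assumption."""
    frames = [render(pose) for pose in sub_frame_poses]
    return np.mean(frames, axis=0)
```

Optimizing the sub-frame poses (and the sharp scene) so that this synthesized blur matches the observed blurry input is what allows sharp details to be recovered from heavily blurred frames.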