Dynamic Gaussian Marbles for Novel View Synthesis of Casual Monocular Videos

Colton Stearns, Adam Harley, Mikaela Uy, Florian Dubost, Federico Tombari, Gordon Wetzstein, Leonidas Guibas

TL;DR

Dynamic Gaussian Marbles addresses the challenge of reconstructing and rendering dynamic scenes from casually captured monocular videos by constraining a Gaussian-based representation. The method introduces isotropic Gaussian marbles with motion trajectories, a divide-and-conquer optimization strategy, and image-space and geometry-space priors, including a tracking loss and a 3D Chamfer alignment. Empirically, Gaussian Marbles outperforms prior Gaussian baselines in the monocular setting and is competitive with NeRF approaches, while offering advantages in rendering speed, tracking, and editability. This work advances practical monocular dynamic scene synthesis, enabling robust novel-view rendering and editing with efficient computation on standard hardware.
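For intuition, the 3D Chamfer alignment mentioned above reduces to a symmetric nearest-neighbor distance between two sets of Gaussian centers. Below is a minimal sketch assuming PyTorch; the function name and the brute-force pairwise distance are illustrative assumptions, not the paper's implementation.

```python
import torch

def chamfer_loss(pts_a: torch.Tensor, pts_b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets pts_a (N, 3) and pts_b (M, 3)."""
    # Pairwise squared Euclidean distances, shape (N, M).
    d2 = torch.cdist(pts_a, pts_b).pow(2)
    # Average distance from each point to its nearest neighbor in the other set,
    # in both directions, so neither set can "hide" from the other.
    return d2.min(dim=1).values.mean() + d2.min(dim=0).values.mean()
```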

Abstract

Gaussian splatting has become a popular representation for novel-view synthesis, exhibiting clear strengths in efficiency, photometric quality, and compositional editability. Following its success, many works have extended Gaussians to 4D, showing that dynamic Gaussians maintain these benefits while also tracking scene geometry far better than alternative representations. Yet, these methods assume dense multi-view videos as supervision. In this work, we are interested in extending the capability of Gaussian scene representations to casually captured monocular videos. We show that existing 4D Gaussian methods dramatically fail in this setup because the monocular setting is underconstrained. Building on this finding, we propose a method we call Dynamic Gaussian Marbles, which consists of three core modifications that target the difficulties of the monocular setting. First, we use isotropic Gaussian "marbles", reducing the degrees of freedom of each Gaussian. Second, we employ a hierarchical divide-and-conquer learning strategy to efficiently guide the optimization towards solutions with globally coherent motion. Finally, we add image-level and geometry-level priors into the optimization, including a tracking loss that takes advantage of recent progress in point tracking. By constraining the optimization, Dynamic Gaussian Marbles learns Gaussian trajectories that enable novel-view rendering and accurately capture the 3D motion of the scene elements. We evaluate on the Nvidia Dynamic Scenes dataset and the DyCheck iPhone dataset, and show that Gaussian Marbles significantly outperforms other Gaussian baselines in quality and is on par with non-Gaussian representations, all while maintaining the efficiency, compositionality, editability, and tracking benefits of Gaussians. Our project page can be found at https://geometry.stanford.edu/projects/dynamic-gaussian-marbles.github.io/.
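To make the first modification concrete, here is a minimal sketch (in PyTorch, with hypothetical class and field names, not the authors' implementation) of how an isotropic marble reduces per-Gaussian degrees of freedom: a single radius replaces the usual three per-axis scales plus rotation quaternion of anisotropic splatting, and the mean becomes a per-frame trajectory.

```python
import torch
import torch.nn as nn

class GaussianMarbles(nn.Module):
    """Illustrative parameterization of isotropic Gaussian marbles."""

    def __init__(self, num_gaussians: int, num_frames: int):
        super().__init__()
        # One 3D mean per frame per Gaussian: the learned motion trajectory.
        self.trajectory = nn.Parameter(torch.zeros(num_frames, num_gaussians, 3))
        # A single log-radius per Gaussian replaces the three per-axis scales
        # and the rotation quaternion -- the isotropic "marble" constraint.
        self.log_radius = nn.Parameter(torch.zeros(num_gaussians))
        self.logit_opacity = nn.Parameter(torch.zeros(num_gaussians))
        self.color = nn.Parameter(torch.rand(num_gaussians, 3))

    def covariance(self) -> torch.Tensor:
        # Isotropic covariance Sigma = r^2 * I, shape (num_gaussians, 3, 3).
        r2 = torch.exp(self.log_radius).pow(2)
        return r2[:, None, None] * torch.eye(3)
```

Dropping the rotation and per-axis scales removes exactly the parameters that are hardest to constrain from a single viewpoint, which is the intuition behind why marbles generalize better to novel views (Figure 2).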


Paper Structure

This paper contains 48 sections, 6 equations, 7 figures, and 5 tables.

Figures (7)

  • Figure 1: Gaussian Marbles Overview. At training time (left), we take as input a video and optimize a Gaussian-based reconstruction of the data. We begin by initializing a set of Gaussians for each frame. Then, we employ a bottom-up divide-and-conquer strategy to merge sets of Gaussians, which iteratively attributes longer motion trajectories to Gaussians. Motion estimation is achieved by optimizing a rendering loss (i.e., color reconstruction), a tracking loss (i.e., Gaussians should move similarly to point tracks), and geometry-based losses (e.g., local rigidity). After training (right), each Gaussian has a multi-frame trajectory, and we can render any timestep using the set of Gaussian trajectories that span it.
  • Figure 2: We train anisotropic Gaussians and our Gaussian Marbles for 100K iterations on a single monocular RGBD image. While the training view reconstruction is perfect for both, anisotropic Gaussians lead to undesirable artifacts in novel views, whereas Gaussian marbles generalize well.
  • Figure 3: Our divide-and-conquer learning algorithm iteratively estimates motion between pairs of Gaussian sets, merges the sets, and performs a global adjustment on the Gaussian marbles within the merged sets (see the sketch after this list).
  • Figure 4: We visualize novel view synthesis results from Gaussian Marbles and baselines on various scenes of the DyCheck iPhone dataset (in the setting without camera poses).
  • Figure 5: We visualize dense point tracking of Gaussian Marbles on two scenes from the DyCheck iPhone dataset (in the setting where camera pose is withheld).
  • ...and 2 more figures
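As referenced in the Figure 3 caption, the sketch below outlines one plausible shape of the bottom-up divide-and-conquer schedule from Figures 1 and 3: start with one Gaussian set per frame, estimate motion across neighboring pairs, merge them, and globally adjust, so trajectories roughly double in temporal extent each round. The three stage functions are hypothetical stand-ins for the paper's optimization stages, not the authors' code.

```python
# Hypothetical stand-ins for the three per-round stages described in the captions.
def estimate_motion(left, right):
    """Optimize motion so `left`'s trajectories extend into `right`'s frames,
    using the rendering, tracking, and geometry-based losses."""

def merge(left, right):
    """Concatenate two Gaussian sets (here, plain lists) into one set
    spanning both time windows."""
    return left + right

def global_adjust(merged):
    """Briefly refine all marbles in the merged set jointly."""

def divide_and_conquer(per_frame_gaussians):
    """Bottom-up merging: begin with one Gaussian set per frame and pairwise
    merge until a single set spans the whole video."""
    sets = list(per_frame_gaussians)
    while len(sets) > 1:
        merged = []
        for left, right in zip(sets[0::2], sets[1::2]):
            estimate_motion(left, right)
            pair = merge(left, right)
            global_adjust(pair)
            merged.append(pair)
        if len(sets) % 2 == 1:  # carry an unpaired trailing set to the next round
            merged.append(sets[-1])
        sets = merged
    return sets[0]
```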