
SyncNoise: Geometrically Consistent Noise Prediction for Text-based 3D Scene Editing

Ruihuang Li, Liyi Chen, Zhengqiang Zhang, Varun Jampani, Vishal M. Patel, Lei Zhang

TL;DR

This work tackles the problem of producing text-driven edits for 3D scenes that are coherent across multiple views. It introduces SyncNoise, a geometry-guided framework that synchronizes noise predictions across views and enforces multi-view consistency on U-Net features, complemented by cross-view pixel-level projection from anchor views. Depth supervision from Structure-from-Motion and cycle-consistency constraints improve cross-view correspondences, enabling reliable edits that preserve structure while adding detailed textures. The method updates the underlying 3D representation (NeRF/Gaussian Splatting) from edited views and demonstrates superior instruction-following accuracy, visual quality, and efficiency compared with prior approaches. Overall, SyncNoise advances high-fidelity, view-consistent 3D editing by integrating geometry-aware correspondences, noise-level alignment, and pixel-level reprojection within a diffusion-based framework.

Abstract

Text-based 2D diffusion models have demonstrated impressive capabilities in image generation and editing, and they also exhibit substantial potential for 3D editing tasks. However, achieving consistent edits across multiple viewpoints remains challenging. While the iterative dataset update method can achieve global consistency, it suffers from slow convergence and over-smoothed textures. We propose SyncNoise, a novel geometry-guided, multi-view-consistent noise editing approach for high-fidelity 3D scene editing. SyncNoise synchronously edits multiple views with 2D diffusion models while enforcing geometric consistency across multi-view noise predictions, ensuring global consistency in both semantic structure and low-frequency appearance. To further enhance local consistency in high-frequency details, we select a group of anchor views and propagate their edits to neighboring frames through cross-view reprojection. To improve the reliability of multi-view correspondences, we introduce depth supervision during training to reconstruct more precise geometry. By enhancing geometric consistency at both the noise and pixel levels, our method achieves high-quality 3D editing results that respect the textual instructions, especially in scenes with complex textures.
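
As a concrete illustration of the noise-level alignment described in the abstract, the PyTorch sketch below blends U-Net decoder features at geometrically corresponding pixels so that the per-view denoising steps agree. It is a minimal sketch under assumed conventions: the function name align_decoder_features, the (V, C, H, W) feature layout, and the correspondence format are all hypothetical, not the authors' actual implementation.

```python
# Minimal sketch (hypothetical API): average U-Net decoder features across
# views at pixels linked by precomputed geometric correspondences, so the
# multi-view noise predictions stay consistent during denoising.
import torch


def align_decoder_features(features: torch.Tensor, correspondences):
    """Blend per-view U-Net decoder features across geometric correspondences.

    features: (V, C, H, W) tensor, one decoder feature map per view.
    correspondences: iterable of (view_i, yx_i, view_j, yx_j), where yx_* are
        (N, 2) long tensors of matched (y, x) pixel coordinates.
    """
    aligned = features.clone()
    for vi, yx_i, vj, yx_j in correspondences:
        f_i = features[vi, :, yx_i[:, 0], yx_i[:, 1]]  # (C, N) features in view i
        f_j = features[vj, :, yx_j[:, 0], yx_j[:, 1]]  # (C, N) matched features in view j
        blended = 0.5 * (f_i + f_j)                    # simple symmetric average
        aligned[vi, :, yx_i[:, 0], yx_i[:, 1]] = blended
        aligned[vj, :, yx_j[:, 0], yx_j[:, 1]] = blended
    return aligned
```

A plain average is used here for clarity; the actual alignment rule (weighting, and which layers are aligned) follows the paper's design, and Figure 4 below suggests the U-Net decoder features are the most effective place to enforce this consistency.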

Paper Structure

This paper contains 18 sections, 9 equations, 11 figures, 1 table.

Figures (11)

  • Figure 1: Edited results by SyncNoise, which achieves high-quality and controllable editing that closely adheres to the instructions with minimal changes to irrelevant regions. SyncNoise attains geometrically consistent editing without compromising fine-grained textures.
  • Figure 2: Overview of our proposed SyncNoise for text-based 3D scene editing. We edit rendered multi-view images while enforcing geometrical consistency at the noise and pixel levels. First, we construct reliable correspondences based on precise 3D geometries. Then, we enforce multi-view noise consistency by aligning U-Net decoder features across views. We also use cross-view projection to maintain pixel-level consistency by propagating the anchor view to neighboring views. To minimize reprojection artifacts, we refine these views with a 2D diffusion model. Finally, we update the 3D scene based on the edited multi-view images.
  • Figure 3: The estimated depth on the reference view $D_{ref}$ and the depth reprojected from the reference view to a novel view, $D_{ref\to k}$. By imposing depth supervision and two geometric constraints, we obtain reliable correspondences across views. Orange denotes noisy points to be filtered (a minimal sketch of this depth-consistency check follows the figure list).
  • Figure 4: Multi-view editing results obtained (a) without alignment, (b) by aligning latent features of different views, and by enforcing consistency on (c) skip features and (d) decoder features of the U-Net. Enforcing consistency on the decoder features of the noise predictor yields multi-view consistent edits without introducing blur. The text prompt is "make the man look like Tolkien Elf".
  • Figure 5: Multi-view editing results. Noise alignment produces consistent edits in semantic structure and low-frequency appearance, while cross-view pixel reprojection ensures consistency in high-frequency details. The text prompt is "Turn him into an Egyptian sculpture".
  • ...and 6 more figures
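
The depth-consistency filtering illustrated in Figure 3 can be pictured as follows: reference-view pixels are unprojected with their estimated depth, reprojected into view k, and kept only when the reprojected depth $D_{ref\to k}$ agrees with view k's own depth. The sketch below is a hedged illustration, not the paper's exact formulation; the threshold tau, the shared-intrinsics assumption, and the function name filter_correspondences are illustrative, and a full cycle-consistency check (projecting back to the reference view) would follow the same pattern.

```python
# Hypothetical sketch of the Figure-3-style correspondence filtering:
# unproject reference pixels, reproject into view k, and discard pixels
# whose reprojected depth disagrees with view k's estimated depth
# (the "noisy" orange points in Figure 3).
import torch


def filter_correspondences(depth_ref, depth_k, K, T_ref_to_k, tau=0.02):
    """Boolean (H, W) mask of reference pixels with depth-consistent matches.

    depth_ref, depth_k: (H, W) depth maps of the reference view and view k.
    K: (3, 3) camera intrinsics (assumed shared by both views).
    T_ref_to_k: (4, 4) relative pose from the reference camera to camera k.
    """
    H, W = depth_ref.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).float()  # (H, W, 3)

    # Unproject reference pixels into 3D camera coordinates.
    pts = (torch.linalg.inv(K) @ pix.reshape(-1, 3).T) * depth_ref.reshape(1, -1)

    # Transform into camera k and project onto its image plane.
    pts_k = T_ref_to_k[:3, :3] @ pts + T_ref_to_k[:3, 3:4]
    proj = K @ pts_k
    u = (proj[0] / proj[2]).round().long().clamp(0, W - 1)
    v = (proj[1] / proj[2]).round().long().clamp(0, H - 1)

    # Keep a correspondence only if the reprojected depth D_{ref->k} matches
    # view k's depth at the hit pixel and the point lies in front of camera k.
    d_reproj = proj[2]
    d_target = depth_k[v, u]
    mask = (d_reproj > 0) & ((d_reproj - d_target).abs() / d_target.clamp(min=1e-6) < tau)
    return mask.reshape(H, W)
```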