SDesc3D: Towards Layout-Aware 3D Indoor Scene Generation from Short Descriptions

Jie Feng, Jiawei Shen, Junjia Huang, Junpeng Zhang, Mingtao Feng, Weisheng Dong, Guanbin Li

Abstract

3D indoor scene generation conditioned on short textual descriptions provides a promising avenue for interactive 3D environment construction without labor-intensive layout specification. Despite recent progress in text-conditioned 3D scene generation, existing works suffer from poor physical plausibility and insufficient detail richness under such semantically condensed inputs, largely due to their reliance on explicit semantic cues about compositional objects and their spatial relationships. This limitation highlights the need for enhanced 3D reasoning capabilities, particularly in terms of prior integration and spatial anchoring. Motivated by this, we propose SDesc3D, a short-text conditioned 3D indoor scene generation framework that leverages multi-view structural priors and regional functionality implications to enable 3D layout reasoning under sparse textual guidance. Specifically, we introduce a Multi-view Scene Prior Augmentation module that enriches underspecified textual inputs with aggregated multi-view structural knowledge, shifting from inaccessible semantic relation cues to multi-view relational prior aggregation. Building on this, we design a Functionality-aware Layout Grounding module that employs regional functionality grounding to derive implicit spatial anchors and conducts hierarchical layout reasoning to enhance scene organization and semantic plausibility. Furthermore, an Iterative Reflection-Rectification scheme progressively refines structural plausibility via self-rectification. Extensive experiments show that our method outperforms existing approaches on short-text conditioned 3D indoor scene generation. Code will be publicly available.
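To make the reflection-rectification idea concrete, the following is a minimal, self-contained sketch of such a loop, not the authors' implementation: the paper does not specify how residual errors are detected or corrected, so this toy version assumes objects are 2D axis-aligned footprints, "reflection" is a pairwise collision check, and "rectification" pushes colliding pairs apart along the axis of least penetration. All names (`Box`, `reflect`, `rectify`, `reflect_rectify`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Hypothetical 2D footprint of a scene object (center + size)."""
    name: str
    x: float
    y: float
    w: float
    h: float

def overlap(a: Box, b: Box):
    """Return (dx, dy) penetration depths if the boxes overlap, else None."""
    dx = (a.w + b.w) / 2 - abs(a.x - b.x)
    dy = (a.h + b.h) / 2 - abs(a.y - b.y)
    return (dx, dy) if dx > 0 and dy > 0 else None

def reflect(layout):
    """Reflection step: detect colliding object pairs (structural errors)."""
    errors = []
    for i in range(len(layout)):
        for j in range(i + 1, len(layout)):
            ov = overlap(layout[i], layout[j])
            if ov:
                errors.append((i, j, ov))
    return errors

def rectify(layout, errors):
    """Rectification step: separate each colliding pair along the axis
    of minimal penetration, splitting the displacement between both objects."""
    for i, j, (dx, dy) in errors:
        a, b = layout[i], layout[j]
        if dx < dy:
            shift = dx / 2 + 1e-6
            if a.x <= b.x:
                a.x -= shift; b.x += shift
            else:
                a.x += shift; b.x -= shift
        else:
            shift = dy / 2 + 1e-6
            if a.y <= b.y:
                a.y -= shift; b.y += shift
            else:
                a.y += shift; b.y -= shift

def reflect_rectify(layout, max_iters=10):
    """Iterate reflection and rectification until no errors remain."""
    for _ in range(max_iters):
        errors = reflect(layout)
        if not errors:
            break
        rectify(layout, errors)
    return layout
```

For example, a bed and a desk placed with overlapping footprints are nudged apart until `reflect` reports no remaining collisions; in the actual framework this role is played by self-rectification over the generated layout rather than a hand-coded geometric rule.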

Paper Structure

This paper contains 17 sections, 12 equations, 4 figures, 5 tables.

Figures (4)

  • Figure 1: Overview of our SDesc3D framework. Given a short user description, SDesc3D first performs Multi-view Scene Prior Augmentation to retrieve scene priors for sparse semantic completion. Next, Functionality-aware Layout Grounding leverages regional functionality implications to reason about a hierarchical layout in a coarse-to-fine manner. Finally, Iterative Reflection-Rectification is adopted to iteratively suppress residual physical and structural errors.
  • Figure 2: Qualitative comparison of scenes generated from five different short descriptions. Our method achieves better overall scene quality than the compared approaches in terms of physical plausibility and detail richness.
  • Figure 3: Qualitative comparison of HSM, Reason3D, and our method under the long-text setting.
  • Figure 4: Examples of scene editing by SDesc3D: object addition, deletion, and relocation. Without any additional treatment, SDesc3D handles these editing actions with plausible results.