
HairOrbit: Multi-view Aware 3D Hair Modeling from Single Portraits

Leyang Jin, Yujian Zheng, Bingkui Tong, Yuda Qiu, Zhenyu Xie, Hao Li

Abstract

Reconstructing strand-level 3D hair from a single-view image is highly challenging, especially when preserving consistent and realistic attributes in unseen regions. Existing methods rely on limited frontal-view cues and small-scale, style-restricted synthetic data, and often fail to produce satisfactory results in invisible regions. In this work, we propose a novel framework that leverages the strong 3D priors of video generation models to transform single-view hair reconstruction into a calibrated multi-view reconstruction task. To balance reconstruction quality and efficiency for the reformulated multi-view task, we further introduce a neural orientation extractor trained on sparse real-image annotations for better full-view orientation estimation. In addition, we design a two-stage strand-growing algorithm based on a hybrid implicit field that synthesizes 3D strand curves with fine-grained details at relatively fast speed. Extensive experiments demonstrate that our method achieves state-of-the-art performance on single-view 3D hair strand reconstruction across a diverse range of hair portraits, in both visible and invisible regions.


Paper Structure

This paper contains 29 sections, 4 equations, 6 figures, 3 tables.

Figures (6)

  • Figure 1: We propose a novel framework for strand-level single-view 3D hair reconstruction. Given a frontal-view portrait (a), we first synthesize corresponding calibrated multi-view images (b) on a camera orbit, then reconstruct multi-view aware 3D hair strands (c). Note that the left view in (c) is rendered with about 10k strands to better visualize the geometry, while the other 3 views are rendered with 100k.
  • Figure 2: Overview of HairOrbit. Given a single portrait, HairOrbit converts single-view 3D hair reconstruction into a multi-view task.
  • Figure 3: Qualitative comparisons of full-view orientation extraction. Results of HairStep have been converted to orientation maps.
  • Figure 4: Comparisons on multi-view generation.
  • Figure 5: Qualitative comparison on single-view 3D strands reconstruction. For every example, we show (a) the input image and the reconstructed 3D hair strands rendered in multiple views of (b) Ours, (c) Im2Haircut, (d) HairStep and (e) Difflocks.
  • ...and 1 more figure