DipGuava: Disentangling Personalized Gaussian Features for 3D Head Avatars from Monocular Video

Jeonghaeng Lee, Seok Keun Choi, Zhixuan Li, Weisi Lin, Sanghoon Lee

Abstract

While recent 3D head avatar creation methods attempt to animate facial dynamics, they often fail to capture personalized details, limiting realism and expressiveness. To fill this gap, we present DipGuava (Disentangled and Personalized Gaussian UV Avatar), a novel 3D Gaussian head avatar creation method that successfully generates avatars with personalized attributes from monocular video. DipGuava is the first method to explicitly disentangle facial appearance into two complementary components, trained in a structured two-stage pipeline that significantly reduces learning ambiguity and enhances reconstruction fidelity. In the first stage, we learn a stable geometry-driven base appearance that captures global facial structure and coarse expression-dependent variations. In the second stage, the personalized residual details not captured in the first stage are predicted, including high-frequency components and nonlinearly varying features such as wrinkles and subtle skin deformations. These components are fused via dynamic appearance fusion, which integrates residual details after deformation, ensuring spatial and semantic alignment. This disentangled design enables DipGuava to generate photorealistic, identity-preserving avatars, consistently outperforming prior methods in both visual quality and quantitative performance, as demonstrated in extensive experiments.

Paper Structure

This paper contains 31 sections, 14 equations, 9 figures, 2 tables.

Figures (9)

  • Figure 1: Conceptual comparison with prior approaches. (a) Optimization-based methods fail to capture residual details. (b) Entangled models suffer from learning ambiguity. (c) Our disentangled design separately models base and residual features for faithful reconstruction.
  • Figure 2: Overview of DipGuava. Stage 1 optimizes a geometry-driven base appearance (overall color and opacity from mesh surface normals). Stage 2 predicts residual features to capture facial details beyond the base appearance. Dynamic appearance fusion combines these residuals with the base appearance and geometric deformations.
  • Figure 3: Dynamic appearance fusion. To ensure alignment, personalized residual appearance sampled from UV space is combined with the base appearance after the geometric deformation.
  • Figure 4: Qualitative comparison in self-driven animation. Our method produces superior results, capturing details such as wrinkles, eye blinks, and lip movement.
  • Figure 5: Qualitative comparison in cross-identity reenactment. The proposed method preserves both facial structure and appearance with high fidelity, while accurately following subtle expressions in the driving motion.
  • ...and 4 more figures
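
The dynamic appearance fusion described above (residuals sampled from UV space and added to the base appearance after geometric deformation) can be sketched roughly as follows. This is a minimal, hypothetical illustration: the function names, feature shapes, placeholder linear deformation, and nearest-neighbor UV sampling are all our assumptions, not the authors' implementation.

```python
import numpy as np

def deform(base_features, expression, basis):
    """Placeholder geometry-driven deformation: linear blend over an
    expression basis. basis: (E, N, C), expression: (E,), base: (N, C)."""
    return base_features + np.tensordot(expression, basis, axes=1)

def sample_uv(residual_uv_map, uv_coords):
    """Sample per-Gaussian residual features from a UV feature map
    (nearest-neighbor lookup for simplicity). uv_coords in [0, 1]^2."""
    h, w, _ = residual_uv_map.shape
    ix = np.clip((uv_coords[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    iy = np.clip((uv_coords[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    return residual_uv_map[iy, ix]

def dynamic_appearance_fusion(base_features, residual_uv_map,
                              uv_coords, expression, basis):
    """Fuse base and residual appearance. Residuals are added AFTER the
    geometric deformation, so they stay spatially and semantically
    aligned with the deformed base appearance."""
    deformed = deform(base_features, expression, basis)
    residual = sample_uv(residual_uv_map, uv_coords)
    return deformed + residual
```

With a zero residual map and a zero deformation basis, the fusion output reduces to the base appearance, which illustrates the intended role of the second stage as a pure residual correction on top of the stage-1 base.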