
UNIC: Neural Garment Deformation Field for Real-time Clothed Character Animation

Chengfeng Zhao, Junbo Qi, Yulou Liu, Zhiyang Dou, Minchen Li, Taku Komura, Ziwei Liu, Wenping Wang, Yuan Liu

Abstract

Simulating physically realistic garment deformation is essential for immersive virtual experiences and is typically achieved with physics-based simulation. However, such methods are time-consuming, computationally demanding, and require costly hardware, making them unsuitable for real-time applications. Recent learning-based methods attempt to resolve this problem by training graph neural networks to predict per-vertex garment deformation, but they fail to capture the intricate deformation of garment meshes with complex topologies. In this paper, we introduce UNIC, a novel method based on a neural deformation field that animates the garments of an avatar in real time given a motion sequence. Our key idea is to learn an instance-specific neural deformation field for each garment mesh. This instance-specific scheme requires UNIC to generalize only to new motion sequences rather than to new garments, which greatly reduces training difficulty and improves deformation quality. Moreover, the neural deformation field maps 3D points to their deformation offsets, which both avoids handling the topology of complex garments and injects a natural smoothness constraint into the learned deformation. Extensive experiments on various kinds of garment meshes demonstrate the effectiveness and efficiency of UNIC over baseline methods, making it potentially practical and useful in real-world interactive applications like video games.

Paper Structure

This paper contains 29 sections, 12 equations, 10 figures, and 3 tables.

Figures (10)

  • Figure 1: UNIC enables real-time and high-quality physically realistic deformation of complex garments with arbitrary topologies to follow arbitrary unseen character motions, which benefits real-time character animation applications like video games.
  • Figure 2: Quality and efficiency comparison to the "gold standard". Given garments of different topology complexity and geometry density (top row), UNIC consistently outperforms GPU-accelerated professional software md in efficiency, while also achieving comparable simulation quality (middle and bottom row). The inference speed is measured on a single NVIDIA RTX 3090 GPU.
  • Figure 3: Overview of our training pipeline. UNIC learns an instance-specific neural deformation field that deforms arbitrarily complex garments to follow unseen character poses in real time. We first encode the character poses of two consecutive frames into a compact latent space. Then, we sample a latent vector from the learned space and concatenate it with the garment vertex coordinates. After that, we feed the concatenation into an MLP-based deformation decoder to predict deformation offsets for all vertices. Finally, a post-processing intersection-handling step is applied to avoid intersections between the avatar mesh and the deformed garment mesh.
  • Figure 4: Different garment-motion representations. (a) Explicit representation determined by proximal skinning weights on top of the parametric skeleton model SMPL (2015). (b) Our representation combining garment vertex coordinates with motion latent features.
  • Figure 5: Plug-and-play intersection handling module. (a) Intersections detected by the tangent plane of the nearest character body point. (b) Dragging intersected vertices out with a relaxation buffer.
  • ...and 5 more figures
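The pipeline in Figure 3 (pose latent concatenated with vertex coordinates, decoded by an MLP into per-vertex offsets) can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the latent dimension, layer widths, and random weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    """Minimal MLP with ReLU hidden activations (stand-in for the
    paper's deformation decoder; real layer sizes are unknown)."""
    for i, (W, b) in enumerate(weights):
        x = x @ W + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)
    return x

# Assumed dimensions (not from the paper).
latent_dim, n_verts = 32, 1000

# 1) Pose encoder output: one compact latent for two consecutive frames.
z = rng.normal(size=latent_dim)

# 2) Concatenate the latent with every garment vertex coordinate.
verts = rng.normal(size=(n_verts, 3))  # rest-state garment vertices
feats = np.concatenate([verts, np.tile(z, (n_verts, 1))], axis=1)

# 3) MLP decoder maps (x, y, z, latent) -> per-vertex deformation offset.
dims = [3 + latent_dim, 128, 128, 3]
weights = [(rng.normal(scale=0.1, size=(a, b)), np.zeros(b))
           for a, b in zip(dims[:-1], dims[1:])]
offsets = mlp(feats, weights)

# 4) Apply the predicted offsets to deform the garment.
deformed = verts + offsets
```

Because the decoder is a continuous function of the 3D coordinates, nearby vertices receive similar offsets, which is the smoothness prior the abstract refers to.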