Visually-grounded Humanoid Agents

Hang Ye, Xiaoxuan Ma, Fan Lu, Wayne Wu, Kwan-Yee Lin, Yizhou Wang

Abstract

Digital human generation has been studied for decades and supports a wide range of real-world applications. However, most existing systems are passively animated, relying on privileged state or scripted control, which limits scalability to novel environments. We instead ask: how can digital humans actively behave using only visual observations and specified goals in novel scenes? Achieving this would enable populating any 3D environment at scale with digital humans that exhibit spontaneous, natural, goal-directed behaviors. To this end, we introduce Visually-grounded Humanoid Agents, a coupled two-layer (world-agent) paradigm that replicates humans at multiple levels: they look, perceive, reason, and behave like real people in real-world 3D scenes. The World Layer reconstructs semantically rich 3D Gaussian scenes from real-world videos via an occlusion-aware pipeline and accommodates animatable Gaussian-based human avatars. The Agent Layer transforms these avatars into autonomous humanoid agents, equipping them with first-person RGB-D perception and enabling them to perform accurate, embodied planning with spatial awareness and iterative reasoning, which is then executed at the low level as full-body actions to drive their behaviors in the scene. We further introduce a benchmark to evaluate humanoid-scene interaction in diverse reconstructed environments. Experiments show our agents achieve robust autonomous behavior, yielding higher task success rates and fewer collisions than ablations and state-of-the-art planning methods. This work enables active digital human population and advances human-centric embodied AI. Data, code, and models will be open-sourced.

Figures (23)

  • Figure 1: Visually-grounded Virtual Agents in Realistic 3D Scenes. From monocular videos, our framework reconstructs a high-fidelity 3D environment with rich semantics and instantiates high-fidelity humanoid agents aligned with the scene. Each agent perceives the world through its own egocentric view and acts autonomously, enabling realistic and purposeful behaviors within the reconstructed environment.
  • Figure 2: Framework Overview. Our framework consists of two layers. The World Layer processes real-world data (scene videos, object assets, human videos) to build large-scale, semantically detailed environments via occlusion-aware reconstruction, and populates them with GS-based animatable human avatars (see the World Layer section). Then the Agent Layer drives these avatars for human-scene interaction via a perception-action loop, where visually-grounded agents plan actions from egocentric observations (see the Agent Layer section).
  • Figure 3: Overview of the Occlusion-Aware Semantic Scene Reconstruction. We first reconstruct 3D Gaussians from scene videos utilizing CityGaussian [liu2024citygaussian, liu2024citygaussianv2]. To augment 3DGS with instance-level semantics, we extract 2D masks $\boldsymbol{B}$ based on SAM [kirillov2023sam], lift them to 3D via contrastive learning, and then segment 3D instances using coarse-to-fine quantization. We introduce occlusion-aware masks and view selection to boost segmentation accuracy in large-scale, occluded outdoor scenes. Finally, each instance is annotated via context-aware visual prompting with a VLM [bai2025qwen2], yielding a semantically rich environment with spatially annotated landmarks, ready for human-scene interaction.
  • Figure 4: Our Visually-grounded Humanoid Agent comprises a two-level framework: (1) A context-aware action planning module (high-level planner) that predicts actions from egocentric observations. It utilizes spatially-aware visual prompting to generate physically viable, spatially grounded proposals and applies goal highlighting for contextual cues, combined with iterative reasoning for multi-step decision making (see the high-level planning section). (2) A controllable motion generation module (low-level controller) that converts the planner's command into waypoints, which then condition a motion diffusion model to synthesize full-body motion (see the motion generation section). A minimal code sketch of this perception-action loop follows the figure list.
  • Figure 5: Qualitative ablation of the VLM-based planning paradigm. Without visual prompting, the agent loses track of the goal after detouring around obstacles. Without iterative reasoning, it follows myopic straight-line paths, leading to frequent collisions. Our full model combines both to produce robust, goal-directed trajectories.
  • ...and 18 more figures
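
To make the two-level agent design of Figure 4 concrete, below is a minimal, hypothetical Python sketch of one perception-action step. It assumes the planner chooses among spatially grounded waypoint proposals overlaid on the egocentric view and that a waypoint-conditioned motion diffusion model synthesizes the full-body motion, as the captions above describe. All names (EgoObservation, propose_waypoints, query_vlm_planner, MotionDiffusionModel) are illustrative assumptions, not the paper's released API.

```python
"""Hypothetical sketch of one Agent Layer perception-action step (illustrative names only)."""
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class EgoObservation:
    """First-person RGB-D observation rendered from the reconstructed 3DGS scene."""
    rgb: np.ndarray         # (H, W, 3) egocentric color image
    depth: np.ndarray       # (H, W) egocentric depth map
    agent_pose: np.ndarray  # (4, 4) world-from-agent transform


def propose_waypoints(obs: EgoObservation, num_proposals: int = 8) -> List[np.ndarray]:
    """Sample candidate waypoints fanned out in front of the agent.

    A real system would use the depth map and scene geometry to reject
    proposals that are occluded or in collision; this stub only spreads
    candidates over a forward arc and maps them into world coordinates.
    """
    angles = np.linspace(-np.pi / 3, np.pi / 3, num_proposals)
    candidates = [np.array([1.5 * np.sin(a), 0.0, 1.5 * np.cos(a), 1.0]) for a in angles]
    return [(obs.agent_pose @ c)[:3] for c in candidates]


def query_vlm_planner(obs: EgoObservation, proposals: List[np.ndarray], goal: str) -> int:
    """Placeholder for the VLM planning call.

    In the described pipeline, the VLM sees the egocentric image with numbered
    waypoint markers and a highlighted goal, reasons iteratively, and returns
    the index of the waypoint to pursue next. The stub always picks index 0.
    """
    return 0


class MotionDiffusionModel:
    """Stub for a waypoint-conditioned full-body motion generator."""

    def synthesize(self, start_pose: np.ndarray, waypoint: np.ndarray) -> np.ndarray:
        # A real model would run diffusion over body poses conditioned on the
        # waypoint path; the stub linearly interpolates the root over 30 frames.
        start = start_pose[:3, 3]
        return np.linspace(start, waypoint, num=30)


def perception_action_step(obs: EgoObservation, goal: str,
                           controller: MotionDiffusionModel) -> np.ndarray:
    """One high-level plan -> low-level execute cycle."""
    proposals = propose_waypoints(obs)
    chosen = query_vlm_planner(obs, proposals, goal)
    return controller.synthesize(obs.agent_pose, proposals[chosen])
```

The stubbed planner and controller stand in for the VLM-based planning module and the motion diffusion model named in the captions; the sketch is meant only to show the data flow from egocentric observation to waypoint selection to full-body motion.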