VoxelCodeBench: Benchmarking 3D World Modeling Through Code Generation

Yan Zheng, Florian Bordes

Abstract

Evaluating code generation models for 3D spatial reasoning requires executing generated code in realistic environments and assessing outputs beyond surface-level correctness. We introduce VoxelCode, a platform for analyzing code generation capabilities for 3D understanding and environment creation. Our platform integrates natural language task specification, API-driven code execution in Unreal Engine, and a unified evaluation pipeline supporting both automated metrics and human assessment. To demonstrate its utility, we construct VoxelCodeBench, a benchmark of voxel manipulation tasks spanning three reasoning dimensions: symbolic interpretation, geometric construction, and artistic composition. Evaluating leading code generation models, we find that producing executable code is far easier than producing spatially correct outputs, with geometric construction and multi-object composition proving particularly challenging. By open-sourcing our platform and benchmark, we provide the community with extensible infrastructure for developing new 3D code generation benchmarks and probing spatial reasoning in future models.

Paper Structure

This paper contains 53 sections, 5 figures, and 4 tables.

Figures (5)

  • Figure 1: VoxelCodeBench evaluation pipeline. Given API documentation and a natural language task specification (left), models generate Python code manipulating voxels in 3D space (center). The generated code is executed within VoxelCode, an Unreal Engine based rendering server equipped with the Voxel Plugin 2.0 [voxelplugin2], producing visual outputs that can be evaluated by human annotators (right). Our benchmark spans three complexity tiers—from basic geometric primitives to compositional artistic scenes—enabling systematic assessment of spatial reasoning through code generation.
  • Figure 2: Example outputs using VoxelCode. Representative 3D voxel constructions generated by models across our benchmark, including characters and symbols, geometric and mathematical shapes, artistic animals, objects, vehicles, and natural objects, ranging from simple geometric primitives to complex multi-component objects.
  • Figure 3: Human annotation interface. Annotators view the prompt and rendered outputs from multiple viewpoints, then rate five dimensions: Has Object, Position Correct, Material Correct, Shape Correct, and Visual Quality (0–10).
  • Figure 4: Qualitative comparison across models, categories, and difficulty levels. Rows correspond to eight evaluated models; columns span three task categories (Symbolic, Geometric, Artistic) at Easy, Medium, and Hard difficulty. Empty cells indicate cases where model-generated code failed to execute, producing no visible output.
  • Figure 5: Structured geometric detail generation. Code-based generation produces objects with consistent internal structures (interior views) and fine-grained details (ladders, weapons, floor layouts) that are difficult to achieve with surface-based neural 3D generation methods.
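As Figure 1 describes, models solve each task by emitting Python code that places voxels in 3D space, which the platform then executes and renders. A minimal hypothetical sketch of such a task solution is shown below; the `generate_sphere` function and coordinate-set representation are illustrative assumptions, not the actual VoxelCode API, which is defined by the platform's documentation.

```python
# Hypothetical sketch of the kind of code a model might generate for a
# geometric-construction task such as "build a solid sphere of radius 5".
# The coordinate-set representation is an assumption for illustration;
# the real platform exposes its own voxel-placement API.

def generate_sphere(radius, center=(0, 0, 0)):
    """Return the set of integer voxel coordinates inside a sphere."""
    cx, cy, cz = center
    voxels = set()
    for x in range(cx - radius, cx + radius + 1):
        for y in range(cy - radius, cy + radius + 1):
            for z in range(cz - radius, cz + radius + 1):
                # Keep lattice points whose squared distance from the
                # center is within the squared radius.
                if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= radius ** 2:
                    voxels.add((x, y, z))
    return voxels
```

Even for a task this simple, the paper's central finding applies: code like the above almost always executes, but spatial correctness (e.g., the right radius, placement, and proportions) is where models tend to fail.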