
ICTPolarReal: A Polarized Reflection and Material Dataset of Real World Objects

Jing Yang, Krithika Dharanikota, Emily Jia, Haiwei Chen, Yajie Zhao

Abstract

Accurately modeling how real-world materials reflect light remains a core challenge in inverse rendering, largely due to the scarcity of real measured reflectance data. Existing approaches rely heavily on synthetic datasets with simplified illumination and limited material realism, preventing models from generalizing to real-world images. We introduce a large-scale polarized reflection and material dataset of real-world objects, captured with an 8-camera, 346-light Light Stage equipped with cross/parallel polarization. Our dataset spans 218 everyday objects across five acquisition dimensions (multiview, multi-illumination, polarization, reflectance separation, and material attributes), yielding over 1.2M high-resolution images with diffuse-specular separation and analytically derived diffuse albedo, specular albedo, and surface normals. Using this dataset, we train and evaluate state-of-the-art inverse and forward rendering models on intrinsic decomposition, relighting, and sparse-view 3D reconstruction, demonstrating significant improvements in material separation, illumination fidelity, and geometric consistency. We hope that our work can establish a new foundation for physically grounded material understanding and enable real-world generalization beyond synthetic training regimes. Project page: https://jingyangcarl.github.io/ICTPolarReal/
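The cross/parallel polarization capture mentioned above enables diffuse-specular separation by polarization difference imaging: a cross-polarized camera blocks polarization-preserving specular reflection, so the cross image approximates the diffuse component, and subtracting it from the parallel image isolates the specular component. A minimal sketch of this standard technique (the function name and the simple difference model are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np

def separate_reflectance(parallel: np.ndarray, cross: np.ndarray):
    """Split polarized captures into diffuse and specular components.

    parallel, cross: linear-radiance HDR images of shape (H, W, 3),
    captured with parallel- and cross-oriented polarizers respectively.
    The cross image suppresses specular highlights, so it stands in for
    the diffuse component; the parallel-minus-cross residual is specular.
    """
    diffuse = cross
    # Specular radiance is non-negative; clip small negative noise.
    specular = np.clip(parallel - cross, 0.0, None)
    return diffuse, specular

# Toy single-pixel example in linear radiance.
par = np.array([[[0.9, 0.9, 0.9]]])
crs = np.array([[[0.4, 0.4, 0.4]]])
diffuse, specular = separate_reflectance(par, crs)
```

In practice a scale factor is often applied to the cross image to account for the polarizer's attenuation of diffuse light; the unit-scale version here keeps the idea clear.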



Figures (5)

  • Figure 1: Overview of our polarized reflection and material dataset. Our Light Stage capture system records real-world objects under controlled lighting, polarization, and multiview conditions. The dataset spans five acquisition dimensions: objects, lighting, polarization, multiview, and material. Each object is captured under 346 One-Light-at-a-Time (OLAT) illuminations and can be synthesized into arbitrary HDRI lighting. Cross and parallel polarization enable reflection separation into diffuse and specular components, from which material attributes such as albedo and normal are derived, resulting in more than 1,200,000 (1.2 million) captured images at 6144×3240 resolution.
  • Figure 2: Decomposition results on Objaverse samples. Each row shows the predicted albedo, normal, and specular components from DR-IR, RGB2X, and ours under two lighting conditions. An error map is shown in the lower-left corner of each result. Post-training with our dataset produces more consistent decompositions and smoother material separation across illumination changes.
  • Figure 3: Forward relighting results on real objects (Light Stage dataset) under HDRI lighting. Each row shows the relighting outputs from DR-FR, RGB2X, and our model under both the PBR and polarization workflows. The top diagram illustrates the full inverse-to-forward relighting pipeline. For each relit result, the error map is shown in the lower-left corner and the lighting reference in the lower-right corner.
  • Figure 4: Forward relighting results on the Objaverse dataset under OLAT lighting. The lighting reference is shown in the lower-left corner.
  • Figure 5: Impact of our inverse model on sparse-view 3D reconstruction. We compare reconstructions from Dust3r and Mast3r using raw input images and our predicted diffuse images from the inverse decomposition model. Each reconstruction uses 8 sparse multiview inputs.
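Figure 1 notes that the 346 One-Light-at-a-Time (OLAT) captures can be synthesized into arbitrary HDRI lighting. This relies on the linearity of light transport: the image under any environment is a weighted sum of the per-light images, with each weight given by the environment radiance sampled toward that light. A minimal sketch under that assumption (the function name, array layout, and precomputed per-light weights are illustrative, not the paper's implementation):

```python
import numpy as np

def relight_from_olat(olat_stack: np.ndarray, light_weights: np.ndarray) -> np.ndarray:
    """Relight under an arbitrary environment by linearity of light transport.

    olat_stack:    (L, H, W, 3) stack of per-light (OLAT) captures
    light_weights: (L, 3) RGB environment radiance sampled toward each
                   light (solid-angle weighting assumed folded in)
    Returns the (H, W, 3) image under the combined illumination.
    """
    # Sum over lights l, keeping per-channel weights c.
    return np.einsum('lc,lhwc->hwc', light_weights, olat_stack)

# Toy example: two OLAT "images" relit by a two-light environment.
olat = np.stack([np.full((2, 2, 3), 1.0), np.full((2, 2, 3), 2.0)])
env = np.array([[1.0, 1.0, 1.0], [0.5, 0.5, 0.5]])
relit = relight_from_olat(olat, env)  # 1.0*1 + 0.5*2 = 2.0 everywhere
```

A real pipeline would map each of the 346 Light Stage light directions into the HDRI to obtain `light_weights`; the linear combination step itself is exactly the one-line sum shown here.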