Setup-Independent Full Projector Compensation

Haibo Li, Qingyue Deng, Jijiang Li, Haibin Ling, Bingyao Huang

Abstract

Projector compensation seeks to correct geometric and photometric distortions that occur when images are projected onto nonplanar or textured surfaces. However, most existing methods are highly setup-dependent, requiring fine-tuning or retraining whenever the surface, lighting, or projector-camera pose changes. Progress has been limited by two key challenges: (1) the absence of large, diverse training datasets, and (2) the setup-specific nature of existing geometric correction models, which, without retraining or fine-tuning, often fail to generalize to novel geometric configurations. We introduce SIComp, the first Setup-Independent framework for full projector Compensation, capable of generalizing to unseen setups without fine-tuning or retraining. To enable this, we construct a large-scale real-world dataset spanning 277 distinct projector-camera setups. SIComp adopts a co-adaptive design that decouples geometry and photometry: a carefully tailored optical flow module performs online geometric correction, while a novel photometric network handles photometric compensation. To further enhance robustness under varying illumination, we integrate intensity-varying surface priors into the network design. Extensive experiments demonstrate that SIComp consistently produces high-quality compensation across diverse unseen setups, substantially outperforming existing methods in generalization ability and establishing the first generalizable solution to projector compensation. The code and dataset are available on our project page: https://hai-bo-li.github.io/SIComp/
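To make the decoupled design concrete, the inference path described here (and in Figure 1(c) below) can be sketched as: warp the desired image with the pre-estimated optical flow (the operation $T$), then let the photometric network infer the compensation image. The PyTorch sketch below is ours, not the released code; `warp_with_flow`, `photometric_net`, and the omission of the intensity-varying surface priors that the real IVPCNet also consumes are all simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def warp_with_flow(img: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp an image (B,C,H,W) by a dense optical flow (B,2,H,W).

    The flow holds per-pixel (dx, dy) displacements in pixels; grid_sample
    expects sampling locations normalized to [-1, 1].
    """
    _, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=img.device, dtype=img.dtype),
        torch.arange(w, device=img.device, dtype=img.dtype),
        indexing="ij",
    )
    x_new = xs.unsqueeze(0) + flow[:, 0]          # (B,H,W)
    y_new = ys.unsqueeze(0) + flow[:, 1]
    grid = torch.stack(
        (2.0 * x_new / (w - 1) - 1.0, 2.0 * y_new / (h - 1) - 1.0), dim=-1
    )                                             # (B,H,W,2), x before y
    return F.grid_sample(img, grid, align_corners=True)

@torch.no_grad()
def compensate(desired: torch.Tensor, flow: torch.Tensor,
               photometric_net: torch.nn.Module) -> torch.Tensor:
    """Geometry first (the warp T), then photometry."""
    warped = warp_with_flow(desired, flow)        # geometric correction T
    return photometric_net(warped).clamp(0, 1)    # compensation image to project
```

In this sketch the flow is estimated once from an initial reference projection and reused for every subsequent frame, which is what makes the geometric correction an online, setup-agnostic step.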

Figures (14)

  • Figure 1: SIComp pipeline. (a) Data preparation phase, including the acquisition of surface images and various captured projection images collected under diverse setups, followed by masking and cropping. (b) SIComp fine-tuning pipeline. It comprises a flow estimator module for geometric correction, pre-trained on the Sintel dataset [Butler et al., ECCV 2012]. A pre-trained IVPCNet performs photometric compensation, inferring the projector input image from optical-flow-warped images; its pre-training process is detailed in Figure 2. (c) Real compensation application, where the trained SIComp takes a desired image $x'$ and the optical flow (pre-estimated from an initial reference projection) as input to infer the compensation image, which is then physically projected and captured by a camera to approximate the desired image $x'$. The symbol $T$ denotes the warping operation applied using the estimated optical flow.
  • Figure 2: Pre-training pipeline for IVPCNet. IVPCNet, a siamese U-Net architecture, is pre-trained on two different domains. The first is the camera sampling domain (solid blue border). It takes a camera-captured image warped by structured light and the corresponding surface image as input, predicting an output $\hat{x}$. The second is the compensation domain (solid green border), where the network learns to predict a compensation image. It uses the projector input image $x$ and the same warped surface image to predict $\hat{x}^{*}$, with the loss computed against a target compensation image $x^{*}$. This target image $x^{*}$ is derived for the specific hardware setup using Nayar's iterative refinement method [Nayar et al. 2003]. This pre-training yields the IVPCNet model for subsequent fine-tuning within the SIComp framework (see Figure 1). The blue, pink, and orange lines denote the skip connections at stages s1, s2, and s3, respectively.
  • Figure 3: Projector and camera field of view (FOV). In the projector FOV mask, the white region represents the projector's FOV. The camera's FOV encompasses the broader scene, represented by the blue region, with the optimal viewing area highlighted by the red boundary. Because of the projector's limited FOV and potential geometric distortions of the surface, an affine transformation is applied to the projector input image to obtain the ideal visualization, i.e., the desired ground truth (GT).
  • Figure 4: Intensity-varying surface priors. (Top) Uniform gray images of varying intensities (0, 64, 128, 191, and 255) are projected onto a surface. (Bottom) The corresponding camera captures reveal the surface's distinct reflectance properties under various projection intensities. This set of captured images provides intensity-varying surface priors, enabling SIComp to more effectively learn complex surface properties (a minimal capture sketch follows this figure list).
  • Figure 5: Qualitative results of real compensation experiments. Rows 1–2 (Set A): same ProCams devices, unseen setups. Rows 3–4 (Set B): novel ProCams devices, unseen setups. For each pair of rows: The top row displays the surface, uncompensated image, desired ground truth, and setup-dependent results (CompenHR, DeProCams, DPCS, CompenNeSt++); the bottom row shows setup-independent results (FF+CompenNeSt, FF+PANet, and SIComp with 1, 3, and 5 surfaces). Red boxes indicate magnified insets for comparison. See more results in the Supplementary Material.
  • ...and 9 more figures
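The intensity-varying surface priors of Figure 4 reduce to a short project-and-capture loop: display uniform gray levels full-screen on the projector and record the camera's view of the surface after each. The OpenCV sketch below is a hedged illustration, not the paper's capture tooling; the window handling, settle time, projector resolution, and the `capture_surface_priors` name are all our assumptions.

```python
import time
import cv2
import numpy as np

GRAY_LEVELS = [0, 64, 128, 191, 255]   # intensities from Figure 4
PROJ_HW = (1080, 1920)                 # projector (height, width); an assumption

def capture_surface_priors(cam_index: int = 0) -> np.ndarray:
    """Project uniform gray levels and capture the lit surface for each."""
    cam = cv2.VideoCapture(cam_index)
    cv2.namedWindow("projector", cv2.WINDOW_NORMAL)   # place on projector display
    cv2.setWindowProperty("projector", cv2.WND_PROP_FULLSCREEN,
                          cv2.WINDOW_FULLSCREEN)
    priors = []
    for level in GRAY_LEVELS:
        pattern = np.full((*PROJ_HW, 3), level, dtype=np.uint8)
        cv2.imshow("projector", pattern)
        cv2.waitKey(1)
        time.sleep(0.5)                # let projection and camera exposure settle
        ok, frame = cam.read()
        if ok:
            priors.append(frame)
    cam.release()
    cv2.destroyAllWindows()
    return np.stack(priors)            # (N, H, W, 3) intensity-varying priors
```

The stacked captures can then be fed to the photometric network as conditioning input, which is how the priors let the model infer surface reflectance under varying illumination rather than from a single gray capture.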