UCMNet: Uncertainty-Aware Context Memory Network for Under-Display Camera Image Restoration

Daehyun Kim, Youngmin Kim, Yoon Ju Oh, Tae Hyun Kim

Abstract

Under-display cameras (UDCs) enable full-screen designs by placing the imaging sensor beneath the display. However, light diffraction and scattering through the display layers produce spatially varying, complex degradations that significantly attenuate high-frequency details. Existing PSF-based physical modeling techniques and frequency-separation networks reconstruct low-frequency structures and maintain overall color consistency well, but they still struggle to recover fine details under complex, spatially varying degradation. To address this, we propose a lightweight Uncertainty-aware Context-Memory Network (UCMNet) for UDC image restoration. Unlike previous methods that apply uniform restoration, UCMNet performs uncertainty-aware adaptive processing to restore high-frequency details across regions with varying degradation. The estimated uncertainty maps, learned through an uncertainty-driven loss, quantify the spatial uncertainty induced by diffraction and scattering, and guide the Memory Bank to retrieve region-adaptive context from the Context Bank. This enables effective modeling of the non-uniform degradation characteristics inherent to UDC imaging. Leveraging this uncertainty as a prior, UCMNet achieves state-of-the-art performance on multiple benchmarks with 30% fewer parameters than previous models. Project page: https://kdhrick2222.github.io/projects/UCMNet/
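The abstract describes the core mechanism at a high level: predicted uncertainty guides how much region-adaptive context is retrieved from a learned bank and blended into the features. The toy sketch below is our own illustration of that idea under stated assumptions, not the paper's implementation; the function name `retrieve_context`, the linear blend by uncertainty, and all tensor shapes are assumptions made for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def retrieve_context(features, uncertainty, context_bank, temperature=1.0):
    """Toy uncertainty-guided retrieval (illustrative only).

    Each spatial feature queries a bank of context vectors via dot-product
    attention, and the retrieved context is blended back in proportion to
    the predicted uncertainty at that location.

    features:     (N, C) flattened spatial features
    uncertainty:  (N,)   per-location uncertainty in [0, 1]
    context_bank: (K, C) context vectors (random here; learned in practice)
    """
    # Attention weights between each location and every bank entry.
    attn = softmax(features @ context_bank.T / temperature, axis=-1)  # (N, K)
    retrieved = attn @ context_bank                                   # (N, C)
    # Highly uncertain regions lean more on retrieved context.
    u = uncertainty[:, None]
    return (1.0 - u) * features + u * retrieved

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 8))
unc = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
bank = rng.normal(size=(4, 8))
out = retrieve_context(feats, unc, bank)
print(out.shape)  # (6, 8)
```

Note that a location with zero uncertainty passes through unchanged, while a fully uncertain location is replaced entirely by bank context; the actual UPT block instead uses vertical-horizontal cross-attention (Figure 4).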

Paper Structure

This paper contains 43 sections, 13 equations, 11 figures, and 6 tables.

Figures (11)

  • Figure 1: PSNR/SSIM comparisons on the POLED and TOLED datasets. Each marker denotes a restoration model positioned by its PSNR (x-axis) and SSIM (y-axis). UCMNet lies in the upper-right region, delivering the best performance and computational efficiency among all competing methods.
  • Figure 2: Visual comparison of restored results (top row) and error maps (bottom row) among existing UDC restoration models and the proposed UCMNet. UCMNet shows fewer artifacts and more accurate texture reconstruction (blue: small errors, yellow: large errors).
  • Figure 3: Architecture of the proposed method for UDC image restoration. Our model follows a U-shaped encoder–decoder architecture. The core module of the encoding block is the Frequency Convolution Module (FCM), while the decoding block additionally incorporates the Uncertainty Prior Transformer (UPT) block for uncertainty-guided feature refinement.
  • Figure 4: Architecture of the Uncertainty-Prior Transformer (UPT) block. The uncertainty transformer refines the input feature $F_{in}$ by predicting an uncertainty map and retrieving context features from the Memory and Context Banks, producing the uncertainty-enhanced representation $\hat{F}$ through vertical–horizontal cross-attention. A subsequent vanilla transformer applies channel-wise self-attention to yield the final output $F_{out}$, improving restoration in uncertain and high-frequency regions.
  • Figure 5: Uncertainty maps are derived from the uncertainty-driven loss, where each decoding block includes parallel mean and variance estimators that jointly predict the restored image and its corresponding uncertainty map.
  • ...and 6 more figures
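Figure 5's caption indicates that each decoding block predicts both a restored image (mean) and a variance map, trained jointly by an uncertainty-driven loss. A standard formulation consistent with that description is the heteroscedastic Gaussian negative log-likelihood; the sketch below assumes that formulation, which the paper itself does not spell out here, so treat the exact loss as an assumption.

```python
import numpy as np

def uncertainty_driven_loss(mu, log_var, target):
    """Heteroscedastic Gaussian NLL (assumed form of the uncertainty loss).

    mu:      predicted (restored) image from the mean estimator
    log_var: predicted log-variance map from the variance estimator
    target:  ground-truth image

    Pixels with large residuals can be assigned large variance, which
    down-weights their squared error at the cost of a log-variance penalty,
    so the variance map acts as a per-pixel uncertainty estimate.
    """
    inv_var = np.exp(-log_var)
    return float(np.mean(0.5 * inv_var * (target - mu) ** 2 + 0.5 * log_var))

# A variance calibrated to the residual lowers the loss versus unit variance.
mu = np.zeros(2)
target = np.array([0.1, 2.0])                     # small vs. large residual
calibrated = np.log(target ** 2)                  # log_var = log(residual^2)
loss_unit = uncertainty_driven_loss(mu, np.zeros(2), target)
loss_cal = uncertainty_driven_loss(mu, calibrated, target)
print(loss_cal < loss_unit)
```

Minimizing this loss over `log_var` for a fixed residual r gives log_var = log(r^2), which is why the learned variance map tracks where the restoration error concentrates, e.g., diffraction-degraded high-frequency regions.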