
Divide and Restore: A Modular Task-Decoupled Framework for Universal Image Restoration

Joanna Wiekiera, Martyna Zur

Abstract

Restoring images affected by various types of degradation, such as noise, blur, or improper exposure, remains a significant challenge in computer vision. While recent trends favor complex monolithic all-in-one architectures, these models often suffer from negative task interference and require extensive joint training cycles on high-end computing clusters. In this paper, we propose a modular, task-decoupled image restoration framework based on an explicit diagnostic routing mechanism. The architecture consists of a lightweight Convolutional Neural Network (CNN) classifier that evaluates the input image and dynamically directs it to a specialized restoration node. A key advantage of this framework is its model-agnostic extensibility: while we demonstrate it using three independent U-Net experts, the system allows for the integration of any restoration method tailored to specific tasks. By isolating reconstruction paths, the framework prevents feature conflicts and significantly reduces training overhead. Unlike in monolithic models, adding a new degradation type to our framework requires only training a single expert and updating the router, rather than retraining the full system. Experimental results demonstrate that this computationally accessible approach offers a scalable and efficient solution for multi-degradation restoration on standard local hardware. The code will be published upon paper acceptance.
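
To make the routing mechanism concrete, the sketch below shows a diagnostic classifier and hard expert dispatch in PyTorch. It is a minimal illustration, not the reference implementation: the class names (DegradationClassifier, DaRNet), the layer widths, and the four-class label set (clean, noise, blur, overexposure) are our assumptions; any restoration model, U-Net or otherwise, can be registered as an expert.

import torch
import torch.nn as nn

class DegradationClassifier(nn.Module):
    """Lightweight CNN that predicts the degradation type of an input image."""
    def __init__(self, num_classes=4):  # assumed classes: clean, noise, blur, overexposure
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class DaRNet(nn.Module):
    """Explicit router: classify the degradation, then dispatch to one expert."""
    def __init__(self, classifier, experts):
        super().__init__()
        self.classifier = classifier
        # experts maps a class name to any restoration network (e.g. a U-Net)
        self.experts = nn.ModuleDict(experts)
        self.labels = ["clean", "noise", "blur", "overexposure"]

    @torch.no_grad()
    def forward(self, x):
        # hard routing decision; assumes a single image per batch for clarity
        label = self.labels[self.classifier(x).argmax(dim=1).item()]
        if label == "clean":
            return x  # no restoration needed
        return self.experts[label](x)

Because each expert is trained in isolation, supporting a new degradation type amounts to training one new expert and updating the classifier on the extended label set (for instance with experience replay, as in Figure 2); no other component is touched.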

Figures (5)

  • Figure 1: Overview of the DaR-Net framework. The input image is first evaluated by a lightweight CNN classifier, which routes it to the corresponding U-Net expert for task-specific restoration.
  • Figure 2: Classifier accuracy per degradation class across continual learning phases, evaluated on BSD68. Stars mark the phase in which each class was introduced; the Clean class has no star, as it corresponds to unmodified images present throughout all phases. Experience replay prevents accuracy drops on previously learned classes.
  • Figure 3: Gaussian noise denoising on a BSD68 test image across three noise levels. Each row shows the corrupted input, DaR-Net restoration, and ground truth, with metrics reported as PSNR [dB] / SSIM / LPIPS (Zhang et al., 2018) $\downarrow$. Top ($\sigma{=}15$): 30.61 / 0.908 / 0.153. Middle ($\sigma{=}25$): 27.95 / 0.841 / 0.218. Bottom ($\sigma{=}50$): 24.60 / 0.702 / 0.359.
  • Figure 4: Gaussian blur restoration on a BSD68 test image across three blur levels. Each row shows the corrupted input, DaR-Net restoration, and ground truth, with metrics reported as PSNR [dB] / SSIM / LPIPS $\downarrow$. Top ($\sigma{=}1.0$): 30.58 / 0.931 / 0.078. Middle ($\sigma{=}1.5$): 25.18 / 0.766 / 0.382. Bottom ($\sigma{=}2.6$): 24.05 / 0.702 / 0.344.
  • Figure 5: Overexposure correction on a BSD68 test image across three exposure levels. Each row shows the corrupted input, DaR-Net restoration, and ground truth, with metrics reported as PSNR [dB] / SSIM / LPIPS $\downarrow$. Top ($\gamma{=}1.4$): 24.22 / 0.963 / 0.068. Middle ($\gamma{=}1.7$): 22.58 / 0.908 / 0.126. Bottom ($\gamma{=}2.0$): 20.50 / 0.850 / 0.181.
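
For context, the corruptions shown in Figures 3-5 (additive Gaussian noise, Gaussian blur, and gamma-style overexposure) can be synthesized roughly as below. This is a minimal NumPy/SciPy sketch of plausible corruption operators; the kernel handling and the exposure mapping are our assumptions, not the paper's stated protocol.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def add_gaussian_noise(img, sigma):
    # img: float HxWxC array in [0, 255]; sigma on the same scale (15/25/50 in Fig. 3)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 255.0)

def add_gaussian_blur(img, sigma):
    # blur the two spatial axes only; leave the channel axis untouched
    return gaussian_filter(img, sigma=(sigma, sigma, 0))

def overexpose(img, gamma):
    # gamma-curve brightening: gamma > 1 pushes values toward white under this
    # convention (an assumption; the paper's exact exposure model may differ)
    return np.clip(255.0 * (img / 255.0) ** (1.0 / gamma), 0.0, 255.0)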