
Tunable Domain Adaptation Using Unfolding

Snehaa Reddy, Jayaprakash Katual, Satish Mulleti

Abstract

Machine learning models often struggle to generalize across domains with varying data distributions, such as differing noise levels, leading to degraded performance. Traditional strategies like personalized training, which trains separate models per domain, and joint training, which uses a single model for all domains, have significant limitations in flexibility and effectiveness. To address this, we propose two novel domain adaptation methods for regression tasks based on interpretable unrolled networks--deep architectures inspired by iterative optimization algorithms. These models leverage the functional dependence of select tunable parameters on domain variables, enabling controlled adaptation during inference. Our methods include Parametric Tunable-Domain Adaptation (P-TDA), which uses known domain parameters for dynamic tuning, and Data-Driven Tunable-Domain Adaptation (DD-TDA), which infers domain adaptation directly from input data. We validate our approach on compressed sensing problems involving noise-adaptive sparse signal recovery, domain-adaptive gain calibration, and domain-adaptive phase retrieval, demonstrating improved or comparable performance to domain-specific models while surpassing joint training baselines. This work highlights the potential of unrolled networks for effective, interpretable domain adaptation in regression settings.
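The unrolled networks referenced above follow the LISTA pattern: each layer mimics one iteration of ISTA, with the matrices and threshold learned from data. As a minimal sketch (not the paper's implementation; the names `W1`, `W2`, and `theta` are illustrative stand-ins for the learned quantities), a single layer can be written as:

```python
import numpy as np

def soft_threshold(v, theta):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lista_layer(x, y, W1, W2, theta):
    # One unrolled ISTA iteration. In LISTA, W1, W2, and theta are
    # trainable; in the tunable-domain setting they would additionally
    # depend on a domain variable such as the noise level.
    return soft_threshold(W1 @ y + W2 @ x, theta)

# Tiny illustration: with W1 = I and W2 = 0, one layer reduces to
# soft-thresholding the measurements y directly.
y = np.array([2.0, -0.3])
x0 = np.zeros(2)
x1 = lista_layer(x0, y, np.eye(2), np.zeros((2, 2)), 0.5)
```

Stacking a fixed number of such layers and training end to end yields the unrolled architecture; making `theta` (or the matrices) a function of a domain parameter is what enables the controlled adaptation described here.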

Paper Structure

This paper contains 31 sections, 30 equations, 15 figures, 5 tables.

Figures (15)

  • Figure 1: Layers/iterations of different LISTA architectures: (a) conventional LISTA; (b) a single layer of the P-TDA model with input data $\left\{ \mathbf{y, A}\right\}$ and domain parameter $\boldsymbol{\sigma}$ available during inference; (c) a single layer of the DD-TDA model with only input data $\left\{ \mathbf{y, A}\right\}$ available during inference. Note: green blocks represent data available at inference time, and gray blocks are trainable parameters.
  • Figure 2: Comparison of the proposed methods with the JT and PT methods for NA-LISTA across different SNR ranges. PT outperforms JT in each domain, and P-TDA/DD-TDA achieve performance comparable to PT, demonstrating the noise-tunability aspect.
  • Figure 3: Comparison of the proposed methods with the JT and PT methods for NA-LISTA for broad SNR ranges, averaged over 5 experiments.
  • Figure 4: Qualitative MNIST-CS reconstructions across SNR domains. Columns show ground truth (GT) and reconstructions from JT (LISTA), Tail-LISTA (tight), DDIM (diffusion model) and the proposed NA-LISTA with P-TDA and DD-TDA.
  • Figure 5: Qualitative MNIST-CS reconstructions at SNR $-10$dB. Columns show ground truth (GT) and reconstructions from JT (LISTA), Tail-LISTA (tight), DDIM (diffusion model) and the proposed NA-LISTA with P-TDA and DD-TDA.
  • ...and 10 more figures