Expectation Error Bounds for Transfer Learning in Linear Regression and Linear Neural Networks

Meitong Liu, Christopher Jung, Rui Li, Xue Feng, Han Zhao

Abstract

In transfer learning, the learner leverages auxiliary data to improve generalization on a main task. However, the precise theoretical understanding of when and how auxiliary data help remains incomplete. We provide new insights on this issue in two canonical linear settings: ordinary least squares regression and under-parameterized linear neural networks. For linear regression, we derive exact closed-form expressions for the expected generalization error with bias-variance decomposition, yielding necessary and sufficient conditions for auxiliary tasks to improve generalization on the main task. We also derive globally optimal task weights as outputs of solvable optimization programs, with consistency guarantees for empirical estimates. For linear neural networks with shared representations of width $q \leq K$, where $K$ is the number of auxiliary tasks, we derive a non-asymptotic expectation bound on the generalization error, yielding the first non-vacuous sufficient condition for beneficial auxiliary learning in this setting, as well as principled directions for task weight curation. We achieve this by proving a new column-wise low-rank perturbation bound for random matrices, which improves upon existing bounds by preserving fine-grained column structures. Our results are verified on synthetic data simulated with controlled parameters.
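To make the weighted transfer-learning setup concrete, here is a minimal Python sketch of a task-weighted least-squares estimator in which each auxiliary task's samples are scaled by $\sqrt{\lambda_k}$ before stacking; the function name `weighted_ols`, the sampling scheme, and all parameter values are illustrative assumptions rather than the paper's exact protocol.

```python
import numpy as np

def weighted_ols(X_main, y_main, aux_data, task_weights):
    """Task-weighted ordinary least squares (illustrative sketch).

    Stacks the main-task samples with each auxiliary task's samples,
    scaling the k-th auxiliary block by sqrt(lambda_k) so that the
    squared loss weights auxiliary residuals by lambda_k.
    """
    X_blocks, y_blocks = [X_main], [y_main]
    for (X_k, y_k), lam in zip(aux_data, task_weights):
        X_blocks.append(np.sqrt(lam) * X_k)
        y_blocks.append(np.sqrt(lam) * y_k)
    # Least-squares solution of the weighted, stacked system.
    w_hat, *_ = np.linalg.lstsq(np.vstack(X_blocks),
                                np.concatenate(y_blocks), rcond=None)
    return w_hat

# Tiny synthetic example (assumed parameters): one main and one auxiliary task.
rng = np.random.default_rng(0)
d, N = 5, 50
w_main = rng.normal(size=d)
w_aux = w_main + 0.1 * rng.normal(size=d)          # related auxiliary task
X_m = rng.normal(size=(N, d)); y_m = X_m @ w_main + 0.5 * rng.normal(size=N)
X_a = rng.normal(size=(N, d)); y_a = X_a @ w_aux + 0.5 * rng.normal(size=N)
w_hat = weighted_ols(X_m, y_m, [(X_a, y_a)], task_weights=[0.5])
print(np.linalg.norm(w_hat - w_main))              # distance to the true main model
```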

Paper Structure

This paper contains 37 sections, 21 theorems, 67 equations, 2 figures, and 1 table.

Key Result

theorem 1

Let $N$ be the number of samples and $d$ be the feature dimension with $N > d + 3$. Define the weight matrix $\Lambda = \operatorname{diag}(\{\sqrt{\lambda_k}\}_{k=1}^K \cup \{1\})$ and the task matrix $W^* = [w^*_1, \ldots, w^*_K, w^*_m]$, where $w^*_m$ (resp. $w^*_k$) is the true model for the main task (resp. the $k$-th auxiliary task). Then the expected generalization error on the main task admits a non-asymptotic upper bound [...], where $\mathcal{O}(\cdot)$ reflects the orders of $N$ and $d$, and $\sigma_{q+1}(\cdot)$ denotes the $(q+1)$-th largest singular value.
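As a concrete illustration of the quantities defined above, the following numpy sketch builds $\Lambda$ and $W^*$ for a small example with $K = 3$ and $q = 2$ and inspects the $(q+1)$-th singular value; since the full bound is not reproduced here, applying $\sigma_{q+1}(\cdot)$ to the weighted task matrix $W^*\Lambda$ is an assumption made only for illustration.

```python
import numpy as np

# Illustrative sizes (assumed): K = 3 auxiliary tasks, dimension d = 6,
# shared-representation width q = 2 <= K.
K, d, q = 3, 6, 2
rng = np.random.default_rng(1)

lambdas = np.array([0.5, 1.0, 0.25])                # auxiliary task weights lambda_k
Lambda = np.diag(np.concatenate([np.sqrt(lambdas), [1.0]]))  # diag({sqrt(lambda_k)} U {1})

w_aux = [rng.normal(size=d) for _ in range(K)]      # true auxiliary models w*_k
w_main = rng.normal(size=d)                         # true main-task model w*_m
W_star = np.column_stack(w_aux + [w_main])          # W* = [w*_1, ..., w*_K, w*_m], shape (d, K+1)

# Singular values of the weighted task matrix (assumed combination W* Lambda).
sing = np.linalg.svd(W_star @ Lambda, compute_uv=False)
print("sigma_{q+1} =", sing[q])                     # (q+1)-th largest singular value
```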

Figures (2)

  • Figure 1: Simulation results for both linear regression and linear neural networks.
  • Figure 2: Linear regression with $K=2$ (see the simulation sketch after this list).
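The figures themselves are not reproduced here. The sketch below is a minimal $K=2$ synthetic comparison in the spirit of the paper's controlled-parameter simulations: it contrasts the main-task excess risk of the estimator trained on main-task data alone ($\lambda_1 = \lambda_2 = 0$) with the task-weighted estimator ($\lambda_1 = \lambda_2 = 0.5$). All distributions, sample sizes, noise levels, and task weights are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
d, N, K = 10, 30, 2          # N > d + 3, as required by the theorem
sigma_noise = 1.0

# True models: auxiliary tasks are small perturbations of the main task (assumed).
w_main = rng.normal(size=d)
w_aux = [w_main + 0.2 * rng.normal(size=d) for _ in range(K)]

def excess_risk(lambdas):
    """Fit task-weighted least squares on fresh data; return ||w_hat - w_main||^2."""
    X_m = rng.normal(size=(N, d))
    y_m = X_m @ w_main + sigma_noise * rng.normal(size=N)
    X_blocks, y_blocks = [X_m], [y_m]
    for w_k, lam in zip(w_aux, lambdas):
        X_k = rng.normal(size=(N, d))
        y_k = X_k @ w_k + sigma_noise * rng.normal(size=N)
        X_blocks.append(np.sqrt(lam) * X_k)
        y_blocks.append(np.sqrt(lam) * y_k)
    w_hat, *_ = np.linalg.lstsq(np.vstack(X_blocks),
                                np.concatenate(y_blocks), rcond=None)
    # Excess risk for isotropic features reduces to the squared parameter error.
    return np.sum((w_hat - w_main) ** 2)

trials = 200
err_no_aux = np.mean([excess_risk([0.0, 0.0]) for _ in range(trials)])
err_aux = np.mean([excess_risk([0.5, 0.5]) for _ in range(trials)])
print(f"main-task only: {err_no_aux:.3f}   with auxiliary (lambda=0.5): {err_aux:.3f}")
```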

Theorems & Definitions (37)

  • theorem 1: informal; see Theorem 4.1
  • proposition 1: restated from Wu et al. (2020)
  • proposition 2: restated from Hu et al. (2023)
  • theorem 2
  • proof
  • corollary 1
  • theorem 3
  • proof
  • theorem 4
  • proof
  • ...and 27 more