Task Scarcity and Label Leakage in Relational Transfer Learning

Francisco Galuppo Azevedo, Clarissa Lima Loures, Denis Oliveira Correa

Abstract

Training relational foundation models requires learning representations that transfer across tasks, yet available supervision is typically limited to a small number of prediction targets per database. This task scarcity causes learned representations to encode task-specific shortcuts that degrade transfer even within the same schema, a problem we call label leakage. We study this using K-Space, a modular architecture combining frozen pretrained tabular encoders with a lightweight message-passing core. To suppress leakage, we introduce a gradient projection method that removes label-predictive directions from representation updates. On RelBench, this improves within-dataset transfer by +0.145 AUROC on average, often recovering near single-task performance. Our results suggest that limited task diversity, not just limited data, constrains relational foundation models.

Figures (2)

  • Figure 1: Schematic of the proposed architecture. A TabICL encoder (column embedder and row interactor) encodes table features; its output is concatenated with RWPE (random-walk positional encoding) and time features, then passed through an input projection and per-table-type RoPE (rotary position embedding) into a stack of $N$ Hetero (SMPNN) blocks with reversible (REV) and forward (FWD) paths. Zoomed views show the SMPNN block and its GCN and pointwise feedforward sub-blocks. An output projection feeds two heads: an adversarial MLP head and a TabICL predictor head. A data-flow sketch follows this list.
  • Figure 2: Geometry of per-sample gradients in the representation space of $h$, the shared representation just before the two heads. For each sample $x_i$, the light green arrow is the in-context learning gradient $\nabla_h \ell_{\text{ICL}}(x_i)$, the red arrow is the adversary gradient $\nabla_h \ell_{\text{adv}}(x_i)$, and the dark green arrow is the refined gradient $\tilde{\nabla}_h \ell_{\text{ICL}}(x_i)$ after subtracting its projection onto $\nabla_h \ell_{\text{adv}}(x_i)$ (dashed gray segment). The projection formula and a code sketch follow this list.
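
To make the Figure 1 data flow concrete, here is a minimal PyTorch sketch. It is an illustration, not the authors' implementation: the class names, dimensions, and the dense normalized adjacency `adj_norm` are our assumptions; heterogeneity over table types, RoPE, and the reversible (REV) path are omitted; and the frozen TabICL encoder is treated as a precomputed feature matrix `x_enc`.

```python
import torch
import torch.nn as nn


class SMPNNBlockSketch(nn.Module):
    """Stand-in for one SMPNN block: a GCN sub-block followed by a
    pointwise feedforward sub-block, both with residual connections."""

    def __init__(self, d_model: int):
        super().__init__()
        self.gcn_lin = nn.Linear(d_model, d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )

    def forward(self, h: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        h = h + adj_norm @ self.gcn_lin(h)  # message passing over the relational graph
        h = h + self.ffn(h)                 # pointwise feedforward
        return h


class KSpaceSketch(nn.Module):
    """Hypothetical end-to-end data flow of Figure 1 (all sizes assumed)."""

    def __init__(self, d_enc=512, d_pe=20, d_time=8, d_model=256, n_blocks=4, n_tasks=2):
        super().__init__()
        self.input_proj = nn.Linear(d_enc + d_pe + d_time, d_model)
        self.blocks = nn.ModuleList(SMPNNBlockSketch(d_model) for _ in range(n_blocks))
        self.output_proj = nn.Linear(d_model, d_model)
        self.pred_head = nn.Linear(d_model, 1)          # TabICL predictor head
        self.adv_head = nn.Sequential(                  # adversarial MLP head
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, n_tasks)
        )

    def forward(self, x_enc, rwpe, time_feat, adj_norm):
        # Frozen encoder output concatenated with positional and time features.
        h = self.input_proj(torch.cat([x_enc, rwpe, time_feat], dim=-1))
        for blk in self.blocks:
            h = blk(h, adj_norm)
        h = self.output_proj(h)  # shared representation fed to both heads
        return self.pred_head(h), self.adv_head(h)
```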
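The refinement in Figure 2 is a per-sample orthogonal projection: the ICL gradient with respect to the shared representation $h$ is stripped of its component along the adversary gradient,

$$\tilde{\nabla}_h \ell_{\text{ICL}}(x_i) = \nabla_h \ell_{\text{ICL}}(x_i) - \frac{\langle \nabla_h \ell_{\text{ICL}}(x_i), \nabla_h \ell_{\text{adv}}(x_i)\rangle}{\|\nabla_h \ell_{\text{adv}}(x_i)\|^2}\, \nabla_h \ell_{\text{adv}}(x_i),$$

so that, to first order, updates to $h$ carry no direction the adversarial head can use to predict the task label. Below is a minimal sketch under the assumption that both per-sample gradients are available as `(batch, dim)` tensors; the function name and placeholder losses are ours, not the paper's.

```python
import torch


def refine_icl_grad(grad_icl: torch.Tensor, grad_adv: torch.Tensor,
                    eps: float = 1e-12) -> torch.Tensor:
    """Subtract from each row of grad_icl its projection onto the
    corresponding row of grad_adv (the dashed segment in Figure 2)."""
    dot = (grad_icl * grad_adv).sum(dim=-1, keepdim=True)   # <g_icl, g_adv> per sample
    sq_norm = grad_adv.pow(2).sum(dim=-1, keepdim=True)     # ||g_adv||^2 per sample
    return grad_icl - (dot / (sq_norm + eps)) * grad_adv


# Usage: per-sample gradients w.r.t. h fall out of autograd directly,
# since each sample's loss depends only on its own row of h.
h = torch.randn(32, 256, requires_grad=True)
loss_icl = h.square().mean(dim=-1).sum()  # placeholder for the summed ICL losses
loss_adv = h.abs().mean(dim=-1).sum()     # placeholder for the summed adversary losses
g_icl = torch.autograd.grad(loss_icl, h, retain_graph=True)[0]
g_adv = torch.autograd.grad(loss_adv, h)[0]
g_refined = refine_icl_grad(g_icl, g_adv)  # shape (32, 256)
```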