Adaptive Subspace Modeling With Functional Tucker Decomposition

Noah Steidle, Joppe De Jonghe, Mariya Ishteva

Abstract

Tensors provide a structured representation for multidimensional data, yet discretization can obscure important information when such data originates from continuous processes. We address this limitation by introducing a functional Tucker decomposition (FTD) that embeds mode-wise continuity constraints directly into the decomposition. The FTD employs reproducing kernel Hilbert spaces (RKHS) to model continuous modes without requiring an a priori basis, while preserving the multi-linear subspace structure of the Tucker model. Through RKHS-driven representation, the model yields adaptive and expressive factor descriptions that enable targeted modeling of subspaces. The value of this approach is demonstrated in domain-variant tensor classification. In particular, we illustrate its effectiveness with classification tasks in hyperspectral imaging and multivariate time series analysis, highlighting the benefits of combining structural decomposition with functional adaptability.
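The Tucker model referenced in the abstract factors a tensor into a small core tensor multiplied by a factor matrix along each mode; the classical discrete variant is the higher-order SVD (HOSVD), which the paper uses as a baseline. The following sketch (not the paper's FTD, just the standard discrete HOSVD, assuming only NumPy) shows the multi-linear subspace structure the FTD preserves: factor matrices from the SVDs of the mode unfoldings, and a core obtained by projecting onto those subspaces.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, U, mode):
    """Mode-n product: multiply U onto the `mode`-th axis of T."""
    return np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: left singular vectors of each unfolding give the
    factor matrices; the core is T projected onto the factor subspaces."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_mult(core, U.T, m)
    return core, factors

def reconstruct(core, factors):
    T = core
    for m, U in enumerate(factors):
        T = mode_mult(T, U, m)
    return T

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 7, 8))
core, factors = hosvd(X, (6, 7, 8))  # full multilinear ranks: exact up to round-off
print(np.linalg.norm(X - reconstruct(core, factors)))
```

With full multilinear ranks the reconstruction is exact up to floating-point error; truncating the ranks yields the low-dimensional subspaces on which Tucker-based classification operates.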

Paper Structure

This paper contains 25 sections, 1 theorem, 44 equations, 12 figures, 1 table, 1 algorithm.

Key Result

Theorem 1

Denote by $\Omega:[0,\infty) \rightarrow \mathbb{R}$ a strictly monotonically increasing function, by $I=\{x_1,\ldots,x_p\}$ a set, and by $c:(I \times \mathbb{R}^2)^p \rightarrow \mathbb{R} \cup \{ \infty \}$ an arbitrary loss function. Then each minimizer $f^*\in\mathcal{H}_\mathcal{K}$ of the regularized risk
$$c\big((x_1,y_1,f(x_1)),\ldots,(x_p,y_p,f(x_p))\big)+\Omega\big(\|f\|_{\mathcal{H}_\mathcal{K}}\big)$$
admits a representation of the form
$$f^*(\cdot)=\sum_{i=1}^{p}\alpha_i\,\mathcal{K}(x_i,\cdot),\qquad \alpha_i\in\mathbb{R}.$$
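The representer theorem guarantees that the minimizer, although sought over an infinite-dimensional RKHS, is a finite kernel expansion over the sampling points. A minimal illustrative sketch (not the paper's FTD fitting procedure): with the squared loss and $\Omega(t)=\lambda t^2$, the coefficients $\alpha$ have the closed form of kernel ridge regression, $\alpha=(K+\lambda I)^{-1}y$. The RBF kernel, bandwidth, and toy data below are assumptions for illustration only.

```python
import numpy as np

def rbf(a, b, gamma=10.0):
    """Gaussian RBF kernel matrix K(a_i, b_j) for 1-D sample arrays."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

# Sampling points x_i and noisy observations y_i (illustrative data).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(20)

# Squared loss + Omega(t) = lam * t^2 gives the kernel ridge solution:
# alpha = (K + lam * I)^{-1} y, so f*(.) = sum_i alpha_i K(x_i, .),
# exactly the finite expansion promised by Theorem 1.
lam = 1e-3
K = rbf(x, x)
alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)

def f_star(t):
    """Evaluate the minimizer at new points t via the kernel expansion."""
    return rbf(np.atleast_1d(t), x) @ alpha

print(float(f_star(0.25)))  # near sin(pi/2) = 1 on this toy data
```

Because $f^*$ is a function rather than a vector of values, it can be re-evaluated at arbitrary new sampling points, which is what enables the domain transfer illustrated in Figure 1.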

Figures (12)

  • Figure 1: Overview of the domain transfer framework driven by the functional Tucker decomposition (FTD). (a) An input tensor $\mathbfcal{X}$ is interpreted as slices in the third mode corresponding to sampling points $x_1$, $x_2$, $x_3$, and $x_4$. (b) The FTD makes it possible to describe the functional mode in terms of different sampling points $\widetilde{x}_1$, $\widetilde{x}_2$, $\widetilde{x}_3$, and $\widetilde{x}_4$. (c) The adapted sampling points $\widetilde{x}_1$, $\widetilde{x}_2$, $\widetilde{x}_3$, and $\widetilde{x}_4$ make it possible to reconstruct the tensor $\mathbfcal{X}$ in a different domain, yielding a new tensor $\widetilde{\mathbfcal{X}}$.
  • Figure 2: Illustration of expanding a digit sample by an additional mode based on two independent smoothing splines.
  • Figure 3: Original and reconstruction of $\mathbfcal{X}_5(1,5,5,:)$.
  • Figure 4: Success rates of HOSVD- and FTD-driven classification for equal (l.h.s.) and different (r.h.s.) training and test domains for the semi-synthetic digit data. While the FTD and the HOSVD achieve comparable performance when the sampling points during training and test match, a noticeable performance gap emerges when different sampling points are used.
  • Figure 5: Macro F1 scores of HOSVD- and FTD-driven classification for equal (l.h.s.) and different (r.h.s.) training and test domains for the semi-synthetic digit data. As with the classification accuracy, the FTD and HOSVD perform similarly when training and test use identical sampling points. When the sampling points differ, however, the adaptive subspace modeling allows the FTD to adapt accordingly.
  • ...and 7 more figures

Theorems & Definitions (1)

  • Theorem 1: Representer theorem (Schölkopf & Smola, 2001)