Koopman Subspace Pruning in Reproducing Kernel Hilbert Spaces via Principal Vectors

Dhruv Shah, Jorge Cortes

Abstract

Data-driven approximations of the infinite-dimensional Koopman operator rely on finite-dimensional projections, and the predictive accuracy of the resulting models hinges on the invariance of the chosen subspace. Subspace pruning systematically discards geometrically misaligned directions to improve invariance proximity, which formally corresponds to the largest principal angle between the subspace and its image under the operator. Yet existing techniques are largely restricted to Euclidean settings. To bridge this gap, this paper presents an approach for computing principal angles and vectors that enables Koopman subspace pruning within a Reproducing Kernel Hilbert Space (RKHS) geometry. We first outline an exact computational routine, which is then scaled to large datasets using randomized Nyström approximations. Building on these foundations, we introduce the Kernel-SPV and Approximate Kernel-SPV algorithms for targeted subspace refinement via principal vectors. Simulation results validate our approach.
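
The core computation the paper builds on is the evaluation of principal angles between two subspaces of an RKHS using only Gram-matrix entries. Below is a minimal NumPy sketch of one standard way to do this, not the authors' exact routine: it assumes both subspaces are spanned by coefficient combinations of the kernel sections at the samples, and the names `rkhs_principal_angles`, `K`, `A`, and `B` are illustrative.

```python
import numpy as np

def rkhs_principal_angles(K, A, B, reg=1e-10):
    """Principal angles between span(Phi_X @ A) and span(Phi_X @ B)
    in an RKHS whose Gram matrix on the samples X is K.

    For f = Phi_X @ a and g = Phi_X @ b, the RKHS inner product is
    <f, g> = a.T @ K @ b, so all computations reduce to K. Assumes
    A and B have full column rank. Illustrative sketch only.
    """
    # Cholesky factor of the (regularized) Gram matrix: K ~= L @ L.T.
    L = np.linalg.cholesky(K + reg * np.eye(K.shape[0]))
    # a -> L.T @ a is an isometry into Euclidean space, since
    # (L.T @ a).T @ (L.T @ b) = a.T @ K @ b.
    QA, _ = np.linalg.qr(L.T @ A)  # orthonormal basis, first subspace
    QB, _ = np.linalg.qr(L.T @ B)  # orthonormal basis, second subspace
    # Cosines of the principal angles are the singular values of
    # QA.T @ QB (Bjorck-Golub); clip guards against round-off.
    sigma = np.linalg.svd(QA.T @ QB, compute_uv=False)
    return np.arccos(np.clip(sigma, -1.0, 1.0))
```

The largest returned angle is the invariance proximity measure the abstract refers to when `B` spans the image of the subspace under the operator.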

Paper Structure

This paper contains 14 sections, 3 theorems, 36 equations, 2 figures, and 2 algorithms.

Key Result

Lemma D.1

The basis vectors of $\mathcal{K} \mathcal{S}$ are given by $\mathcal{K} \mathcal{V} = \Phi_X W_{\mathcal{K} \mathcal{V}}$, where the coefficient matrix $W_{\mathcal{K} \mathcal{V}} \in \mathbb{R}^{N \times s}$ solves a linear system in the kernel matrices $K_{T(X),X}, K_{X,X} \in \mathbb{R}^{N \times N}$, defined according to the paper's kernel-matrix equation. $\blacktriangleleft$
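
Lemma D.1 thus reduces the image basis to a linear solve against the Gram matrix. The following is a hedged sketch, assuming the system takes the standard kernel-EDMD interpolation form $K_{X,X} W_{\mathcal{K}\mathcal{V}} = K_{T(X),X} W_{\mathcal{V}}$; this form is an assumption, not the lemma's verified statement, and the names `compute_W_KV`, `K_XX`, `K_TXX`, and `W_V` are illustrative.

```python
import numpy as np

def compute_W_KV(K_XX, K_TXX, W_V, reg=1e-8):
    """Sketch of Lemma D.1 under an ASSUMED linear system.

    ASSUMPTION: the system is the kernel-EDMD interpolation condition
    K_XX @ W_KV = K_TXX @ W_V, with K_TXX[i, j] = k(T(x_i), x_j),
    which re-expresses the image basis in the kernel sections Phi_X.
    The paper's exact statement may differ.
    """
    N = K_XX.shape[0]
    # Tikhonov regularization in place of an explicit pseudoinverse,
    # since Gram matrices are often numerically rank-deficient.
    return np.linalg.solve(K_XX + reg * np.eye(N), K_TXX @ W_V)
```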

Figures (2)

  • Figure F1: (Left) Residuals of the approximate orthonormal bases for $\mathcal{V}$ and $\mathcal{K}\mathcal{V}$ as a function of the number of Nyström samples $D$ (see the Nyström sketch after this list). (Right) Principal angles computed by the exact Kernel-SPV method and the approximate method.
  • Figure F2: Relative prediction error $\| \mathcal{K}^5 \phi - \lambda^5 \phi \|$ for the estimated eigenfunction $\phi$ corresponding to $\lambda \approx 1$, obtained from Kernel EDMD (left) and after pruning with Approximate Kernel-SPV (right).
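
The Nyström sample count $D$ in Figure F1 refers to the randomized Nyström approximation the abstract mentions for scaling to large datasets. Below is a sketch of the generic, textbook form of that approximation; the paper's variant may differ, and the names `nystrom_approx` and `landmarks` are illustrative.

```python
import numpy as np

def nystrom_approx(K, D, seed=None):
    """Randomized Nystrom approximation of an N x N PSD kernel
    matrix K from D uniformly sampled landmark columns.

    Returns factors (C, W_pinv) with K ~= C @ W_pinv @ C.T.
    Generic sketch; the paper's variant may differ.
    """
    rng = np.random.default_rng(seed)
    landmarks = rng.choice(K.shape[0], size=D, replace=False)
    C = K[:, landmarks]                  # N x D sampled columns
    W = K[np.ix_(landmarks, landmarks)]  # D x D landmark block
    return C, np.linalg.pinv(W)
```

In use, `C, W_pinv = nystrom_approx(K, D=200)` followed by `K_hat = C @ W_pinv @ C.T` reconstructs the kernel matrix, with accuracy generally improving as $D$ grows.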

Theorems & Definitions (10)

  • Remark C.4
  • Lemma D.1: Computing $W_{\mathcal{K} \mathcal{V}}$
  • Proof
  • Remark D.2: Computational Complexity
  • Lemma D.3
  • Proof
  • Remark D.4
  • Theorem D.5
  • Proof
  • Remark E.1