
Inducing Riesz and orthonormal bases in $L^2$ via composition operators

Yahya Saleh, Armin Iske

TL;DR

This work characterizes when a composition operator $C_h$ between $L^2$ spaces preserves basis properties, focusing on Riesz and orthonormal bases. It establishes precise conditions on the inducing map $h$: for general (non-singular) $h$, a Riesz basis of $L^2(\Omega_1)$ transforms to a Riesz basis of $L^2(\Omega_2)$ exactly when $h$ is injective and the Radon–Nikodym derivative $g_h$ is bounded above and below by positive constants, with the dual basis given by a multiplication-augmented transform; in the differentiable case, the criterion becomes a bound on the Jacobian determinant, $r \le \det J_h \le R$ a.e., ensuring bijectivity. The authors connect these results to approximation theory and propose constructing bases via invertible neural networks (normalizing flows) to achieve bases with favorable approximation properties. A simple numerical example illustrates how composing an orthonormal basis with a learned map can yield superior approximation of target functions, highlighting the potential for problem-specific basis design in $L^2$ settings. Overall, the paper provides a rigorous framework for inducing Riesz/orthonormal bases through structured composition operators and demonstrates a practical avenue for basis optimization using bijective neural models.

Abstract

Let $C_h$ be a composition operator mapping $L^2(Ω_1)$ into $L^2(Ω_2)$ for some open sets $Ω_1, Ω_2 \subseteq \mathbb{R}^n$. We characterize the mappings $h$ that transform Riesz bases of $L^2(Ω_1)$ into Riesz bases of $L^2(Ω_2)$. Restricting our analysis to differentiable mappings, we demonstrate that mappings $h$ that preserve Riesz bases have Jacobian determinants that are bounded away from zero and infinity. We discuss implications of these results for approximation theory, highlighting the potential of using bijective neural networks to construct Riesz bases with favorable approximation properties.
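The differentiable criterion — a Jacobian determinant bounded away from zero and infinity — is easy to check numerically for a concrete map. The sketch below uses a hypothetical one-dimensional example, $h(x) = x + \tfrac{1}{2}\tanh(x)$ (not a map from the paper), whose derivative lies in $[1, 1.5]$, so it satisfies the bound $r \le \det J_h \le R$ and is strictly increasing, hence injective:

```python
import numpy as np

# Hypothetical 1-D inducing map (illustration only, not from the paper):
#   h(x) = x + 0.5*tanh(x),  h'(x) = 1 + 0.5/cosh(x)^2 in [1, 1.5].
# The derivative is bounded away from 0 and infinity, so the
# differentiable Riesz-basis criterion is satisfied.
def h(x):
    return x + 0.5 * np.tanh(x)

def dh(x):
    return 1.0 + 0.5 / np.cosh(x) ** 2

x = np.linspace(-10.0, 10.0, 100001)
jac = dh(x)
print(f"inf det J_h ~ {jac.min():.4f}, sup det J_h ~ {jac.max():.4f}")

# h' > 0 everywhere, so h is strictly increasing, hence injective.
assert np.all(np.diff(h(x)) > 0)
```

Any strictly monotone map with a derivative pinched between two positive constants passes the same check; the grid bounds here only sample the criterion, they do not prove it.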

Paper Structure

This paper contains 5 sections, 6 theorems, 23 equations, 4 figures, 1 table.

Key Result

Theorem 2.1

A non-singular mapping $h: \Omega_2 \to \Omega_1$ induces a composition operator from $L^2(\Omega_1)$ into $L^2(\Omega_2)$ if and only if $g_h$ is bounded $\mu$-a.e. on $\Omega_1$. In this case, the norm of $C_h$ is given by $\|C_h\| = \|g_h\|_{\infty}^{1/2}$.
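The boundedness in Theorem 2.1 can be sanity-checked numerically. The sketch below uses a hypothetical example not taken from the paper: $\Omega_1 = (0,1)$, $\Omega_2 = (0,2)$, and $h(y) = y/2$, for which change of variables gives $g_h \equiv 2$, so the ratio $\|C_h f\| / \|f\|$ should equal $\sqrt{2}$ for every $f$:

```python
import numpy as np

# Hypothetical example (not from the paper): Omega_1 = (0,1),
# Omega_2 = (0,2), h(y) = y/2 maps Omega_2 onto Omega_1.
# Here g_h = 2 identically, so ||C_h f|| / ||f|| = sqrt(2) for all f.
def l2_norm(f, a, b, n=200001):
    x = np.linspace(a, b, n)
    y = f(x) ** 2
    # trapezoidal rule for the squared L^2 norm
    return np.sqrt(np.sum((y[:-1] + y[1:]) / 2.0) * (x[1] - x[0]))

h = lambda y: y / 2.0
for f in (np.sin, np.exp, lambda t: t ** 3 + 1.0):
    num = l2_norm(lambda y: f(h(y)), 0.0, 2.0)  # ||C_h f|| on Omega_2
    den = l2_norm(f, 0.0, 1.0)                  # ||f||     on Omega_1
    print(f"||C_h f|| / ||f|| = {num / den:.6f}")
```

That the ratio is constant across test functions reflects $g_h$ being constant; for a general $h$ the ratio varies with $f$ but stays below $\mathrm{ess\,sup}(g_h)^{1/2}$.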

Figures (4)

  • Figure 1: Plot of the target functions $f_1$ (panel a) and $f_2$ (panel b).
  • Figure 2: Plot of the bi-Lipschitz mappings $h_1$ and $h_2$ used to perturb Hermite functions for approximating the target functions $f_1$ and $f_2$.
  • Figure 3: Convergence of the $L^2$-error in approximating the functions $f_1$ (panel a) and $f_2$ (panel b) in the linear span of Hermite functions and the perturbed bases.
  • Figure 4: Plotted are the Hermite functions $(\gamma_n)_{n=0}^3$ (solid black lines), along with their perturbations $(\gamma_n \circ h_1)_{n=0}^3$ (red lines with triangle markers) and $(\gamma_n \circ h_2)_{n=0}^3$ (blue lines with circle markers). The functions corresponding to $n=0, \ 1,\ 2,\ 3$ are plotted in panels a, b, c, d, respectively.
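The experiment behind these figures — approximating a target in the span of Hermite functions versus a composed basis $(\gamma_n \circ h)$ — can be sketched in a few lines. The example below is a simplification under stated assumptions: the target is an off-centre Gaussian and the perturbation is a unit shift $h(x) = x - 1$ (a trivially bi-Lipschitz map chosen for illustration, not the paper's learned $h_1$ or $h_2$):

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def hermite_fn(n, x):
    """Orthonormal Hermite function gamma_n(x)."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    norm = math.sqrt(2.0 ** n * math.factorial(n) * math.sqrt(math.pi))
    return hermval(x, c) * np.exp(-x ** 2 / 2.0) / norm

# Hypothetical bi-Lipschitz perturbation (derivative identically 1,
# hence bounded away from 0 and infinity): a unit shift.
h = lambda x: x - 1.0

x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
f = np.exp(-(x - 1.0) ** 2 / 2.0)  # off-centre Gaussian target

N = 6
B_plain = np.column_stack([hermite_fn(n, x) for n in range(N)])
B_pert = np.column_stack([hermite_fn(n, h(x)) for n in range(N)])

def l2_error(B):
    # best L^2 approximation of f in the span of the columns of B
    c, *_ = np.linalg.lstsq(B, f, rcond=None)
    return math.sqrt(np.sum((f - B @ c) ** 2) * dx)

print(f"plain Hermite error:   {l2_error(B_plain):.3e}")
print(f"perturbed basis error: {l2_error(B_pert):.3e}")
```

Since the shifted Gaussian lies exactly in the span of the perturbed basis, its error drops to machine precision while the unperturbed Hermite expansion converges slowly; the paper's point is that a learned bijective $h$ can produce a similar gap for targets where no closed-form shift is available.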

Theorems & Definitions (17)

  • Theorem 2.1
  • proof
  • Lemma 3.1
  • proof
  • Theorem 3.1: Induced Riesz Basis
  • proof
  • Remark 3.1
  • Theorem 3.2
  • proof
  • Theorem 3.3: Induced Orthonormal Bases
  • ...and 7 more