Inducing Riesz and orthonormal bases in $L^2$ via composition operators
Yahya Saleh, Armin Iske
TL;DR
This work characterizes when a composition operator $C_h$ between $L^2$ spaces preserves basis properties, focusing on Riesz and orthonormal bases. It establishes precise conditions on the inducing map $h$: for a general non-singular $h$, a Riesz basis of $L^2(\Omega_1)$ is transformed into a Riesz basis of $L^2(\Omega_2)$ exactly when $h$ is injective and the associated Radon–Nikodym derivative $g_h$ is bounded above and below by positive constants, with the dual basis given by a weighted (multiplication-augmented) composition; in the differentiable case, the criterion becomes a two-sided bound on the Jacobian determinant, $0 < r \le |\det J_h| \le R$ a.e., ensuring that $h$ is bijective. The authors connect these results to approximation theory and propose constructing bases via invertible neural networks (normalizing flows) to obtain bases with favorable approximation properties. A simple numerical example illustrates how composing an orthonormal basis with a learned map can yield superior approximation of target functions, highlighting the potential of problem-specific basis design in $L^2$ settings. Overall, the paper provides a rigorous framework for inducing Riesz and orthonormal bases through composition operators and demonstrates a practical avenue for basis optimization using bijective neural models.
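In symbols, with $C_h f = f \circ h$ and $g_h$ the Radon–Nikodym derivative associated with $h$, the characterization above reads (a schematic restatement assembled from this summary, not the paper's verbatim theorem):
\[
\{C_h f_k\}_{k \in \mathbb{N}} \text{ is a Riesz basis of } L^2(\Omega_2) \text{ for every Riesz basis } \{f_k\}_{k \in \mathbb{N}} \text{ of } L^2(\Omega_1)
\iff
h \text{ is injective and } r \le g_h \le R \text{ a.e. for some constants } 0 < r \le R < \infty.
\]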
Abstract
Let $C_h$ be a composition operator mapping $L^2(\Omega_1)$ into $L^2(\Omega_2)$ for some open sets $\Omega_1, \Omega_2 \subseteq \mathbb{R}^n$. We characterize the mappings $h$ that transform Riesz bases of $L^2(\Omega_1)$ into Riesz bases of $L^2(\Omega_2)$. Restricting our analysis to differentiable mappings, we demonstrate that mappings $h$ that preserve Riesz bases have Jacobian determinants that are bounded away from zero and infinity. We discuss implications of these results for approximation theory, highlighting the potential of using bijective neural networks to construct Riesz bases with favorable approximation properties.
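As a toy illustration of that last point (a minimal sketch, not the paper's experiment), the Python snippet below compares least-squares approximation of a target $f$ in an orthonormal Legendre basis of $L^2(0,1)$ against approximation in the composed system $\{e_k \circ h\}$, where the smooth bijection $h$ is a hand-picked stand-in for a learned normalizing flow; the target, the map, and all parameters are illustrative assumptions.

```python
import numpy as np
from scipy.special import eval_legendre

def e(k, x):
    """Orthonormal Legendre polynomial on (0, 1): ||e_k||_{L^2(0,1)} = 1."""
    return np.sqrt(2 * k + 1) * eval_legendre(k, 2 * x - 1)

def h(x, a=5.0):
    """Smooth increasing bijection of (0, 1) that stretches the region near 0.

    h'(x) = (e^a - 1) / (a * (1 + (e^a - 1) x)) is bounded above and below by
    positive constants, so {e_k o h} is a Riesz basis by the paper's criterion.
    """
    return np.log1p((np.exp(a) - 1.0) * x) / a

def l2_error(design, f, dx):
    """L^2 error of the discrete least-squares projection of f onto span(design)."""
    coeffs, *_ = np.linalg.lstsq(design, f, rcond=None)
    return np.sqrt(dx * np.sum((f - design @ coeffs) ** 2))

n, K = 4000, 12
x = (np.arange(n) + 0.5) / n                # midpoint grid on (0, 1)
dx = 1.0 / n
f = np.tanh(60.0 * (x - 0.03))              # target with a sharp front near x = 0

plain = np.column_stack([e(k, x) for k in range(K)])
composed = np.column_stack([e(k, h(x)) for k in range(K)])  # C_h e_k = e_k o h

print("plain    L2 error:", l2_error(plain, f, dx))
print("composed L2 error:", l2_error(composed, f, dx))
```

Because $h$ expands the neighborhood of $x = 0$, the composed basis oscillates fastest exactly where $f$ varies fastest; choosing such an $h$ automatically, by training an invertible network, is the basis-optimization avenue the abstract refers to.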
