
A Neural Tension Operator for Curve Subdivision across Constant Curvature Geometries

Hassan Ugail, Newton Howard

Abstract

Interpolatory subdivision schemes generate smooth curves from piecewise-linear control polygons by repeatedly inserting new vertices. Classical schemes rely on a single global tension parameter and typically require separate formulations in Euclidean, spherical, and hyperbolic geometries. We introduce a shared learned tension predictor that replaces the global parameter with per-edge insertion angles predicted by a single 140K-parameter network. The network takes local intrinsic features and a trainable geometry embedding as input, and the predicted angles drive geometry-specific insertion operators across all three spaces without architectural modification. A constrained sigmoid output head enforces a structural safety bound, guaranteeing that every inserted vertex lies within a valid angular range for any finite weight configuration. Three theoretical results accompany the method: a structural guarantee of tangent-safe insertions; a heuristic motivation for per-edge adaptivity; and a conditional convergence certificate for continuously differentiable limit curves, subject to an explicit Lipschitz constraint verified post hoc. On 240 held-out validation curves, the learned predictor occupies a distinct position on the fidelity–smoothness Pareto frontier, achieving markedly lower bending energy and angular roughness than all fixed-tension and manifold-lift baselines. Riemannian manifold lifts retain a pointwise-fidelity advantage, which this study quantifies directly. On the out-of-distribution ISS orbital ground-track example, bending energy falls by 41% and angular roughness by 68% with only a modest increase in Hausdorff distance, suggesting that the predictor generalises beyond its synthetic training distribution.
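To make the role of the per-edge angle concrete, the following is a minimal sketch of one plausible Euclidean ($\mathbb{E}^2$) insertion operator, assuming the new vertex is placed on the perpendicular bisector of each edge so that it subtends the predicted angle $\alpha_j$ with the chord. The function names and this parameterisation are illustrative rather than the paper's definition, and the analogous $\mathbb{S}^2$ and $\mathbb{H}^2$ operators are omitted.

```python
import numpy as np

def insert_vertex_e2(p: np.ndarray, q: np.ndarray, alpha: float) -> np.ndarray:
    """Hypothetical Euclidean insertion: place the new vertex on the
    perpendicular bisector of edge (p, q), offset so that the angle between
    the chord and the segment from p to the new vertex equals alpha.
    With |alpha| < pi/4 the offset is at most half the chord length."""
    mid = 0.5 * (p + q)
    chord = q - p
    normal = np.array([-chord[1], chord[0]])       # left normal of the edge
    normal /= np.linalg.norm(normal)
    offset = 0.5 * np.linalg.norm(chord) * np.tan(alpha)
    return mid + offset * normal

def subdivide_once(polygon: np.ndarray, alphas: np.ndarray) -> np.ndarray:
    """One interpolatory refinement level on a closed control polygon:
    old vertices are kept, one new vertex is inserted per edge."""
    refined = []
    n = len(polygon)
    for j in range(n):
        p, q = polygon[j], polygon[(j + 1) % n]
        refined.append(p)
        refined.append(insert_vertex_e2(p, q, alphas[j]))
    return np.array(refined)
```

With $\alpha_j \equiv 0$ every insertion degenerates to the chord midpoint; iterating $k$ times multiplies the vertex count by $2^k$, matching the $k{=}4$ refinement levels used in the figures.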

Paper Structure

This paper contains 38 sections, 5 theorems, 36 equations, 9 figures, 5 tables, and 1 algorithm.

Key Result

Theorem 1

Let $f_\theta : \mathbb{R}^7 \to \mathbb{R}$ denote the scalar output of the tension predictor before the final rescaling. Define the rescaled insertion angle $\alpha_\theta(\mathbf{x}) = \alpha_{\min} + (\alpha_{\max} - \alpha_{\min})\,\sigma\bigl(f_\theta(\mathbf{x})\bigr)$, with $(\alpha_{\min}, \alpha_{\max}) = (-\tfrac{\pi}{4}+0.02,\; \tfrac{\pi}{4}-0.02)$ and $\sigma$ the standard logistic function. Then for any $\theta \in \mathbb{R}^n$ and any input $\mathbf{x}_j \in \mathbb{R}^7$, $\alpha_\theta(\mathbf{x}_j) \in (\alpha_{\min}, \alpha_{\max}) \subset \bigl(-\tfrac{\pi}{4}, \tfrac{\pi}{4}\bigr)$.
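The bound follows from the strict range of the logistic function. Below is a minimal sketch of the constrained output head; the function names are ours, and `f_theta_x` stands for the raw scalar output $f_\theta(\mathbf{x}_j)$.

```python
import math

ALPHA_MIN = -math.pi / 4 + 0.02
ALPHA_MAX = math.pi / 4 - 0.02

def _sigmoid(t: float) -> float:
    """Numerically stable logistic function; strictly inside (0, 1) for
    finite t in exact arithmetic (float rounding can saturate)."""
    if t >= 0:
        return 1.0 / (1.0 + math.exp(-t))
    e = math.exp(t)
    return e / (1.0 + e)

def constrained_head(f_theta_x: float) -> float:
    """Affine rescaling of the sigmoid into (ALPHA_MIN, ALPHA_MAX), so the
    predicted insertion angle stays in the tangent-safe range for any
    finite weight configuration, as Theorem 1 asserts."""
    return ALPHA_MIN + (ALPHA_MAX - ALPHA_MIN) * _sigmoid(f_theta_x)
```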

Figures (9)

  • Figure 1: Qualitative comparison on Euclidean $\mathbb{E}^2$ (near-circular polygon). Four panels: control polygon; four-point ($\mu{=}0$); six-point ($\mu{=}{-}0.25$); learned predictor ($k{=}4$ levels). Ground truth dashed (grey).
  • Figure 2: Qualitative comparison on Euclidean $\mathbb{E}^2$ (elongated ellipse). Four panels: control polygon; four-point ($\mu{=}0$); six-point ($\mu{=}{-}0.25$); learned predictor ($k{=}4$ levels). Ground truth dashed (grey). High-curvature tips visible at either end of the major axis.
  • Figure 3: Qualitative comparison on Spherical $\mathbb{S}^2$ (four-lobe curve). Four panels as in Fig. 1. Angular oscillations are visible at each lobe in the classical outputs.
  • Figure 4: Qualitative comparison on Spherical $\mathbb{S}^2$ (irregular open-style curve). Four panels as in Fig. 1. Severe jagged artefacts are visible in the classical outputs.
  • Figure 5: Metric comparison for the three primary baselines across 240 validation curves at the epoch-200 checkpoint. Rows: $\mathbb{E}^2$, $\mathbb{S}^2$, $\mathbb{H}^2$. Columns: symmetric Hausdorff distance ($d_H$), bending energy, and $G^1$ proxy (lower is better for all). Error bars show $\pm1$ standard deviation. Annotated percentages give the Hausdorff change relative to the four-point baseline. Log-exp lifts and the oracle are omitted for clarity; see Table \ref{tab:aggregate} for the full comparison. (A sketch of discrete forms of these metrics follows this figure list.)
  • ...and 4 more figures
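The metrics reported in Figure 5 admit simple discrete forms. The sketch below uses one common discretisation: bending energy as squared turning angle per unit length, a mean-turning-angle proxy for $G^1$ roughness on an open polyline, and a brute-force point-sample approximation of the symmetric Hausdorff distance. The paper's exact definitions may differ, and the function names are ours.

```python
import numpy as np

def turning_angles(curve: np.ndarray) -> np.ndarray:
    """Unsigned turning angle at each interior vertex of an open polyline."""
    d = np.diff(curve, axis=0)                               # edge vectors
    u = d / np.linalg.norm(d, axis=1, keepdims=True)         # unit edges
    cos = np.clip(np.einsum("ij,ij->i", u[:-1], u[1:]), -1.0, 1.0)
    return np.arccos(cos)

def bending_energy(curve: np.ndarray) -> float:
    """Discrete integral of squared curvature: sum of theta_j^2 / h_j,
    with h_j the mean length of the two edges meeting at vertex j."""
    lengths = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    h = 0.5 * (lengths[:-1] + lengths[1:])
    return float(np.sum(turning_angles(curve) ** 2 / h))

def g1_roughness(curve: np.ndarray) -> float:
    """Mean turning angle: a simple proxy for G^1 (tangent) discontinuity."""
    return float(np.mean(turning_angles(curve)))

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Brute-force symmetric Hausdorff distance between two point samples."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```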

Theorems & Definitions (13)

  • Theorem 1
  • Proof
  • Remark 1
  • Remark 2: Heuristic motivation for adaptivity
  • Remark 3: Empirical ceiling of fixed-tension
  • Theorem 2
  • Proof
  • Corollary 3
  • Proof
  • Remark 4
  • ...and 3 more