
Light Cones For Vision: Simple Causal Priors For Visual Hierarchy

Manglam Kartik, Neel Tushar Shah

Abstract

Standard vision models treat objects as independent points in Euclidean space and so cannot capture hierarchical structure such as parts within wholes. We introduce Worldline Slot Attention, which models objects as persistent trajectories through spacetime (worldlines): each object has multiple slots at different hierarchy levels that share the same spatial position but differ in temporal coordinate. The architecture consistently fails without geometric structure: Euclidean worldlines achieve 0.078 level accuracy, below random chance (0.33), while Lorentzian worldlines achieve 0.479-0.661 across three datasets, a 6x improvement replicated over 20+ independent runs. Lorentzian geometry also outperforms hyperbolic embeddings, showing that visual hierarchies require causal structure (temporal dependency) rather than tree structure (radial branching). Our results demonstrate that hierarchical object discovery requires a geometry encoding asymmetric causality, an inductive bias absent from Euclidean space but natural to Lorentzian light cones, achieved here with only 11K parameters. The code is available at: https://github.com/iclrsubmissiongram/loco.
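As background for the abstract's geometric claim, the following sketch shows why a Lorentzian (Minkowski) interval yields an asymmetric causal relation between events, unlike a symmetric Euclidean distance. The events, coordinate layout, and signature convention here are illustrative only, not the paper's implementation:

```python
import math

def minkowski_interval_sq(p, q):
    """Squared Minkowski interval between events p = (t, x, y) and q = (t', x', y').
    Signature (-, +, +): a negative value means timelike separation."""
    dt = q[0] - p[0]
    spatial_sq = sum((qi - pi) ** 2 for pi, qi in zip(p[1:], q[1:]))
    return -dt ** 2 + spatial_sq

def in_future_light_cone(parent, child):
    """True iff `child` lies inside the future light cone of `parent`,
    i.e. `parent` can causally precede `child`. Unlike a Euclidean
    distance test, this relation is asymmetric in its two arguments."""
    dt = child[0] - parent[0]
    return dt > 0 and minkowski_interval_sq(parent, child) < 0

# Hypothetical slots: a coarse "whole" at an early time and a nearby
# finer "part" at a later time, sharing roughly the same spatial position.
whole = (0.0, 0.5, 0.5)
part = (1.0, 0.6, 0.5)

print(in_future_light_cone(whole, part))  # the whole can precede the part
print(in_future_light_cone(part, whole))  # but not the reverse
```

The asymmetry is the point: swapping the arguments flips the sign of dt, so only one ordering of the two events satisfies the light-cone test.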



Figures (4)

  • Figure 1: CLEVR hierarchical point cloud visualization. Top left: Single scene colored by object identity (6 objects). Top right: Same scene colored by hierarchy level. Bottom: Four example scenes with varying object counts (3, 6, 9, 10 objects). Each object decomposes into three hierarchy levels with density-based structure: sparse cores (red, L0), medium surfaces (blue, L1), dense interiors (orange, L2). This density stratification enables our Lorentzian worldline method to discover hierarchy via local k-NN distances mapped to temporal coordinates.
  • Figure : Car with its subparts
  • Figure : Car with its subparts
  • Figure : Lorentzian Cones in Minkowski Space
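The Figure 1 caption describes discovering hierarchy by mapping local k-NN distances to temporal coordinates. A minimal sketch of one way such a mapping could look; the choice of k, the min-max normalization, and the sparse-points-get-early-times convention are assumptions for illustration, not the paper's actual procedure:

```python
import math

def knn_distance(points, k=3):
    """Mean distance from each point to its k nearest neighbours.
    Small values indicate dense regions; large values indicate sparse regions."""
    out = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        out.append(sum(dists[:k]) / k)
    return out

def to_temporal(knn, t_min=0.0, t_max=1.0):
    """Map k-NN distances to a temporal coordinate via min-max scaling.
    Convention assumed here: sparse points (large k-NN distance, coarse
    structure) get early times; dense points get late times."""
    lo, hi = min(knn), max(knn)
    return [t_min + (t_max - t_min) * (hi - d) / (hi - lo) for d in knn]

# Toy point cloud: a dense 2D cluster plus one sparse outlying point.
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
t = to_temporal(knn_distance(pts, k=3))
```

Under this convention the sparse point receives the earliest temporal coordinate, so it can sit at the apex of a light cone containing the denser, later points.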