Night Eyes: A Reproducible Framework for Constellation-Based Corneal Reflection Matching

Virmarie Maquiling, Yasmeen Abdrabou, Enkelejda Kasneci

Abstract

Corneal reflection (glint) detection plays an important role in pupil-corneal reflection (P-CR) eye tracking, but in practice it is often handled as heuristics embedded within larger systems, making reproducibility difficult across hardware setups. We introduce a 2D geometry-driven, constellation-based pipeline for multi-glint detection and matching, focusing on reproducibility and clear evaluation. Inspired by lost-in-space star identification, we treat glints as structured constellations rather than independent blobs. We propose a Similarity-Layout Alignment (SLA) procedure that adapts constellation matching to the specific constraints of multi-LED eye tracking. The framework brings together controlled over-detection, adaptive candidate fallback, appearance-aware scoring, and optional semantic layout priors while keeping detection and correspondence explicitly separated. Evaluated on a public multi-LED dataset, the system provides stable identity-preserving correspondence under noisy conditions. We release code, presets, and evaluation scripts to enable transparent replication, comparison, and dataset annotation.
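The core idea of constellation-based matching can be illustrated with a toy sketch: fit a 2D similarity transform (scale, rotation, translation) that maps a known LED template layout onto a subset of detected glint candidates, and keep the assignment with the lowest projection residual. This is not the paper's SLA procedure; the brute-force permutation search, function names, and residual criterion below are illustrative assumptions for a small number of LEDs.

```python
import itertools
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2D similarity transform mapping src points onto dst.
    Points are treated as complex numbers, so the transform is z -> a*z + b
    (a encodes scale and rotation; b encodes translation)."""
    s = src[:, 0] + 1j * src[:, 1]
    d = dst[:, 0] + 1j * dst[:, 1]
    s_c = s - s.mean()                      # center both point sets
    d_c = d - d.mean()
    a = np.vdot(s_c, d_c) / np.vdot(s_c, s_c)   # complex scale + rotation
    b = d.mean() - a * s.mean()                 # translation
    return a, b

def match_constellation(template, candidates):
    """Exhaustively assign template LEDs to candidate glints; return the
    residual and assignment whose best-fit similarity transform projects
    the template closest to the chosen candidates."""
    t = template[:, 0] + 1j * template[:, 1]
    best_residual, best_perm = np.inf, None
    for perm in itertools.permutations(range(len(candidates)), len(template)):
        pts = candidates[list(perm)]
        a, b = fit_similarity(template, pts)
        proj = a * t + b                        # template projected into image
        residual = np.abs(proj - (pts[:, 0] + 1j * pts[:, 1])).sum()
        if residual < best_residual:
            best_residual, best_perm = residual, list(perm)
    return best_residual, best_perm
```

Because the transform is estimated from the whole constellation rather than per blob, a spurious reflection far from the LED layout inflates the residual of any assignment that includes it and is naturally rejected.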

Paper Structure

This paper contains 16 sections, 2 figures, and 3 tables.

Figures (2)

  • Figure 1: Per-subject results from the winning hyperparameter sweep, evaluated on the dataset of Chugh et al. [chugh2021detection]
  • Figure 2: Success and failure cases across three eye trackers with different LED configurations. Images are from the dataset of Chugh et al. [chugh2021detection], OpenEDS2019 [garbin2019openeds], and OpenEDS2020 [palmero2020openeds2020]. Blue circles mark the detected glints, while green circles show template-projected positions based on the chosen candidate glints.