Topological Sensitivity in Connectome-Constrained Neural Networks

Nalin Dhiman

Abstract

Connectome-constrained neural networks are often evaluated against sparse random controls and then interpreted as evidence that biological graph topology improves learning efficiency. We revisit that claim in a controlled flyvis-based study using a Drosophila connectome, a naive self-loop-matched random graph, and a degree-preserving rewired null. Under weak controls, in which both models were recovered from a connectome-trained checkpoint and the null matched only global graph counts, the connectome appeared substantially better in early loss, mean activity, and runtime. That picture changed under stricter controls. Training both graphs from a shared random initialization removed the early loss advantage, and replacing the naive null with a degree-preserving null removed the apparent activity advantage. A five-sample degree-preserving ensemble and a pre-training activity-scale diagnostic further strengthened this revised interpretation. We also report a descriptive mechanism analysis of the earlier weak-control comparison, but we treat it as behavioral characterization rather than proof of causal superiority. We show that previously reported topology advantages in connectome-constrained neural networks can arise from initialization and null-model confounds, and largely disappear under fair from-scratch initialization and degree-preserving controls.

Paper Structure

This paper contains 44 sections, 10 equations, 6 figures, and 3 tables.

Figures (6)

  • Figure 1: Control ladder used in the revision study. The original observation compared a connectome graph to a self-loop-matched random graph after checkpoint recovery. The corrected analysis then removed checkpoint initialization and strengthened the null model by preserving the directed degree sequence. The substantive scientific conclusion changes across these control levels.
  • Figure 2: Architecture used in the study. A MovingEdge stimulus drives a graph-constrained recurrent flyvis network whose connectivity mask is given by either the empirical connectome, a naive random graph, or a degree-preserving random graph. A linear decoder reads pooled central-cell activity and predicts the 2D motion direction target. The key comparison in the paper is not between different decoders or optimizers, but between different graph masks under matched nodes, edges, self-loops, and parameterization.
  • Figure 3: Matched-step training curves under checkpoint-based initialization and a naive random null. The connectome appears to outperform the control across all three metrics.
  • Figure 4: Matched-step summary at 5 and 10 steps under weak controls. All three metrics favor the connectome prior to applying stricter controls.
  • Figure 5: Degree-preserving ensemble variability at 5 matched steps. Each point represents one sample-seed comparison. Loss differences remain near zero, activity differences are slightly negative, and runtime differences are modest.
  • ...and 1 more figure
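The degree-preserving null referenced in Figures 1, 2, and 5 is conventionally constructed by directed double-edge swaps, which shuffle connectivity while leaving every node's in- and out-degree unchanged. The sketch below is illustrative only; the paper's exact rewiring procedure, rejection criteria, and flyvis integration are not specified here, and the function name and parameters are assumptions.

```python
import random

def degree_preserving_rewire(edges, n_swaps, seed=0, max_attempts=100_000):
    """Rewire a directed edge list via double-edge swaps (illustrative sketch).

    Each accepted swap replaces edges (a, b) and (c, d) with (a, d) and
    (c, b), which preserves every node's in-degree and out-degree exactly.
    Swaps that would create self-loops or duplicate edges are rejected,
    so the graph stays simple.
    """
    rng = random.Random(seed)
    edges = list(edges)
    edge_set = set(edges)
    done = attempts = 0
    while done < n_swaps and attempts < max_attempts:
        attempts += 1
        i, j = rng.sample(range(len(edges)), 2)
        a, b = edges[i]
        c, d = edges[j]
        if a == d or c == b:                          # would create a self-loop
            continue
        if (a, d) in edge_set or (c, b) in edge_set:  # would duplicate an edge
            continue
        edge_set.difference_update({(a, b), (c, d)})
        edge_set.update({(a, d), (c, b)})
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges
```

By contrast, the naive null in the study matches only global counts (nodes, edges, self-loops), so comparing the two nulls isolates how much of any apparent connectome advantage is explained by the degree sequence alone.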