
BraiNCA: brain-inspired neural cellular automata and applications to morphogenesis and motor control

Léo Pio-Lopez, Benedikt Hartl, Michael Levin

Abstract

Most Neural Cellular Automata (NCAs) in the literature share a common theme: they are based on regular grids with a Moore neighborhood (one-hop neighbours). They do not account for long-range connections or the more complex topologies found in the brain. In this paper, we introduce BraiNCA, a brain-inspired NCA with an attention layer, long-range connections, and complex topology. BraiNCA shows better robustness and faster learning than vanilla NCAs on two tasks, establishing that incorporating attention-based message selection together with explicit long-range edges can yield more sample-efficient and damage-tolerant self-organization than purely local, grid-based update rules. These results support the hypothesis that, for tasks requiring distributed coordination over extended spatial and temporal scales, the choice of interaction topology and the ability to dynamically route information affect the robustness and learning speed of an NCA. More broadly, BraiNCA provides a brain-inspired NCA formulation that preserves the decentralized local-update principle while better reflecting non-local connectivity patterns, making it a promising substrate for studying collective computation under biologically realistic network structure and for evolving cognitive substrates.

Paper Structure

This paper contains 19 sections, 11 equations, and 6 figures.

Figures (6)

  • Figure 1: Schematic illustration of the BraiNCA model. Every cell $i$ not only integrates its local neighborhood $\mathcal{N}_i$ via attention, but also long-range signals from sparsely connected distant cells $j,k\in\mathcal{L}_i$ for recurrent state updates. Flexible topologies are allowed, departing from NCAs with regular, e.g., square grid-layouts.
  • Figure 2: Flow-diagram of the BraiNCA architecture. Each cell’s state is updated by aggregating information from both local and long-range neighborhoods through attention-based context encoding. The combined signals are processed via a recurrent update function (GRU + MLP), enabling spatially distributed coordination in morphogenesis experiments and sensory-motor control of RL agents.
  • Figure 3: Morphogenesis conditions. Left: target pattern and cell labeling for the 3×3 neighborhood condition. Middle: target pattern for the 5×5 neighborhood condition. Right: example of long-range connections used in long-range variants. We made it sparse to mimic the scale-free network connectivity found in the brain.
  • Figure 4: LunarLander connectivity schematics for the four conditions. Vanilla: 16$\times$16 grid with quadrant action regions (NOOP/LEFT/MAIN/RIGHT). Vanilla+LR: same grid with additional long-range links under the selected long-range architecture. T-Shape: 24$\times$16 grid with four active 8$\times$8 zones in a T-shape. T-Shape+LR: same topology with patch-based cross-zone long-range messaging. Dots indicate individual cells in action regions (shaded blocks).
  • Figure 5: Long-range connections and size of neighborhood integration increase accuracy and speed of learning. A) Violin plots of the episodes to successes in the different conditions (3$\times$3 Vanilla, 3$\times$3 Long-Range, 5$\times$5 Vanilla, and 5$\times$5 Long-Range). B) Mean episodes to success in the different conditions. C) Relative speedup compared to the baseline (3$\times$3 Vanilla).
  • ...and 1 more figure
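
The update scheme sketched in Figures 1 and 2 — each cell attending over messages from its local and sparse long-range neighbours, then applying a recurrent (GRU-style) state update — can be illustrated with a minimal sketch. This is not the authors' implementation; the state dimension, ring-plus-random topology, and single-head dot-product attention are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8   # cell state dimension (hypothetical)
N = 16  # number of cells

# Illustrative topology: local ring neighbours plus a few sparse
# long-range edges, loosely mimicking scale-free connectivity.
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}
for i in rng.choice(N, size=4, replace=False):
    neighbors[int(i)].append(int(rng.integers(N)))  # long-range link

states = rng.standard_normal((N, D)) * 0.1

# Shared random parameters: attention projections and GRU-style gates.
Wq, Wk = rng.standard_normal((2, D, D)) * 0.1
Wz, Wr, Wh = rng.standard_normal((3, 2 * D, D)) * 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(states):
    """One synchronous update: attention-weighted context + GRU update."""
    new = np.empty_like(states)
    for i in range(N):
        nbrs = neighbors[i]
        q = states[i] @ Wq                    # query from cell i
        keys = states[nbrs] @ Wk              # keys from all neighbours
        logits = keys @ q / np.sqrt(D)        # scaled dot-product scores
        w = np.exp(logits - logits.max())
        w /= w.sum()                          # softmax attention weights
        ctx = w @ states[nbrs]                # attention-weighted context
        x = np.concatenate([states[i], ctx])
        z = sigmoid(x @ Wz)                   # update gate
        r = sigmoid(x @ Wr)                   # reset gate
        h = np.tanh(np.concatenate([r * states[i], ctx]) @ Wh)
        new[i] = (1 - z) * states[i] + z * h  # gated recurrent update
    return new

states = step(states)
```

Because every cell applies the same shared parameters and sees only its own neighbourhood, the rule stays decentralized and local in the NCA sense; the long-range edges simply widen which cells count as "neighbours".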