
Parallelized Hierarchical Connectome: A Spatiotemporal Recurrent Framework for Spiking State-Space Models

Po-Han Chiang

Abstract

This work presents the Parallelized Hierarchical Connectome (PHC), a general framework that upgrades temporal-only State-Space Models (SSMs) into spatiotemporal recurrent networks. Conventional SSMs achieve high-speed sequence processing through parallel scans, yet are limited to temporal recurrence without lateral or feedback interactions within a single timestep. PHC maps the diagonal SSM core to a shared Neuron Layer and inter-neuronal communication to a shared Synapse Layer, where neurons are partitioned into hierarchical regions governed by the connectome topology. A Multi-Transmission Loop enables intra-slice spatial recurrence, allowing signals to propagate across the hierarchical connectome within each temporal window while preserving $O(\log T)$ parallelism. This framework enables integration of neuro-physical priors typically intractable for standard SSMs, including adaptive leaky integrate-and-fire dynamics, Dale's Law, short-term plasticity, and reward-modulated spike-timing-dependent plasticity. The framework is instantiated as PHCSSM, the first model to unify recurrent spiking neural network dynamics with diagonal SSM parallelism while enforcing all five biological constraints and learnable lateral connections within a fully parallelizable training pipeline. Empirical results on physiological benchmarks from the UEA multivariate time-series archive demonstrate that PHCSSM achieves performance competitive with state-of-the-art SSMs while reducing parameter complexity from $\Theta(D^2 L)$ for $L$-layer stacked architectures to $\Theta(D^2)$. These findings suggest that biologically grounded inductive biases offer a principled route to parameter-efficient sequence modeling, opening diagonal SSMs to spatiotemporal recurrence and enabling fully parallelizable recurrent spiking neural network training.
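To make the parallelism claim concrete, the diagonal recurrence underlying such SSMs, $h_t = a \odot h_{t-1} + u_t$, can be evaluated for all $t$ in $O(\log T)$ parallel steps because each step is an affine map and affine-map composition is associative. The sketch below (not from the paper; names and the Hillis–Steele scan variant are illustrative choices) demonstrates this in NumPy and agrees with the sequential recurrence:

```python
import numpy as np

def diagonal_ssm_scan(a, u):
    """Parallel prefix scan for the diagonal recurrence h_t = a * h_{t-1} + u_t.

    Each timestep carries an affine map h -> A_t h + B_t with diagonal A_t.
    Composing (A1, B1) then (A2, B2) gives (A1*A2, A2*B1 + B2), which is
    associative, so a Hillis-Steele doubling scan yields all prefixes in
    O(log T) parallel steps (each loop iteration is one vectorized step).
    """
    T, D = u.shape
    A = np.broadcast_to(a, (T, D)).copy()   # per-step diagonal decay
    B = u.copy()                            # per-step drive
    step = 1
    while step < T:
        # Compose the map at t-step into the map at t (RHS is fully
        # materialized before assignment, so in-place update is safe).
        B[step:] = A[step:] * B[:-step] + B[step:]
        A[step:] = A[step:] * A[:-step]
        step *= 2
    return B  # B[t] equals h_t under the initial condition h_{-1} = 0
```

Applying the combined prefix map to the zero initial state leaves only the `B` component, which is why the scan's `B` array directly holds the hidden-state trajectory.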


Paper Structure

This paper contains 20 sections, 19 equations, 3 figures, 8 tables.

Figures (3)

  • Figure 1: Comparison of conventional stacked SSMs and the Parallelized Hierarchical Connectome (PHC) framework. (A) Conventional SSMs process sequences through stacked layers connected via unidirectional feedforward MLPs, corresponding to $L$ independent diagonal state-transition matrices and $L$ dense inter-layer weight matrices, supporting only temporal recurrence with no lateral or feedback interactions. (B) The PHC framework partitions neurons into hierarchical regions (R0, R1) with biologically constrained E/I populations. Within each region, neurons interact via full local microcircuits; across regions, both feedforward (R0$\to$R1) and feedback (R1$\to$R0) projections are supported through the Hierarchical Connectome Matrix ($W_{\mathrm{syn}} \odot M_{\mathrm{topo}}$), illustrated here under the bidirectional configuration.
  • Figure 2: Structural isomorphism between stacked SSMs and the PHC framework. Left: A conventional $L$-layer stacked SSM, where $L$ independent diagonal state-transition matrices (Layer 0, Layer 1) are interleaved with $L$ independent dense MLPs (MLP1, MLP2), each with non-shared parameters, forming a strictly unidirectional feedforward pathway. Right: The PHC framework collapses this vertical stack into a single spatial plane. Each diagonal layer maps to a region (R0, R1) within a shared Neuron Layer whose parameters are reused across all regions (Parallelized State Update). Each inter-layer MLP maps to a sub-block of the Hierarchical Connectome Matrix, which consolidates all inter-neuronal communication into a single shared Synapse Layer with biologically constrained connectivity (Dale's Law, topology mask).
  • Figure 3: Detailed signal flow of the PHCSSM forward pass. The input sequence is projected via a linear encoder and gated by an input mask restricting sensory drive to designated populations. Within the Multi-Transmission Loop, the NL performs three sequential diagonal parallel scans (membrane potential, adaptive threshold, and refractory suppression) followed by pointwise spike generation (ALIF). The SL applies a synaptic delay buffer, modulates spikes via Tsodyks--Markram STP (two additional parallel scans), and transmits the result through the biologically constrained weight matrix $W_{\mathrm{struct}}$ ($= W_{\mathrm{syn}} \odot M_{\mathrm{topo}}$). Convergence is assessed via the Cauchy criterion after each transmission; upon exit, R-STDP updates synaptic weights using genuine binary spike timing. The output membrane voltage is projected to logits via a linear readout.
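The Hierarchical Connectome Matrix described in Figures 1 and 2, $W_{\mathrm{struct}} = W_{\mathrm{syn}} \odot M_{\mathrm{topo}}$, can be sketched as follows. This is a purely illustrative construction (function and variable names are hypothetical, not from the paper): Dale's Law is enforced by giving every presynaptic neuron a fixed sign for all its outgoing weights, and the topology mask zeroes out connections absent from the connectome:

```python
import numpy as np

def structural_weights(W_syn, cell_sign, M_topo):
    """Hypothetical sketch of W_struct = W_syn (Dale-constrained) * M_topo.

    Under the convention y = W @ x, column j holds the outgoing weights of
    presynaptic neuron j, so Dale's Law fixes the sign of each column:
    cell_sign[j] = +1 for excitatory neurons, -1 for inhibitory ones.
    The learnable part is the magnitude; the sign and topology are fixed.
    """
    magnitudes = np.abs(W_syn)                     # learnable magnitudes
    return (magnitudes * cell_sign[None, :]) * M_topo
```

A matching column-wise sign check is a simple way to verify the constraint holds after any weight update.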
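The Multi-Transmission Loop in Figure 3 iterates lateral transmission within a temporal slice until a Cauchy criterion signals convergence. The toy sketch below illustrates only that control flow; it is not the paper's forward pass (the real model uses ALIF spike generation and STP rather than the stand-in `tanh` nonlinearity used here, and all names are hypothetical):

```python
import numpy as np

def multi_transmission_loop(W_struct, drive, tol=1e-4, max_iters=50):
    """Toy intra-slice spatial recurrence with a Cauchy stopping criterion.

    Repeatedly transmits activity through the structural weight matrix and
    exits once successive iterates differ by less than `tol` in norm,
    mirroring the convergence check applied after each transmission.
    """
    x = np.zeros(W_struct.shape[0])
    for _ in range(max_iters):
        x_new = np.tanh(W_struct @ x + drive)   # stand-in for NL + SL pass
        if np.linalg.norm(x_new - x) < tol:     # Cauchy criterion: exit loop
            return x_new
        x = x_new
    return x
```

When the effective transmission map is a contraction, the iterates approach a fixed point of the intra-slice dynamics, which is what makes the early-exit convergence test meaningful.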