Reconstructing Spiking Neural Networks Using a Single Neuron with Autapses

Wuque Cai, Hongze Sun, Quan Tang, Shifeng Mao, Zhenxing Wang, Jiayi He, Duo Chen, Dezhong Yao, Daqing Guo

Abstract

Spiking neural networks (SNNs) are promising for neuromorphic computing, but high-performing models still rely on dense multilayer architectures with substantial communication and state-storage costs. Inspired by autapses, we propose the time-delayed autapse SNN (TDA-SNN), a framework that reconstructs SNNs with a single leaky integrate-and-fire neuron and a prototype-learning-based training strategy. By reorganizing internal temporal states, TDA-SNN can realize reservoir, multilayer perceptron, and convolution-like spiking architectures within a unified framework. Experiments on sequential, event-based, and image benchmarks show competitive performance in reservoir and MLP settings, while convolutional results reveal a clear space-time trade-off. Compared with standard SNNs, TDA-SNN greatly reduces neuron count and state memory while increasing per-neuron information capacity, at the cost of additional temporal latency in extreme single-neuron settings. These findings highlight the potential of temporally multiplexed single-neuron models as compact computational units for brain-inspired computing.
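To make the core idea concrete, the following is a minimal sketch of a leaky integrate-and-fire neuron with a time-delayed autapse, i.e., a self-connection that feeds the neuron's own spike back after a fixed delay. This is an illustrative assumption of the mechanism the abstract describes, not the paper's exact TDA-LIF formulation; the function name, parameter values, and reset rule are all hypothetical.

```python
import numpy as np

def tda_lif(inputs, tau=2.0, v_th=1.0, w_auto=0.5, delay=3):
    """Sketch of one LIF neuron whose own spike, delayed by `delay`
    steps and scaled by `w_auto`, is fed back into its membrane.
    Parameters and reset behavior are illustrative assumptions."""
    T = len(inputs)
    v = 0.0
    spikes = np.zeros(T)
    for t in range(T):
        # Delayed autaptic feedback: the neuron's spike from
        # `delay` steps ago re-enters as synaptic input.
        feedback = w_auto * spikes[t - delay] if t >= delay else 0.0
        # Leaky integration of external input plus autaptic feedback.
        v = v * (1.0 - 1.0 / tau) + inputs[t] + feedback
        if v >= v_th:
            spikes[t] = 1.0
            v = 0.0  # hard reset after a spike (one common convention)
    return spikes

# Usage: drive the neuron with a constant current and inspect the
# spike train; the delayed self-feedback shifts later spike timings.
print(tda_lif(np.full(20, 0.6)))
```

The delayed self-connection is what lets a single neuron carry multiple internal temporal states: spikes emitted at different times re-enter the dynamics later, so the time axis can be reorganized to play the role that separate neurons play in a conventional network.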

Paper Structure

This paper contains 38 sections, 9 equations, 16 figures, and 4 tables.

Figures (16)

  • Figure 1: Evolution and folding principles of autapses. (a) An autapse in biological neurons and its evolutionary unfolding. (b) Distribution of evolutionary time in the TDA-LIF model and its mapping to reservoir computation.
  • Figure 2: Division of the evolutionary time in the TDA-LIF model and its mapping to an MLP.
  • Figure 3: Convolutional layer with the TDA-LIF model. (a) Mapping 10 nodes to a 3×3 convolution. (b) Equivalence of TDA-LIF to a single-channel convolutional layer.
  • Figure 4: Forward (a) and backward (b) processes in the TDA-SNN model.
  • Figure 5: Comparison of STD-SNN and TDA-SNN performance under the RC structure with different reservoir sizes on (a) DEAP and (b) SHD datasets.
  • ...and 11 more figures