From Silicon to Spikes: System-Wide Efficiency Gains via Exact Event-Driven Training in Neuromorphic Computing

Abstract

Spiking neural networks (SNNs) promise orders-of-magnitude efficiency gains by communicating with sparse, event-driven spikes rather than dense numerical activations. However, most training pipelines either rely on surrogate-gradient approximations or require dense time-step simulations, both of which conflict with the memory, bandwidth, and scheduling constraints of neuromorphic hardware and blur precise spike timing. We introduce an analytical, event-driven learning framework that computes exact gradients with respect to synaptic weights, programmable transmission delays, and adaptive firing thresholds: three orthogonal temporal controls that jointly shape SNN accuracy and robustness. By propagating error signals only at spike events and integrating subthreshold dynamics in closed form, the method eliminates the need to store membrane-potential traces and reduces on-chip memory traffic by up to 24x in our experiments. Across multiple sequential event-stream benchmarks, the framework improves accuracy by up to 7% over a strong surrogate-gradient baseline, while sharpening spike-timing precision and enhancing resilience to injected hardware noise. These findings indicate that aligning neuron dynamics and training dynamics with event-sparse execution can simultaneously improve functional performance and resource efficiency in neuromorphic systems.
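To make the closed-form, event-driven ingredients concrete: between events, a leaky integrate-and-fire (LIF) membrane obeys V(t) = V(t0)·e^{-(t - t0)/τ}, so the state can be advanced from one spike to the next analytically, with no intermediate time steps simulated or stored; and at a threshold crossing V(t*) = θ, implicit differentiation yields exact spike-time sensitivities such as ∂t*/∂w = -(∂V/∂w)/(∂V/∂t) evaluated at t*, a standard identity in event-driven exact-gradient methods (the abstract does not specify this paper's particular derivation). The Python sketch below illustrates only the event-driven forward pass under these assumed LIF dynamics, with instantaneous synaptic jumps and a hard reset; the names lif_event_update, tau, and v_th are illustrative, not from the paper.

```python
import math

def lif_event_update(v, t_prev, t_event, w, tau=20.0, v_th=1.0):
    """Advance an assumed LIF membrane potential from t_prev to the next
    input spike at t_event via the closed-form decay V(t) = V(t0)*exp(-(t-t0)/tau),
    then apply the weighted synaptic jump. No intermediate trace is stored."""
    v = v * math.exp(-(t_event - t_prev) / tau)  # exact subthreshold solution
    v += w                                       # instantaneous synaptic input
    fired = v >= v_th
    if fired:
        v = 0.0                                  # hard reset after an output spike
    return v, fired

# Process a sparse event stream: state is touched only at spike times.
events = [(1.2, 0.4), (3.5, 0.5), (3.9, 0.3)]    # hypothetical (time_ms, weight) pairs
v, t_prev = 0.0, 0.0
for t, w in events:
    v, fired = lif_event_update(v, t_prev, t, w)
    t_prev = t
    if fired:
        print(f"output spike at t = {t:.1f} ms")
```

Because state is updated only at the input events themselves, nothing resembling a dense membrane-potential trace is ever materialized, which is the mechanism behind the memory-traffic reduction the abstract reports.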