Emerging NeoHebbian Dynamics in Forward-Forward Learning: Implications for Neuromorphic Computing
Erik B. Terres-Escudero, Javier Del Ser, Pablo García-Bringas
TL;DR
This paper demonstrates that the Forward-Forward Algorithm (FFA), when driven by a squared Euclidean goodness function $G(\mathbf{z}_\ell) = \lVert \mathbf{z}_\ell \rVert_2^2$ on the activity $\mathbf{z}_\ell$ of layer $\ell$, is mathematically equivalent to a modulated neo-Hebbian learning rule, and validates this equivalence by comparing analog FFA with a Hebbian adaptation in spiking networks on MNIST. The results show that Hebbian FFA achieves competitive accuracy and yields sparse, highly separable latent spaces similar to those of the analog version, bridging biological learning rules with contemporary training approaches. The findings support deploying FFA-derived Hebbian learning on neuromorphic hardware, offering energy efficiency and fast, layer-local training, while suggesting future work on tooling and latent-space geometry for interpretability and instance-based explanations.
Abstract
Advances in neural computation have predominantly relied on the gradient backpropagation algorithm (BP). However, the recent shift towards non-stationary data modeling has highlighted the limitations of this heuristic, exposing that its adaptation capabilities are far from those seen in biological brains. Unlike BP, where weight updates are computed through a reverse error propagation path, Hebbian learning dynamics provide synaptic updates using only information within the layer itself. This has spurred interest in biologically plausible learning algorithms, hypothesized to overcome BP's shortcomings. In this context, Hinton recently introduced the Forward-Forward Algorithm (FFA), which employs local learning rules for each layer and has empirically proven its efficacy in multiple data modeling tasks. In this work we argue that when employing a squared Euclidean norm as the goodness function driving the local learning, the resulting FFA is equivalent to a neo-Hebbian learning rule. To verify this result, we compare the training behavior of FFA in analog networks with its Hebbian adaptation in spiking neural networks. Our experiments demonstrate that both versions of FFA produce similar accuracy and latent distributions. The findings reported herein provide empirical evidence linking biological learning rules with currently used training algorithms, thus paving the way towards extrapolating the positive outcomes of FFA to Hebbian learning rules. Simultaneously, our results imply that analog networks trained under FFA could be directly transferred to neuromorphic computing, leading to reduced energy usage and increased computational speed.
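To make the claimed equivalence concrete, the following is a minimal sketch (not the paper's implementation) of a single FFA layer update with squared-Euclidean goodness. The threshold `theta`, learning rate `lr`, ReLU activation, and logistic probability are illustrative assumptions; the point is that the resulting gradient factorises into a scalar modulator times a Hebbian outer product of post- and pre-synaptic activity, i.e. a three-factor neo-Hebbian form.

```python
import numpy as np

def ffa_layer_step(W, x, is_positive, theta=2.0, lr=0.03):
    """One layer-local Forward-Forward update (illustrative sketch).

    Goodness is the squared Euclidean norm of the layer activity,
    G = ||z||^2. Positive samples are pushed above the threshold
    `theta`, negative samples below it; no error signal crosses
    layer boundaries. Returns updated weights and pre-update goodness.
    """
    z = np.maximum(W @ x, 0.0)                      # ReLU layer activity
    G = float(z @ z)                                # goodness: squared L2 norm
    sign = 1.0 if is_positive else -1.0
    p = 1.0 / (1.0 + np.exp(-sign * (G - theta)))   # P(sample classified correctly)
    # The gradient of L = -log p factorises as
    #   (scalar modulator) * (post-synaptic z) * (pre-synaptic x)^T,
    # a modulated Hebbian (three-factor) update: dG/dW = 2 z x^T under ReLU.
    modulator = sign * (1.0 - p)
    W = W + lr * modulator * 2.0 * np.outer(z, x)
    return W, G
```

Repeated updates on a "positive" input raise its goodness while updates on a "negative" input suppress it, using only quantities available within the layer.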
