Emerging NeoHebbian Dynamics in Forward-Forward Learning: Implications for Neuromorphic Computing

Erik B. Terres-Escudero, Javier Del Ser, Pablo García-Bringas

TL;DR

This paper demonstrates that the Forward-Forward Algorithm (FFA) with a squared Euclidean goodness $G(\mathbf{z}_\ell) = \lVert \mathbf{z}_\ell \rVert_2^2$ is mathematically equivalent to a modulated neoHebbian learning rule, and validates this equivalence by comparing analog FFA with a Hebbian adaptation in spiking networks on MNIST. The results show that Hebbian FFA achieves competitive accuracy and yields sparse, highly separable latent spaces similar to those of the analog version, bridging biological learning rules with contemporary training approaches. The findings support the potential to deploy FFA-derived Hebbian learning on neuromorphic hardware, offering energy efficiency and fast, layer-local training, while suggesting future work on tooling and latent-space geometry for interpretability and instance-based explanations.

Abstract

Advances in neural computation have predominantly relied on the gradient backpropagation algorithm (BP). However, the recent shift towards non-stationary data modeling has highlighted the limitations of this heuristic, exposing that its adaptation capabilities are far from those seen in biological brains. Unlike BP, where weight updates are computed through a reverse error propagation path, Hebbian learning dynamics provide synaptic updates using only information within the layer itself. This has spurred interest in biologically plausible learning algorithms, hypothesized to overcome BP's shortcomings. In this context, Hinton recently introduced the Forward-Forward Algorithm (FFA), which employs local learning rules for each layer and has empirically proven its efficacy in multiple data modeling tasks. In this work, we argue that when employing a squared Euclidean norm as the goodness function driving the local learning, the resulting FFA is equivalent to a neo-Hebbian learning rule. To verify this result, we compare the training behavior of FFA in analog networks with its Hebbian adaptation in spiking neural networks. Our experiments demonstrate that both versions of FFA produce similar accuracy and latent distributions. The findings herein reported provide empirical evidence linking biological learning rules with currently used training algorithms, thus paving the way towards extrapolating the positive outcomes from FFA to Hebbian learning rules. Simultaneously, our results imply that analog networks trained under FFA could be directly applied to neuromorphic computing, leading to reduced energy usage and increased computational speed.
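To make the claimed equivalence concrete, the following is a minimal sketch of the underlying gradient computation, not the paper's own derivation: assume latent activity $\mathbf{z} = f(W\mathbf{x})$ with pre-activations $a_i = \sum_j w_{ij} x_j$, goodness $G(\mathbf{z}) = \lVert \mathbf{z} \rVert_2^2$, and the usual FFA fitness probability $p = \sigma(G(\mathbf{z}) - \theta)$ for a threshold $\theta$. Differentiating the per-layer loss $-\log p$ (positive samples) or $-\log(1-p)$ (negative samples) yields

$$\frac{\partial G}{\partial w_{ij}} = 2\, z_i\, f'(a_i)\, x_j, \qquad \Delta w_{ij} \propto m \cdot \big(z_i f'(a_i)\big) \cdot x_j, \qquad m = \begin{cases} 1 - p & \text{positive sample} \\ -p & \text{negative sample} \end{cases}$$

so the update factorizes into pre-synaptic activity $x_j$, post-synaptic activity $z_i f'(a_i)$, and a scalar modulator $m$: precisely the three-factor structure of a modulated neoHebbian rule. A minimal NumPy sketch of the resulting layer-local step follows (variable names are illustrative; with ReLU, $f'(a_i)$ folds into $z_i$ because $z_i = 0$ whenever $a_i \le 0$):

```python
import numpy as np

def ffa_local_step(W, x, theta=2.0, lr=1e-3, positive=True):
    """One layer-local FFA update, written as a modulated Hebbian rule."""
    a = W @ x                                       # pre-activations
    z = np.maximum(a, 0.0)                          # ReLU latent activity
    goodness = float(z @ z)                         # squared Euclidean goodness
    p = 1.0 / (1.0 + np.exp(-(goodness - theta)))   # fitness probability
    m = (1.0 - p) if positive else -p               # scalar modulator (third factor)
    W += lr * m * np.outer(z, x)                    # modulator * post * pre
    return W, goodness
```

Because the update uses only quantities available within the layer ($x$, $z$, and a scalar modulator), it requires no backward pass, which is what makes the rule a candidate for event-driven neuromorphic hardware.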

Paper Structure

This paper contains 13 sections, 8 equations, 2 figures, 2 tables.

Figures (2)

  • Figure 1: Latent vectors obtained from the test MNIST dataset for all the trained models from RQ1. Each row represents a distinct latent vector. White areas indicate lack of activity, while darker areas indicate greater activity. The Hoyer Index for each model is presented below the latent vectors.
  • Figure 2: t-SNE projection of the latent space obtained on the test MNIST dataset by all the trained models considered in the experiments for RQ1. Each point represents a projected latent vector, and each color represents a different class label. The Separability Index is reported below each subplot.
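Figure 1 summarizes sparsity with a per-model Hoyer Index. For reference, the sketch below computes the standard Hoyer sparsity measure (the paper's exact variant may differ), which equals 0 for a uniform vector and 1 for a one-hot vector; a 2-D embedding akin to Figure 2 can be obtained with scikit-learn's t-SNE, as in the commented lines:

```python
import numpy as np

def hoyer_index(v, eps=1e-12):
    """Hoyer sparsity: 0 for a uniform vector, 1 for a one-hot vector."""
    v = np.abs(np.asarray(v, dtype=float)).ravel()
    n = v.size
    ratio = v.sum() / (np.sqrt((v ** 2).sum()) + eps)   # L1 / L2
    return (np.sqrt(n) - ratio) / (np.sqrt(n) - 1.0)

# Example: a one-hot latent vector is maximally sparse.
print(hoyer_index([0.0, 0.0, 3.0, 0.0]))  # -> 1.0

# 2-D projection in the spirit of Figure 2 (latents: array of latent vectors):
# from sklearn.manifold import TSNE
# emb = TSNE(n_components=2, init="pca").fit_transform(latents)
```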