Associative Memory System via Threshold Linear Networks

Qin He, Jing Shuang Li

Abstract

Humans learn and form memories in stochastic environments. Auto-associative memory systems model these processes by storing patterns and later recovering them from corrupted versions. Here, memories are learned by associating each pattern with an attractor in a latent space. After learning, when (possibly corrupted) patterns are presented to the system, latent dynamics facilitate retrieval of the appropriate uncorrupted pattern. In this work, we propose a novel online auto-associative memory system. In contrast to existing works, our system supports sequential memory formation and provides formal guarantees of robust memory retrieval via region-of-attraction analysis. We use a threshold-linear network as latent space dynamics in combination with an encoder, decoder, and controller. We show in simulation that the memory system successfully reconstructs patterns from corrupted inputs.
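The inference phase described above (encode a corrupted pattern, let the threshold-linear latent dynamics settle into an attractor, decode the attractor) can be sketched numerically. This is a minimal toy illustration only: the encoder `E`, decoder `D`, weights `W`, and bias `b` below are illustrative placeholders, not the paper's learned mappings.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

rng = np.random.default_rng(0)
pattern = np.array([1.0, 0.0, 1.0, 0.0])          # stored pattern (toy)
noisy = pattern + 0.1 * rng.standard_normal(4)    # corrupted input

E = np.eye(4)                                      # placeholder encoder
D = 1.5 * np.eye(4)                                # placeholder decoder (rescales attractor)
W = -0.5 * (np.ones((4, 4)) - np.eye(4))           # mutual inhibition between units
b = pattern.copy()                                 # bias selects the attractor's support

# Euler integration of the TLN dynamics dx/dt = -x + [Wx + b]_+
x = E @ noisy
for _ in range(5000):
    x = x + 0.01 * (-x + relu(W @ x + b))

reconstructed = D @ x                              # approx. recovers the clean pattern
```

With these placeholder choices, the inhibitory dynamics suppress the noise-activated units and the state settles onto the attractor whose support matches the stored pattern.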

Paper Structure

This paper contains 12 sections, 10 theorems, 58 equations, 4 figures, 1 algorithm.

Key Result

Proposition 1

For CSTLNs, the support of an equilibrium $x^{e}$ coincides with the support of the cell containing it.
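In the standard threshold-linear network model, the state evolves as $\dot{x} = -x + [Wx + b]_+$, and the support of an equilibrium is the set of coordinates that remain active. A minimal numerical sketch of finding an equilibrium and its support (the 3-neuron weights `W` and inputs `b` are illustrative choices, not taken from the paper):

```python
import numpy as np

def tln_step(x, W, b, dt=0.01):
    """One Euler step of the TLN dynamics dx/dt = -x + [Wx + b]_+."""
    return x + dt * (-x + np.maximum(W @ x + b, 0.0))

# Symmetric mutual inhibition; unit 2 receives a weaker input.
W = np.array([[ 0.0, -0.5, -0.5],
              [-0.5,  0.0, -0.5],
              [-0.5, -0.5,  0.0]])
b = np.array([1.0, 1.0, 0.2])

x = np.array([0.9, 0.1, 0.1])
for _ in range(5000):
    x = tln_step(x, W, b)

# Active coordinates at the equilibrium: here units 0 and 1 survive,
# while the weakly driven unit 2 is shut off by inhibition.
support = np.flatnonzero(x > 1e-6)
```

For this example the state converges to $x^e = (2/3,\, 2/3,\, 0)$: on the support, $x^e$ solves the linear fixed-point equations, while the off-support input $-0.5(x_0 + x_1) + 0.2 < 0$ is clipped to zero, consistent with the proposition's support characterization.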

Figures (4)

  • Figure 3: Proposed auto-associative memory system. A. During learning, a new pattern is presented to the model, and a switching signal activates the controller to induce an attractor switch and form new mappings (memories) in the encoder and decoder. B. Inference phase. The encoder maps a noisy pattern through the learned mappings to a target state. The feedback controller drives the latent dynamics toward the target state and is then turned off so the TLN settles into the corresponding attractor. The decoder then maps the attractor to the corresponding uncorrupted pattern.
  • Figure 4: Local sector bound for shifted system.
  • Figure 5: Illustrative example of a forward invariant set in a 4D CSTLN projected onto 2D via PCA. The forward invariant set (bounded by the red dashed line) contains two ROAs corresponding to two different attractors; the two ROAs are separated by the black dashed line.
  • Figure 6: Simulation of a 7-dimensional TLN-based memory system and noise-robustness analysis on the MNIST dataset. A. Trajectories visualized via projection into 2D space. Dashed lines show trajectories during learning; solid lines show trajectories during inference. The background colors show the true ROA from simulations. B. TLN firing rates over time. The black dashed line marks the end of the second inference pattern (image of the digit 8). C. The system reconstructs the noisy input pattern. D. Comparison of input-noise robustness for the two methods over 108 encoders and decoders corresponding to different learning sequences randomly sampled from MNIST. E. Empirical simulations show that the system begins to fail to converge to the correct attractor when the noise level exceeds approximately twice the bound found by the LP method, where the bound is taken as the median value in panel D.

Theorems & Definitions (23)

  • Definition 1
  • Definition 2
  • Definition 3
  • Proposition 1
  • Proof
  • Definition 4
  • Lemma 1
  • Proof
  • Lemma 2: Triple-support cell saddle
  • Proof
  • ...and 13 more