
On associative neural networks for sparse patterns with huge capacities

Matthias Löwe, Franck Vermet

Abstract

Generalized Hopfield models with higher-order or exponential interaction terms are known to have substantially larger storage capacities than the classical quadratic model. On the other hand, associative memories for sparse patterns, such as the Willshaw and Amari models, already outperform the classical Hopfield model in the sparse regime. In this paper we combine these two mechanisms. We introduce higher-order versions of sparse associative memory models and study their storage capacities. For fixed interaction order $n$, we obtain storage capacities of polynomial order in the system size. When the interaction order is allowed to grow logarithmically with the number of neurons, this yields super-polynomial capacities. We also discuss an analogue in the Gripon--Berrou architecture which was formulated for non-sparse messages (see \cite{griponc}). Our results show that the capacity increase caused by higher-order interactions persists in the sparse setting, although the precise storage scale depends on the underlying architecture.


Paper Structure

This paper contains 7 sections, 6 theorems, and 169 equations.

Key Result

Theorem 3.1

Consider Amari's model with fixed interaction order $n$, defined by the dynamics \eqref{multdyn} and the Hebbian rule \eqref{multhebb}, and choose the sparsity level for some $0<\gamma<1$. Then there exists $\alpha_0=\alpha_0(n,\gamma)>0$ such that for every $\alpha<\alpha_0$, the stated choice of the number of stored patterns ensures the stability of every fixed stored pattern $\xi^\mu$.
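The flavor of an order-$n$ sparse associative memory of this type can be illustrated with a small NumPy sketch. This is a minimal illustration only, not the paper's exact model: the parameter values, the threshold heuristic, and the deletion-noise setup are assumptions, and the order-$n$ Hebbian tensor is applied implicitly via pattern overlaps rather than stored explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, n = 200, 50, 3       # neurons, patterns, interaction order (assumed values)
gamma = 0.5
p = N ** (-gamma)          # sparsity: each neuron is active with probability N^{-gamma}

# Sparse 0/1 patterns xi^mu, mu = 1..M
xi = (rng.random((M, N)) < p).astype(float)

def retrieve(S, theta, steps=5):
    """Parallel dynamics for an order-n Amari-type memory.
    The local field uses the overlap form
        h_i = sum_mu xi^mu_i * (xi^mu . S)^(n-1),
    which is equivalent to applying the order-n Hebbian tensor
    without ever materializing it."""
    for _ in range(steps):
        overlaps = xi @ S                  # shape (M,): overlap with each pattern
        h = xi.T @ overlaps ** (n - 1)     # shape (N,): higher-order local field
        S = (h >= theta).astype(float)     # threshold dynamics for 0/1 neurons
    return S

# Corrupt pattern 0 by deleting a quarter of its active bits
mu = 0
noisy = xi[mu].copy()
active = np.flatnonzero(noisy)
noisy[active[: len(active) // 4]] = 0.0

# Heuristic threshold (an assumption, tuned to the uncorrupted self-overlap)
theta = 0.5 * xi[mu].sum() ** (n - 1)
out = retrieve(noisy, theta)
```

Because each pattern has only about $Np \approx 14$ active neurons here, the higher-order field of the correct pattern (of size roughly $(Np)^{n-1}$) dominates the crosstalk from the other sparse patterns, which is the mechanism the theorem quantifies.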

Theorems & Definitions (15)

  • Theorem 3.1
  • Corollary 3.2
  • Theorem 3.3
  • Theorem 3.4
  • Remark 3.5
  • Corollary 3.6
  • Theorem 3.7
  • proof : Proof of Theorem \ref{theo:amari_fixed_n}
  • proof : Proof of Corollary \ref{cor:willshaw_fixed_n}
  • proof : Proof of Theorem \ref{thm:GB_fixed_n}
  • ...and 5 more