LPC-SM: Local Predictive Coding and Sparse Memory for Long-Context Language Modeling

Keqin Xie

Abstract

Most current long-context language models still rely on attention to handle both local interaction and long-range state, which leaves relatively little room to test alternative decompositions of sequence modeling. We propose LPC-SM, a hybrid autoregressive architecture that separates local attention, persistent memory, predictive correction, and run-time control within the same block, and we use Orthogonal Novelty Transport (ONT) to govern slow-memory writes. We evaluate a 158M-parameter model in three stages spanning base language modeling, mathematical continuation, and 4096-token continuation. Removing mHC raises the Stage-A final LM loss from 12.630 to 15.127, while adaptive sparse control improves the Stage-B final LM loss from 12.137 to 10.787 relative to a matched fixed-ratio continuation. The full route remains stable at sequence length 4096, where Stage C ends with final LM loss 11.582 and improves the delayed-identifier diagnostic from 14.396 to 12.031 in key cross-entropy. Taken together, these results show that long-context autoregressive modeling can be organized around a broader division of labor than attention alone.

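The abstract's central mechanism, Orthogonal Novelty Transport, is formalized in the appendix (Definition A.1: ONT projection, novelty, and transport), but the equations are not reproduced on this page. Below is a minimal sketch of one plausible reading: project a candidate write onto the span of the current memory rows, keep the orthogonal residual as the "novelty", and transport that residual into a slot. The function name ont_write, the QR-based projection, and the weakest-slot transport rule are all illustrative assumptions, not the paper's definition.

    import numpy as np

    def ont_write(memory: np.ndarray, candidate: np.ndarray, eps: float = 1e-8) -> np.ndarray:
        """Sketch of an ONT-style slow-memory write (hypothetical, not the paper's API).

        memory:    (S, D) array of S slow-memory slots in R^D, assumed S <= D with full row rank.
        candidate: (D,) proposed write vector.
        """
        # Orthonormal basis for the row space of the current memory.
        q, _ = np.linalg.qr(memory.T)            # q: (D, S), columns span the memory rows
        projection = q @ (q.T @ candidate)       # part of the candidate already representable
        novelty = candidate - projection         # part orthogonal to everything stored

        if np.linalg.norm(novelty) < eps:
            return memory                        # nothing new: skip the write
        # Transport rule (one arbitrary choice among many): fold the novelty
        # into the weakest slot, leaving the other slots untouched.
        slot = int(np.argmin(np.linalg.norm(memory, axis=1)))
        updated = memory.copy()
        updated[slot] += novelty
        return updated

Because the stored residual is orthogonal to every pre-write row, memory @ novelty is numerically zero after the projection; a non-interference constraint of this kind is plausibly what Theorem A.5 ("ONT is the constrained minimizer") makes precise.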

Paper Structure

This paper contains 18 sections, 6 theorems, 47 equations, 3 figures, and 4 tables.

Key Result

Proposition A.3

For every $c, m \in E$ and $\alpha \in \mathbb{R}$, …
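The statement above is truncated in this rendering. For orientation only, a standard identity matching the title and the quantifiers $c, m \in E$, $\alpha \in \mathbb{R}$ (a hedged guess at the content, not the paper's wording, and assuming $E$ is an inner-product space with $m \neq 0$) splits $c$ into an aligned part and an orthogonal novelty part, and decomposes the gap to any scaled $\alpha m$ accordingly:

$$
c = \underbrace{\tfrac{\langle c, m\rangle}{\langle m, m\rangle}\, m}_{\text{aligned}} + \underbrace{\Big(c - \tfrac{\langle c, m\rangle}{\langle m, m\rangle}\, m\Big)}_{\text{novelty, } \perp\, m},
\qquad
\lVert c - \alpha m\rVert^2 = \Big\lVert c - \tfrac{\langle c, m\rangle}{\langle m, m\rangle}\, m\Big\rVert^2 + \Big(\tfrac{\langle c, m\rangle}{\langle m, m\rangle} - \alpha\Big)^2 \lVert m\rVert^2 .
$$

The right-hand identity is minimized over $\alpha$ exactly at the aligned coefficient, which is the usual route from such a decomposition to a constrained-minimizer result like Theorem A.5.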

Figures (3)

  • Figure 1: Overall LPC-SM stack.
  • Figure 2: A single LPC-SM block.
  • Figure 3: Orthogonal Novelty Transport for slow-memory writes.

Theorems & Definitions (14)

  • Definition A.1: ONT projection, novelty, and transport
  • Definition A.2: Comparison target and feasible set
  • Proposition A.3: Basic decomposition and aligned gap
  • Proof
  • Proposition A.4: Feasibility
  • Proof
  • Theorem A.5: ONT is the constrained minimizer
  • Proof
  • Corollary A.6: Uniqueness
  • Proof
  • ...and 4 more