Stochastic Thermodynamics for Autoregressive Generative Models: A Non-Markovian Perspective
Takahiro Sagawa
Abstract
Autoregressive generative models -- including Transformers, recurrent neural networks, classical Kalman filters, state space models, and Mamba -- generate sequences by sampling each output from a distribution conditioned on a deterministic summary of the past, which makes the observed process genuinely non-Markovian. We develop a general theoretical framework based on stochastic thermodynamics for this class of architectures and define their entropy production, which can be efficiently estimated from sampled trajectories, without exponential sampling cost, despite the non-Markovian nature of the observed dynamics. As a proof-of-concept experiment on a large language model (LLM), we evaluate the token-level and sentence-level entropy production of a pre-trained Transformer-based model, GPT-2. We also demonstrate the framework in the linear Gaussian case, where the model reduces to the Kalman innovation representation and the entropy production admits an analytical expression. We further show that the entropy production decomposes exactly into non-negative per-step contributions defined via retrospective inference, each of which splits into two information-theoretically meaningful components: a compression loss and a model mismatch. Our results establish a bridge between stochastic thermodynamics and modern generative models, and provide a starting point for quantifying irreversibility in a broad class of highly non-Markovian processes such as LLMs.
