
Variational Neurons in Transformers for Language Modeling

Yves Ruffenach

Abstract

Transformers for language modeling usually rely on deterministic internal computation, with uncertainty expressed mainly at the output layer. We introduce variational neurons into Transformer feed-forward computation so that uncertainty becomes part of the internal computation itself. Concretely, we replace deterministic feed-forward units with local variational units based on EVE while preserving the overall Transformer backbone. We evaluate this design in compact next-token language-modeling settings. We compare deterministic and variational variants with both predictive and probabilistic criteria. Alongside negative log-likelihood, perplexity and accuracy, we analyze calibration, conditional variance, mutual information and latent-usage statistics. The resulting picture is clear. Variational neurons integrate stably into Transformers, preserve strong predictive performance and produce informative uncertainty signals. The experiments also show that task quality, useful depth and internal stability are distinct properties. These results establish variational Transformers as a practical form of uncertainty-aware language modeling. They show that Transformers can predict with an explicit internal structure of uncertainty, which supports stronger probabilistic evaluation and a more informative analysis of model behavior.


Paper Structure

This paper contains 20 sections, 16 equations, 3 figures, 4 tables.

Figures (3)

  • Figure 1: Standard and variational Transformer blocks. The overall Transformer backbone is unchanged. Only the feed-forward computation is replaced by a variational block that infers a local latent distribution, samples a latent state and projects it to model space before the residual update.
  • Figure 2: Validation cross-entropy (CE) across epochs under the matched protocol of approximately 19,925 raw examples. EVE improves steadily across the full 5-epoch run, whereas DET reaches its best point at epoch 3 and then rises.
  • Figure 3: Sampling-based epistemic metrics for EVE and the matched deterministic baseline. Mutual information, conditional Monte Carlo variance, top-1 Monte Carlo flip rate, and epistemic ratio are clearly non-zero for EVE, whereas they remain at zero or near-zero values for DET under repeated deterministic forward evaluation. The vertical axis is logarithmic; exact zeros for DET are displayed with a small floor value purely for plotting.
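The block structure described in the Figure 1 caption can be sketched in a few lines. The following is a minimal, hypothetical NumPy illustration (not the paper's implementation; all weight names and dimensions are assumptions): the unit infers a diagonal Gaussian over a local latent state, draws a reparameterized sample, projects it back to model space and applies the residual update.

```python
import numpy as np

rng = np.random.default_rng(0)

def variational_ffn(x, W_mu, W_logvar, W_out, sample=True):
    """Hypothetical variational feed-forward unit (sketch of Figure 1).

    x: (d_model,) token representation entering the block.
    """
    mu = W_mu @ x          # mean of the local latent distribution
    logvar = W_logvar @ x  # log-variance of the local latent distribution
    if sample:
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        eps = rng.standard_normal(mu.shape)
        z = mu + np.exp(0.5 * logvar) * eps
    else:
        z = mu  # deterministic mean pass, analogous to a DET-style forward
    # Project the latent state back to model space and apply the residual update
    return x + W_out @ z

# Toy dimensions, purely illustrative
d_model, d_latent = 8, 4
x = rng.standard_normal(d_model)
W_mu = 0.1 * rng.standard_normal((d_latent, d_model))
W_logvar = 0.1 * rng.standard_normal((d_latent, d_model))
W_out = 0.1 * rng.standard_normal((d_model, d_latent))
y = variational_ffn(x, W_mu, W_logvar, W_out)
```

Repeated calls with `sample=True` give different outputs for the same input, which is what makes the sampling-based uncertainty analyses possible; with `sample=False` the unit collapses to a deterministic map.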
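The sampling-based metrics of Figure 3 can be computed from a stack of stochastic forward passes. The sketch below shows one standard way to obtain mutual information (entropy of the mean predictive distribution minus the mean per-sample entropy), conditional Monte Carlo variance and the top-1 flip rate; the function name and input layout are assumptions, not the paper's code.

```python
import numpy as np

def epistemic_metrics(probs):
    """Sampling-based epistemic metrics (sketch of Figure 3).

    probs: (S, V) array of predictive distributions from S stochastic
    forward passes over a vocabulary of size V.
    """
    mean_p = probs.mean(axis=0)
    # Entropy helper; small constant guards against log(0)
    H = lambda p: -np.sum(p * np.log(p + 1e-12), axis=-1)
    # Mutual information: entropy of the mean minus mean of the entropies
    mi = H(mean_p) - H(probs).mean()
    # Conditional Monte Carlo variance, averaged over the vocabulary
    mc_var = probs.var(axis=0).mean()
    # Top-1 flip rate: fraction of samples whose argmax disagrees with
    # the argmax of the mean predictive distribution
    flip_rate = (probs.argmax(axis=1) != mean_p.argmax()).mean()
    return mi, mc_var, flip_rate
```

For a deterministic model, repeated forward passes return identical distributions, so all three quantities are exactly zero, matching the DET behavior described in the caption.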