Entropy, Disagreement, and the Limits of Foundation Models in Genomics

Maxime Rochkoulets, Lovro Vrček, Mile Šikić

Abstract

Foundation models in genomics have shown mixed success compared to their counterparts in natural language processing, yet the reasons for their limited effectiveness remain poorly understood. In this work, we investigate entropy as a fundamental factor limiting the capacity of such models to learn from their training data and develop foundational capabilities. We train ensembles of models on text and on DNA sequences and analyze their predictions, static embeddings, and empirical Fisher information flow. We show that the high entropy of genomic sequences, from the point of view of unseen-token prediction, leads to near-uniform output distributions, disagreement across models, and unstable static embeddings, even for models matched in architecture, training procedure, and data. We then demonstrate that models trained on DNA concentrate Fisher information in their embedding layers, seemingly failing to exploit inter-token relationships. Our results suggest that self-supervised training from sequences alone may not be applicable to genomic data, calling into question the assumptions underlying current methodologies for training genomic foundation models.
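
To make the disagreement measurement concrete, the following is a minimal sketch in Python of the quantity plotted in Figure 1: the expected Jensen-Shannon distance between the next-token distributions of two ensemble members after top-$p$ truncation. It assumes precomputed per-sample probability vectors; the function names (`top_p_truncate`, `expected_js_distance`) and the Dirichlet toy data are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon  # d_JS = sqrt(JS divergence)

def top_p_truncate(p, top_p):
    """Keep the smallest set of tokens whose mass reaches top_p, renormalize."""
    order = np.argsort(p)[::-1]                    # tokens by descending prob
    cum = np.cumsum(p[order])
    # Keep token k if the mass accumulated *before* k is still below top_p.
    keep = np.concatenate(([True], cum[:-1] < top_p))
    q = np.zeros_like(p)
    q[order[keep]] = p[order[keep]]
    return q / q.sum()

def expected_js_distance(probs_i, probs_j, top_p=1.0):
    """Mean JS distance over samples; probs_* have shape (n_samples, vocab)."""
    return float(np.mean([
        jensenshannon(top_p_truncate(pi, top_p),
                      top_p_truncate(pj, top_p), base=2)
        for pi, pj in zip(probs_i, probs_j)
    ]))

# Toy usage: two models emitting near-uniform distributions over the
# 4-letter DNA alphabet, mimicking the high-entropy regime described above.
rng = np.random.default_rng(0)
P_i = rng.dirichlet(np.full(4, 50.0), size=1000)
P_j = rng.dirichlet(np.full(4, 50.0), size=1000)
print(expected_js_distance(P_i, P_j, top_p=0.9))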

Paper Structure

This paper contains 13 sections, 2 equations, 3 figures, and 3 tables.

Figures (3)

  • Figure 1: Jensen-Shannon distance $\mathbb{E}_{{\textnormal{x}} \sim \mathcal{D}}\left[d_{\mathrm{JS}}(P_i, P_j)\right]$ between models of the same ensemble as a function of the top-$p$ mass kept. Values computed over 100,000 samples unseen during training.
  • Figure 2: Normalized layer-wise aggregation of empirical Fisher information in text and DNA models (a sketch of this estimate follows the list). Estimate computed over 100,000 samples unseen during training.
  • Figure 3: Detailed normalized empirical Fisher information content per model, for each layer. Estimate computed over 100,000 samples unseen during training.
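
As referenced in the caption of Figure 2, below is a minimal sketch, assuming a PyTorch language model trained with a cross-entropy objective, of how a layer-wise empirical Fisher estimate can be aggregated and normalized. The empirical Fisher uses squared gradients of the data log-likelihood; for brevity this sketch squares per-batch gradients as a cheap proxy for the per-sample average. All names here (`layerwise_empirical_fisher`, the coarse layer bucketing by parameter-name prefix) are illustrative assumptions, not the authors' implementation.

```python
import torch
from collections import defaultdict

def layerwise_empirical_fisher(model, data_loader, device="cpu"):
    """Return {layer_name: fraction of total empirical Fisher mass}."""
    fisher = defaultdict(float)
    model.to(device).eval()
    for inputs, targets in data_loader:
        inputs, targets = inputs.to(device), targets.to(device)
        model.zero_grad(set_to_none=True)
        logits = model(inputs)                      # (batch, seq, vocab)
        # Empirical Fisher: gradients of the log-likelihood of the *observed*
        # tokens (the data labels), not of samples drawn from the model.
        loss = torch.nn.functional.cross_entropy(
            logits.flatten(0, -2), targets.flatten()
        )
        loss.backward()
        for name, param in model.named_parameters():
            if param.grad is not None:
                bucket = name.split(".")[0]         # coarse per-layer bucket
                fisher[bucket] += param.grad.detach().pow(2).sum().item()
    total = sum(fisher.values()) or 1.0
    return {bucket: mass / total for bucket, mass in fisher.items()}
```

Normalizing by the total mass, as in the last line, yields the per-layer fractions that Figures 2 and 3 compare between text and DNA models; under the paper's findings, DNA models would place most of this mass in the embedding buckets.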