
Deep Networks Favor Simple Data

Weyl Lu, Chenjie Hao, Yubei Chen

Abstract

Estimated density is often interpreted as indicating how typical a sample is under a model. Yet deep models trained on one dataset can assign higher density to simpler out-of-distribution (OOD) data than to in-distribution test data. We refer to this behavior as the OOD anomaly. Prior work typically studies this phenomenon within a single architecture, detector, or benchmark, implicitly assuming certain canonical densities. We instead separate the trained network from the density estimator built from its representations or outputs. We introduce two estimators, Jacobian-based estimators and autoregressive self-estimators, making density analysis applicable to a wide range of models. Applying this perspective to models including iGPT, PixelCNN++, Glow, score-based diffusion models, DINOv2, and I-JEPA, we find a striking regularity that goes beyond the OOD anomaly: lower-complexity samples receive higher estimated density, while higher-complexity samples receive lower estimated density. This ordering appears within a test set and across OOD pairs such as CIFAR-10 and SVHN, and remains highly consistent across independently trained models. To quantify these orderings, we use Spearman rank correlation and find striking agreement both across models and with external complexity metrics. Even when trained only on the lowest-density (most complex) samples, or even on a single such sample, the resulting models still rank simpler images as higher density. These observations lead us beyond the original OOD anomaly to a more general conclusion: deep networks consistently favor simple data. Our goal is not to close this question but to define and visualize it more clearly. We broaden its empirical scope and show that it appears across architectures, objectives, and density estimators.
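The Jacobian-based estimator mentioned above rests on the change-of-variables formula: if an invertible map z = f(x) pushes the data onto a base density, then log p(x) = log p_z(f(x)) + log|det J_f(x)|. A minimal sketch of this idea, assuming a standard-normal base density and a generic invertible map (the function names and the finite-difference Jacobian are illustrative, not the paper's implementation):

```python
import numpy as np

def log_density_jacobian(f, x, eps=1e-5):
    """Log-density of x under a standard-normal base density pushed
    through an invertible map z = f(x), via change of variables:
        log p(x) = log N(f(x); 0, I) + log |det J_f(x)|.
    The Jacobian is approximated by central finite differences."""
    x = np.asarray(x, dtype=float)
    z = f(x)
    d = x.size
    J = np.empty((d, d))
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        # i-th Jacobian column: sensitivity of f to perturbing x[i]
        J[:, i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    _, logdet = np.linalg.slogdet(J)
    log_base = -0.5 * np.sum(z**2) - 0.5 * d * np.log(2.0 * np.pi)
    return log_base + logdet

# Sanity check against a map with a known answer: z = 2x has
# log|det J| = d * log 2, so the formula can be verified analytically.
x = np.array([0.3, -1.2])
got = log_density_jacobian(lambda v: 2.0 * v, x)
z = 2.0 * x
expected = -0.5 * np.sum(z**2) - np.log(2.0 * np.pi) + 2 * np.log(2.0)
```

In practice f would be a trained network's (approximately invertible) representation map and the Jacobian would come from automatic differentiation rather than finite differences; the point here is only the density formula itself.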


Paper Structure

This paper contains 16 sections, 7 equations, 8 figures.

Figures (8)

  • Figure 1: Density rankings on the CIFAR-10 test set. For each model, CIFAR-10 test images are sorted by estimated density (high → low) and visualized through stratified samples along this ranking. Top: base models trained on the full training set. Middle (LDT10): models retrained on the lowest-density training subset (the lowest 10% of training samples). Bottom (LDT1): models trained on a single lowest-density example. Across all settings, even single-sample training, the ranking consistently progresses from simple to complex images. All base models identify the same training image (CIFAR-10 id 29920) as the lowest-density sample; the corresponding LDT10 subset examples and the LDT1 example are shown at the bottom right.
  • Figure 2: Spearman correlations between rankings induced by the base models and by external complexity proxies. The lower triangle is shown. Positive values mean two methods rank CIFAR-10 images in a similar order, from simple / high-density to complex / low-density. The row / column labeled "Glow+JPEG complexity" is the JPEG-based correction inspired by Serra et al. (serra2020input).
  • Figure 3: Spearman correlations between rankings induced by different training settings within each architecture family. Base uses the full CIFAR-10 training set, LDT10 uses only the lowest-density 10% of the training set, LDT1 uses only the single lowest-density image, and UT is the untrained network. Each LDT setting is shown at two widely separated checkpoints to reduce the chance of reporting a transient undertrained state.
  • Figure 4: PixelCNN++ after training on a single lowest-density image. Unlike iGPT and Glow, the single-sample PixelCNN++ ranking no longer follows the global simplicity ranking from the base model. The failure is already visible quantitatively in the middle panel of Fig. \ref{fig:ldt} and is shown qualitatively here.
  • Figure 5: DINOv2 on bicubically upsampled CIFAR-10. Compared with the other models, the density bins display a much weaker monotone progression from simple to complex. This is consistent with the weak DINOv2 row / column in Fig. \ref{fig:basecorr}.
  • ...and 3 more figures
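The comparisons in Figures 2 and 3 need two ingredients: an external complexity proxy for each image and a rank-correlation measure between orderings. Both can be sketched on toy data. The compression-based proxy below uses stdlib zlib purely as a self-contained stand-in for the JPEG-based metric of Serra et al.; the toy images and all function names are illustrative:

```python
import zlib
import numpy as np

def compression_complexity(img_uint8):
    """Compressed byte length as a crude complexity proxy.
    (The paper's external proxy follows a JPEG-based metric;
    zlib stands in here so the sketch needs only the stdlib.)"""
    return len(zlib.compress(img_uint8.tobytes(), 9))

def total_variation(img_uint8):
    """Mean absolute difference between neighboring pixels:
    a second, independent complexity proxy."""
    im = img_uint8.astype(float)
    return (np.abs(np.diff(im, axis=0)).mean()
            + np.abs(np.diff(im, axis=1)).mean())

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the
    ranks (ties ignored, which suffices for this sketch)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# Toy "images": a flat 32x32 patch with increasing amounts of noise,
# so true complexity increases along the list.
rng = np.random.default_rng(0)
images = [
    np.clip(128 + rng.normal(0.0, s, size=(32, 32, 3)), 0, 255).astype(np.uint8)
    for s in np.linspace(0.0, 80.0, 20)
]
complexity = [compression_complexity(im) for im in images]
tv = [total_variation(im) for im in images]
rho = spearman(complexity, tv)  # rank agreement between the two proxies
```

A high rho between two independent proxies on the same images is exactly the kind of agreement the figure matrices visualize, with model-induced density rankings taking the place of one (or both) proxies.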