LACON: Training Text-to-Image Model from Uncurated Data

Zhiyang Liang, Ziyu Wan, Hongyu Liu, Dong Chen, Qiu Shen, Hao Zhu, Dongdong Chen

Abstract

The success of modern text-to-image generation is largely attributed to massive, high-quality datasets. Currently, these datasets are curated through a filter-first paradigm that aggressively discards low-quality raw data on the assumption that it is detrimental to model performance. Is the discarded bad data truly useless, or does it hold untapped potential? In this work, we critically re-examine this question. We propose LACON (Labeling-and-Conditioning), a novel training framework that exploits the underlying uncurated data distribution. Instead of filtering, LACON repurposes quality signals, such as aesthetic scores and watermark probabilities, as explicit, quantitative condition labels. The generative model is then trained to learn the full spectrum of data quality, from bad to good. By learning the explicit boundary between high- and low-quality content, LACON achieves superior generation quality compared to baselines trained only on filtered data under the same compute budget, demonstrating the significant value of uncurated data.
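To make the labeling-and-conditioning idea concrete, the sketch below shows one way such a scheme could be implemented: per-sample quality signals are kept as scalar labels, embedded, and injected as an extra condition during denoiser training, so no sample is filtered out. This is a minimal illustration under assumptions, not the paper's implementation; the `QualityConditioner` module, the `lacon_style_loss` helper, and the denoiser signature are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QualityConditioner(nn.Module):
    """Embeds a vector of scalar quality labels (e.g. aesthetic score,
    watermark probability) into the denoiser's conditioning space.
    Hypothetical module; LACON's actual conditioning mechanism may differ."""
    def __init__(self, num_signals: int, embed_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_signals, embed_dim),
            nn.SiLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, scores: torch.Tensor) -> torch.Tensor:
        # scores: (batch, num_signals) raw quality labels; nothing is filtered
        return self.mlp(scores)

def lacon_style_loss(denoiser, conditioner, x0, text_emb, scores, alphas_cumprod):
    """One DDPM-style training step in which quality labels act as
    conditions rather than filters: every sample contributes to the loss."""
    b = x0.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a = alphas_cumprod[t].view(b, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * noise   # standard forward noising
    cond = conditioner(scores)                        # quality-label embedding
    pred = denoiser(x_t, t, text_emb, cond)           # assumed denoiser signature
    return F.mse_loss(pred, noise)
```

The design point of such a scheme is that the model sees the full quality spectrum with its labels, so at inference the same conditioner can be fed target values (e.g. a high $s_{\text{aes}}$) to steer sampling toward the high-quality region the model has learned to separate.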

Figures (18)

  • Figure 1: Comparison of generations produced by the baseline trained on full raw data and by LACON under different conditioning settings. From left to right: (1) model trained on the full raw dataset without quality conditioning; (2) LACON conditioned on a low aesthetic score $s_{\text{aes}}$; (3) LACON conditioned on a high aesthetic score $s_{\text{aes}}$; (4) LACON jointly conditioned on high aesthetic score $s_{\text{aes}}$ and high clarity score $s_{\text{cla}}$.
  • Figure 2: $\textbf{Overview of LACON}$. High-level training pipeline that repurposes quality signals as explicit attribute conditioning during training.
  • Figure 3: Visual comparison of LACON-S and LACON-A against the baselines, demonstrating that LACON can still achieve superior visual generation quality even when trained on the full image set without filtering.
  • Figure 4: (a) Comparison of images jointly conditioned on aesthetic score $s_{\text{aes}}$ and HSV-luma score $s_{\text{luma}}$. From left to right, the columns correspond to $s_{\text{aes}}=3$, $5$, $7$; from top to bottom, the rows correspond to $s_{\text{luma}}=0.3$, $0.4$, $0.5$. (b) Comparison of images jointly conditioned on clarity (Laplacian-variance) score $s_{\text{cla}}$ and entropy score $s_{\text{ent}}$. From left to right, the columns correspond to $s_{\text{cla}}=200$, $1000$, $2500$; from top to bottom, the rows correspond to $s_{\text{ent}}=5$, $6$, $7$. (An inference-time conditioning sketch follows this list.)
  • Figure 5: Evidence of the knowledge gap between Baseline-B (first row) and LACON (second row). From left to right, the concepts are: morpho butterfly, pastel portrait of an elderly woman, mineral terraces, wolf and camel.
  • ...and 13 more figures
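The conditioned comparisons above (Figures 1 and 4) suggest how inference-time steering would look: instead of the labels measured on training data, the sampler is given target values for each quality signal. Below is a minimal sketch reusing the hypothetical `QualityConditioner` from the earlier block; the `sampler` interface and `extra_cond` argument are assumptions, as the paper's sampler API is not shown here.

```python
import torch

@torch.no_grad()
def sample_with_quality(sampler, prompt_emb, conditioner, target_scores):
    """Steer generation toward a quality regime by conditioning on
    target labels, e.g. high aesthetic and clarity scores (Figure 4).
    `sampler` is a hypothetical text-to-image sampling function."""
    cond = conditioner(target_scores)        # embed the desired labels
    return sampler(prompt_emb, extra_cond=cond)

# Example target: s_aes = 7, s_cla = 2500 (values used in Figure 4).
targets = torch.tensor([[7.0, 2500.0]])
```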