EVALALIGN: Supervised Fine-Tuning Multimodal LLMs with Human-Aligned Data for Evaluating Text-to-Image Models
Zhiyu Tan, Xiaomeng Yang, Luozheng Qin, Mengping Yang, Cheng Zhang, Hao Li
TL;DR
EvalAlign tackles the lack of reliable, fine-grained evaluation metrics for text-to-image models by fine-tuning Multimodal Large Language Models (MLLMs) on a large, human-aligned dataset. It defines two evaluation dimensions—image faithfulness and text-image alignment—each assessed via 11 fine-grained skills and aggregated into two metrics, then averaged to yield EvalAlign. The authors construct a dedicated EvalAlign dataset with thousands of prompts, images, and annotated instructions, and show that SFT-tuned MLLMs correlate more closely with human judgments than existing metrics across 24 T2I models, including unseen ones, while remaining cost-efficient. This work provides practical, interpretable evaluation tools and data to guide the development and comparison of text-to-image models, with broader implications for evaluating generative content. It also emphasizes reproducibility and ethical considerations, offering a pathway toward more transparent benchmarking in multimodal generation.
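The aggregation described above (per-skill scores rolled up into two dimension metrics, then averaged into a single EvalAlign score) can be sketched as follows. This is a minimal illustration only; the function name, score scale, and the use of unweighted means are assumptions, not details taken from the paper.

```python
from statistics import mean

def evalalign_score(skill_scores: dict[str, list[float]]) -> float:
    """Hypothetical aggregation sketch: average per-skill scores within each
    dimension, then average the two dimension scores into one EvalAlign score.
    The dictionary keys and the unweighted mean are assumptions for illustration."""
    faithfulness = mean(skill_scores["image_faithfulness"])
    alignment = mean(skill_scores["text_image_alignment"])
    return (faithfulness + alignment) / 2

# Example with made-up per-skill scores on an assumed numeric scale:
scores = {
    "image_faithfulness": [4.0, 3.5, 5.0],
    "text_image_alignment": [4.5, 4.0],
}
print(evalalign_score(scores))  # prints 4.208333...
```

A weighted mean over skills would slot in the same way if some skills matter more than others; the paper's actual weighting, if any, is not specified here.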
Abstract
The recent advancements in text-to-image generative models have been remarkable. Yet the field lacks evaluation metrics that accurately reflect the performance of these models; in particular, it lacks fine-grained metrics that can guide model optimization. In this paper, we propose EvalAlign, a metric characterized by its accuracy, stability, and fine granularity. Our approach leverages the capabilities of Multimodal Large Language Models (MLLMs) pre-trained on extensive data. We develop evaluation protocols that focus on two key dimensions: image faithfulness and text-image alignment. Each protocol comprises a set of detailed, fine-grained instructions linked to specific scoring options, enabling precise manual scoring of the generated images. We then supervised fine-tune (SFT) the MLLM to align with human evaluative judgments, resulting in a robust evaluation model. Our evaluation across 24 text-to-image generation models demonstrates that EvalAlign not only provides superior metric stability but also aligns more closely with human preferences than existing metrics, confirming its effectiveness and utility in model assessment.
