EVALALIGN: Supervised Fine-Tuning Multimodal LLMs with Human-Aligned Data for Evaluating Text-to-Image Models

Zhiyu Tan, Xiaomeng Yang, Luozheng Qin, Mengping Yang, Cheng Zhang, Hao Li

TL;DR

EvalAlign tackles the lack of reliable, fine-grained evaluation metrics for text-to-image models by fine-tuning Multimodal Large Language Models (MLLMs) on a large, human-aligned dataset. It defines two evaluation dimensions—image faithfulness and text-image alignment—each assessed via 11 fine-grained skills and aggregated into two metrics, then averaged to yield EvalAlign. The authors construct a dedicated EvalAlign dataset with thousands of prompts, images, and annotated instructions, and show that SFT-tuned MLLMs correlate more closely with human judgments than existing metrics across 24 T2I models, including unseen ones, while remaining cost-efficient. This work provides practical, interpretable evaluation tools and data to guide the development and comparison of text-to-image models, with broader implications for evaluating generative content. It also emphasizes reproducibility and ethical considerations, offering a pathway toward more transparent benchmarking in multimodal generation.
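The aggregation described above is straightforward to picture in code. Below is a minimal sketch, assuming each skill yields a numeric score and that dimension scores are simple averages; the skill names and their split across the two dimensions are illustrative placeholders, not the paper's exact taxonomy:

```python
from statistics import mean

# Hypothetical skill taxonomy: the paper defines 11 fine-grained skills across
# the two dimensions, but the names and split below are assumed for illustration.
FAITHFULNESS_SKILLS = ["body", "face", "hands", "object", "common_sense"]
ALIGNMENT_SKILLS = ["count", "color", "material", "shape", "spatial", "action"]


def dimension_score(skill_scores: dict[str, float], skills: list[str]) -> float:
    """Average the per-skill scores belonging to one evaluation dimension."""
    return mean(skill_scores[s] for s in skills)


def evalalign_score(skill_scores: dict[str, float]) -> float:
    """Fold the 11 skill scores into the two dimension metrics, then average
    the two metrics to produce the overall EvalAlign score."""
    faithfulness = dimension_score(skill_scores, FAITHFULNESS_SKILLS)
    alignment = dimension_score(skill_scores, ALIGNMENT_SKILLS)
    return (faithfulness + alignment) / 2
```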

Abstract

The recent advancements in text-to-image generative models have been remarkable. Yet, the field suffers from a lack of evaluation metrics that accurately reflect the performance of these models, in particular fine-grained metrics that can guide model optimization. In this paper, we propose EvalAlign, a metric characterized by its accuracy, stability, and fine granularity. Our approach leverages the capabilities of Multimodal Large Language Models (MLLMs) pre-trained on extensive data. We develop evaluation protocols that focus on two key dimensions: image faithfulness and text-image alignment. Each protocol comprises a set of detailed, fine-grained instructions linked to specific scoring options, enabling precise manual scoring of the generated images. We apply supervised fine-tuning (SFT) to the MLLM to align it with human evaluative judgments, resulting in a robust evaluation model. Our evaluation across 24 text-to-image generation models demonstrates that EvalAlign not only provides superior metric stability but also aligns more closely with human preferences than existing metrics, confirming its effectiveness and utility in model assessment.
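To make the protocol concrete, here is a minimal sketch of what one annotated instruction record could look like when formatted for SFT. The field names, image path, and option wording are assumptions for illustration, not the paper's released data schema:

```python
# One hypothetical SFT training record pairing an evaluation instruction (with
# its discrete scoring options) and the option chosen by human annotators.
# Field names, the path, and option wording are assumed, not the released schema.
sft_example = {
    "image": "generated/model_x/00042.png",  # assumed path layout
    "conversation": [
        {
            "role": "user",
            "content": (
                "<image>\nDoes the hand structure in the image look "
                "anatomically correct?\n"
                "Options: (A) No visible issues (B) Minor distortions "
                "(C) Severe distortions"
            ),
        },
        # The target response is the human-annotated option, so fine-tuning
        # pulls the MLLM's scoring toward human judgments.
        {"role": "assistant", "content": "(B) Minor distortions"},
    ],
}
```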

Paper Structure

This paper contains 28 sections, 4 equations, 5 figures, and 13 tables.

Figures (5)

  • Figure 1: Overview of EvalAlign. We collect, filter, and clean prompts from various sources to ensure their quantity, quality, and diversity. We use 8 state-of-the-art text-to-image models to generate the images for evaluation. These synthesized images are then delegated to human annotators for thorough multi-turn annotation. Finally, the annotated data are used to fine-tune an MLLM to align it with fine-grained human preferences, thereby adapting the model to evaluate text-to-image generation on image faithfulness and text-image alignment.
  • Figure 2: Statistics of prompts for evaluating text-image alignment. Prompts in our text-image alignment benchmark cover a broad range of concepts commonly used in text-to-image generation.
  • Figure 3: Statistics of prompts for evaluating image faithfulness. Prompts in our image faithfulness benchmark cover a broad range of objects and categories related to image faithfulness.
  • Figure 4: Demonstration of our user interface. The specially designed interface presents one sample at a time to the annotators and incorporates four distinct icons to signify its various functionalities.
  • Figure 5: Qualitative results on the EvalAlign dataset and benchmark. EvalAlign consistently aligns with fine-grained human preferences in terms of image faithfulness and text-image alignment, while other methods fail to do so.