PortraitCraft: A Benchmark for Portrait Composition Understanding and Generation

Yuyang Sha, Zijie Lou, Youyun Tang, Xiaochao Qu, Haoxiang Li, Ting Liu, Luoqi Liu

Abstract

Portrait composition plays a central role in portrait aesthetics and visual communication, yet existing datasets and benchmarks mainly focus on coarse aesthetic scoring, generic image aesthetics, or unconstrained portrait generation. This limits systematic research on structured portrait composition analysis and controllable portrait generation under explicit composition requirements. In this paper, we introduce PortraitCraft, a unified benchmark for portrait composition understanding and generation. PortraitCraft is built on a dataset of approximately 50,000 curated real portrait images with structured multi-level supervision, including global composition scores, annotations over 13 composition attributes, attribute-level explanation texts, visual question answering pairs, and composition-oriented textual descriptions for generation. Based on this dataset, we establish two complementary benchmark tasks for composition understanding and composition-aware generation within a unified framework. The first evaluates portrait composition understanding through score prediction, fine-grained attribute reasoning, and image-grounded visual question answering, while the second evaluates portrait generation from structured composition descriptions under explicit composition constraints. We further define standardized evaluation protocols and provide reference baseline results with representative multimodal models. PortraitCraft provides a comprehensive benchmark for future research on fine-grained portrait understanding, interpretable aesthetic assessment, and controllable portrait generation.

Paper Structure

This paper contains 24 sections, 3 figures, and 3 tables.

Figures (3)

  • Figure 1: Overview of the PortraitCraft benchmark. PortraitCraft is built on 50,000 curated real portrait images and provides a unified framework for portrait composition understanding and generation. The benchmark includes two related tasks. Track 1 evaluates portrait composition understanding through overall score prediction, fine-grained attribute-level analysis, and image-based visual question answering. Track 2 evaluates portrait composition generation from structured composition descriptions under explicit composition constraints. Together, these two tasks connect composition analysis with composition-guided generation and support structured research on portrait composition.
  • Figure 2: Statistics of Track 1 composition annotations. (a) Smoothed density distributions of global composition scores for the training and test sets. The test set shows a broader distribution and slightly lower average scores, indicating increased evaluation difficulty. (b) Radar plot of mean scores across the 13 composition attributes. The variation across attributes reflects the heterogeneous characteristics of portrait composition and supports fine-grained evaluation beyond a single overall score.
  • Figure 3: Qualitative results on Track 2: Portrait Composition Generation. Two representative examples are shown. For each example, Original denotes the reference portrait image associated with the structured composition description, and Generated denotes the image produced by the model from the same composition specification. The comparison shows that the generated portraits follow key composition cues such as subject placement, spatial organization, and visual emphasis, demonstrating the feasibility of composition-aware portrait generation under the proposed benchmark setting.