EmoScene: A Dual-space Dataset for Controllable Affective Image Generation

Li He, Longtai Zhang, Wenqiang Zhang, Yan Wang, Lizhe Qi

Abstract

Text-to-image diffusion models have achieved high visual fidelity, yet precise control over scene semantics and fine-grained affective tone remains challenging. Human visual affect arises from the rapid integration of contextual meaning, including valence, arousal, and dominance, with perceptual cues such as color harmony, luminance contrast, texture variation, curvature, and spatial layout. However, current text-to-image models rarely represent affective and perceptual factors within a unified representation, which limits their ability to synthesize scenes with coherent and nuanced emotional intent. To address this gap, we construct EmoScene, a large-scale dual-space emotion dataset that jointly encodes affective dimensions and perceptual attributes, with contextual semantics provided as supporting annotations. EmoScene contains 1.2M images across more than three hundred real-world scene categories, each annotated with discrete emotion labels, continuous VAD values, perceptual descriptors, and textual captions. Multi-space analyses reveal how discrete emotions occupy the VAD space and how affect systematically correlates with scene-level perceptual factors. To benchmark EmoScene, we provide a lightweight reference baseline that injects dual-space controls into a frozen diffusion backbone via shallow cross-attention modulation, serving as a reproducible probe of affect controllability enabled by dual-space supervision.
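
To make the baseline mechanism concrete, the following is a minimal PyTorch sketch of shallow cross-attention modulation over a frozen backbone. It is an illustrative assumption rather than the paper's implementation: the module name DualSpaceAdapter, the 3-dimensional VAD input, the 16-dimensional perceptual descriptor, the number of control tokens, and the zero-initialized gate are all invented for the example.

```python
import torch
import torch.nn as nn

class DualSpaceAdapter(nn.Module):
    """Hypothetical sketch: inject affective (VAD) and perceptual controls
    into intermediate features of a frozen diffusion backbone through one
    shallow cross-attention layer. Names and dimensions are illustrative."""

    def __init__(self, vad_dim=3, percept_dim=16, ctx_dim=768,
                 n_tokens=4, n_heads=8):
        super().__init__()
        # Project the concatenated dual-space condition into a few extra
        # context tokens matching the backbone's feature width.
        self.to_tokens = nn.Sequential(
            nn.Linear(vad_dim + percept_dim, ctx_dim * n_tokens),
            nn.Unflatten(1, (n_tokens, ctx_dim)),
        )
        # Shallow cross-attention: image features attend to control tokens.
        self.attn = nn.MultiheadAttention(ctx_dim, n_heads, batch_first=True)
        # Zero-initialized gate, so at step 0 the adapter is an identity
        # map and the frozen backbone's behavior is untouched.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, hidden, vad, percept):
        # hidden:  (B, L, ctx_dim) intermediate backbone features
        # vad:     (B, 3) valence / arousal / dominance
        # percept: (B, percept_dim) perceptual descriptors (color, luminance, ...)
        cond = self.to_tokens(torch.cat([vad, percept], dim=-1))
        out, _ = self.attn(query=hidden, key=cond, value=cond)
        return hidden + self.gate * out  # residual modulation

# Toy usage: modulate dummy features; only the adapter would be trained,
# with the diffusion backbone kept frozen.
adapter = DualSpaceAdapter()
h = torch.randn(2, 77, 768)
vad = torch.tensor([[0.8, -0.3, 0.1], [0.8, -0.3, 0.1]])
percept = torch.randn(2, 16)
print(adapter(h, vad, percept).shape)  # torch.Size([2, 77, 768])
```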

Paper Structure

This paper contains 45 sections, 20 equations, 24 figures, and 3 tables.

Figures (24)

  • Figure 1: Multi-space attributes in the EmoScene Dataset: The proposed EmoScene dataset provides a dual-space emotion representation, jointly modeling affective, perceptual, and contextual dimensions. The total count of annotated attributes is shown in circled boxes. For each subspace, we display representative examples: affective cues include discrete emotions and their VAD distribution; perceptual cues cover color proportion, luminance, saturation, curvature, and visual complexity; contextual cues encompass human attributes, objects, scenes, and text-based descriptions (shown via word cloud). Together, these dimensions illustrate the rich cross-space diversity of EmoScene.
  • Figure 2: Dual-Space Annotation Pipeline. The pipeline proceeds from Step 1 to Step 11. Step 1 collects raw images from open photographic and artistic platforms. Step 2 filters low-quality samples using aesthetic and sharpness scores, and Step 3 verifies or corrects scene labels and image-text consistency. Steps 4 and 5 detect human subjects and objects and record their attributes and interactions. Step 6 generates short and long descriptions with human-aware and multimodal language models. Steps 7 and 8 assign discrete emotion labels and continuous VAD scores, forming the affective space. Steps 9 and 10 extract color statistics and structural features, forming the perceptual space. Finally, Step 11 performs human-in-the-loop review and active-learning updates. Implementation details for each step are given in the supplementary material.
  • Figure 3: Representative examples of EmoScene's dual-space annotations.
  • Figure 4: Structure of the affective space represented in the Valence-Arousal plane. Different emotions occupy distinct yet continuous regions, reflecting the organization of the affective space.
  • Figure 5: Perceptual color composition across emotion categories. Top: proportions of achromatic and chromatic components. Bottom: distributions over eight chromatic hues. (A minimal sketch of how such color statistics can be computed follows this list.)
  • ...and 19 more figures
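
The pipeline's Steps 9 and 10 and the Figure 5 analysis both rest on scene-level color statistics. The sketch below shows one plausible way to compute descriptors of that kind with NumPy and Pillow; the achromatic thresholds, the eight equal-width hue bins, and the function name perceptual_descriptors are assumptions made for illustration, and the paper's exact settings are given in its supplementary material.

```python
import numpy as np
from PIL import Image

# Assumed thresholds and binning, chosen for illustration only.
SAT_MIN, VAL_MIN = 0.15, 0.10  # below these, a pixel counts as achromatic
HUE_BINS = 8                   # eight chromatic hues, as in Figure 5

def perceptual_descriptors(path):
    """Compute simple scene-level color statistics for one image."""
    hsv = np.asarray(Image.open(path).convert("RGB").convert("HSV"),
                     dtype=np.float32) / 255.0
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    # Grays, near-black, and near-white pixels count as achromatic.
    achromatic = (s < SAT_MIN) | (v < VAL_MIN)
    # Hue histogram over chromatic pixels only, in equal-width bins.
    hue_hist, _ = np.histogram(h[~achromatic], bins=HUE_BINS, range=(0.0, 1.0))
    hue_prop = hue_hist / max(hue_hist.sum(), 1)
    return {
        "achromatic_prop": float(achromatic.mean()),
        "chromatic_prop": float(1.0 - achromatic.mean()),
        "hue_proportions": hue_prop.tolist(),  # eight chromatic hues
        "mean_luminance": float(v.mean()),
        "mean_saturation": float(s.mean()),
    }

# Toy usage (path is hypothetical): perceptual_descriptors("scene.jpg")
```

A real pipeline would run this over the whole corpus and aggregate the statistics per emotion category, which is what the top and bottom panels of Figure 5 summarize.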