ChronoMagic-Bench: A Benchmark for Metamorphic Evaluation of Text-to-Time-lapse Video Generation

Shenghai Yuan, Jinfa Huang, Yongqi Xu, Yaoyang Liu, Shaofeng Zhang, Yujun Shi, Ruijie Zhu, Xinhua Cheng, Jiebo Luo, Li Yuan

TL;DR

ChronoMagic-Bench introduces a metamorphic, time-lapse-focused benchmark for text-to-video generation, addressing the lack of metrics for metamorphic amplitude and temporal coherence. It provides 1,649 prompts across four physical-prior categories, two automatic metrics (MTScore and CHScore), and a large-scale ChronoMagic-Pro dataset with 460k time-lapse clips to enable comprehensive evaluation of both open- and closed-source T2V models. The study demonstrates that most models struggle to produce high-amplitude metamorphic content and coherent temporal evolution, highlighting the necessity of specialized benchmarks and metrics. Collectively, ChronoMagic-Bench and ChronoMagic-Pro lay the groundwork for standardized evaluation, model selection guidance, and data resources to advance metamorphic time-lapse video generation.

Abstract

We propose a novel text-to-video (T2V) generation benchmark, ChronoMagic-Bench, to evaluate the temporal and metamorphic capabilities of T2V models (e.g., Sora and Lumiere) in time-lapse video generation. In contrast to existing benchmarks that focus on the visual quality and textual relevance of generated videos, ChronoMagic-Bench focuses on a model's ability to generate time-lapse videos with large metamorphic amplitude and strong temporal coherence. The benchmark probes the physics, biology, and chemistry capabilities of T2V models through free-form text queries. To this end, ChronoMagic-Bench introduces 1,649 prompts and real-world videos as references, categorized into four major types of time-lapse videos: biological, human-created, meteorological, and physical phenomena, which are further divided into 75 subcategories. This categorization comprehensively evaluates a model's capacity to handle diverse and complex transformations. To align the benchmark with human preferences, we introduce two new automatic metrics, MTScore and CHScore, to evaluate the videos' metamorphic attributes and temporal coherence. MTScore measures the metamorphic amplitude, reflecting the degree of change over time, while CHScore assesses temporal coherence, ensuring the generated videos maintain logical progression and continuity. Based on ChronoMagic-Bench, we conduct comprehensive manual evaluations of ten representative T2V models, revealing their strengths and weaknesses across different categories of prompts and providing a thorough evaluation framework that addresses current gaps in video generation research. Moreover, we create the large-scale ChronoMagic-Pro dataset, containing 460k high-quality pairs of 720p time-lapse videos and detailed captions, ensuring high physical pertinence and large metamorphic amplitude. [Homepage](https://pku-yuangroup.github.io/ChronoMagic-Bench/).

Paper Structure

This paper contains 35 sections, 11 equations, 17 figures, 6 tables, 1 algorithm.

Figures (17)

  • Figure 1: Example of four major categories from ChronoMagic-Bench. These categories fully encompass the physical world, allowing our benchmark and dataset to empower the community.
  • Figure 2: Categories of time-lapse videos: First, we classify the videos into four major categories (biological, human-created, meteorological, physical), which are further subdivided into 75 subcategories (e.g., animal, parking, beach, melting).
  • Figure 3: The word cloud and word count range of the prompts in the ChronoMagic-Bench. It shows that prompts mainly describe videos with large metamorphic amplitude and long persistence.
  • Figure 4: Video clip statistics in ChronoMagic-Pro. The dataset includes a diverse range of categories, clip durations, and caption lengths, with most videos at 720p resolution.
  • Figure 5: Qualitative comparison of different T2V generation methods on the text-to-video task in ChronoMagic-Bench. Most models cannot follow instructions to generate time-lapse videos.
  • ...and 12 more figures