
Iterate to Differentiate: Enhancing Discriminability and Reliability in Zero-Shot TTS Evaluation

Shengfan Shen, Di Wu, Xingchen Song, Dinghao Zhou, Liumeng Xue, Meng Meng, Jian Luan, Shuai Wang

Abstract

Reliable evaluation of modern zero-shot text-to-speech (TTS) models remains challenging. Subjective tests are costly and hard to reproduce, while objective metrics often saturate, failing to distinguish SOTA systems. To address this, we propose Iterate to Differentiate (I2D), an evaluation framework that recursively synthesizes speech using the model's own outputs as references. Higher-quality models exhibit greater resilience to the distributional shift induced by iterative synthesis, resulting in slower performance degradation. I2D exploits this differential degradation to amplify performance gaps and reveal robustness. By aggregating objective metrics across iterations, I2D improves discriminability and alignment with human judgments, increasing system-level SRCC from 0.118 to 0.464 for UTMOSv2. Experiments on 11 models across Chinese, English, and emotion datasets demonstrate that I2D enables more reliable automated evaluation for zero-shot TTS.
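The iterative evaluation loop described above can be sketched as follows. This is a minimal illustration, not the paper's released code: `synthesize` and `score` are hypothetical stand-ins for a zero-shot TTS model and an objective metric (e.g. UTMOSv2), here replaced by toy numeric placeholders so the control flow of I2D is visible.

```python
def synthesize(ref_wav, ref_text, target_text):
    # Hypothetical TTS call. The toy body degrades the "wav" slightly on
    # each pass to mimic the distributional shift of iterative synthesis.
    return ref_wav * 0.9

def score(wav):
    # Hypothetical objective metric in [0, 1]; here it just returns the
    # toy "wav" value so degradation is directly observable.
    return wav

def i2d_evaluate(ref_wav, ref_text, target_text, n_iters=10):
    """Recursively re-synthesize, feeding each output back in as the new
    reference, then aggregate the per-iteration objective scores."""
    scores = []
    wav = ref_wav
    for _ in range(n_iters):
        wav = synthesize(wav, ref_text, target_text)
        scores.append(score(wav))
        # After the first synthesis, the reference is the previous output
        # and the reference text becomes the target text (cf. Figure 1).
        ref_text = target_text
    # Aggregating across iterations amplifies gaps between models:
    # a more robust model degrades more slowly, so its mean stays higher.
    return sum(scores) / len(scores)

agg = i2d_evaluate(1.0, "reference text", "target text", n_iters=10)
```

A model whose placeholder decay factor were closer to 1.0 would yield a higher aggregated score, which is the differential-degradation signal I2D exploits.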

Paper Structure

This paper contains 21 sections, 5 figures, 5 tables, 1 algorithm.

Figures (5)

  • Figure 1: The overall workflow of our evaluation. The dashed arrows indicate that, after the first synthesis, the reference wav and reference text are updated using the target wav and target text. The objective metric EMO_F1 is applied only to the Emotion dataset.
  • Figure 2: SRCC between objective metrics and subjective dimensions at the utterance and system levels (1st and 10th iterations). Utterance-level SRCC is calculated from all individual sample scores, while system-level SRCC is derived from model rankings. Specifically, we compare SIM with Spk. Consistency, 1-CER with Content Acc., and UTMOSv2/DNSMOS with Naturalness.
  • Figure 3: Bar chart of objective metrics for all models on the Chinese dataset at the 1st and 10th iterations. The dashed lines indicate the values for real audio.
  • Figure 4: System-level SRCC between aggregated UTMOSv2 scores and human ratings of first-iteration speech, computed under different maximum iteration numbers.
  • Figure 5: Cross-model iterative evaluation curves of CosyVoice3-RL and F5-TTS. Solid lines denote the original trajectories without reference swapping, while dashed lines represent the trajectories after exchanging reference audio at the 6th iteration.