
Back to Basics: Revisiting ASR in the Age of Voice Agents

Geeyang Tay, Wentao Ma, Jaewon Lee, Yuzhi Tang, Daniel Lee, Weisu Yin, Dongming Shen, Silin Meng, Yi Zhu, Mu Li, Alex Smola

Abstract

Automatic speech recognition (ASR) systems have achieved near-human accuracy on curated benchmarks, yet they still fail in real-world voice agents under conditions that current evaluations do not systematically cover. Without diagnostic tools that isolate specific failure factors, practitioners cannot anticipate which conditions, in which languages, will cause what degree of degradation. We introduce WildASR, a multilingual (four-language) diagnostic benchmark sourced entirely from real human speech that factorizes ASR robustness along three axes: environmental degradation, demographic shift, and linguistic diversity. Evaluating seven widely used ASR systems, we find severe and uneven performance degradation; model robustness does not transfer across languages or conditions. Critically, models often hallucinate plausible but unspoken content under partial or degraded inputs, creating concrete safety risks for downstream agent behavior. Our results demonstrate that targeted, factor-isolated evaluation is essential for understanding and improving ASR reliability in production systems. In addition to the benchmark itself, we present three analytical tools that practitioners can use to guide deployment decisions.
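
Throughout WildASR, performance is reported as word error rate (WER) for English and character error rate (CER) for the CJK languages (see Figure 2). As a self-contained reference, here is a minimal Python sketch of both metrics; production scoring pipelines typically apply text normalization (casing, punctuation, number formats) before computing edit distance, which is omitted here.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (two-row DP)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def wer(ref: str, hyp: str) -> float:
    """Word error rate: edit distance over whitespace-split words."""
    words = ref.split()
    return edit_distance(words, hyp.split()) / max(len(words), 1)

def cer(ref: str, hyp: str) -> float:
    """Character error rate: edit distance over characters (used for CJK)."""
    return edit_distance(list(ref), list(hyp)) / max(len(ref), 1)

print(wer("turn on the kitchen lights", "turn the kitchen light on"))  # 0.6
print(cer("今天天气很好", "今天天气不错"))                                # ≈ 0.33
```

Only the tokenization differs between the two metrics, which is why character-level scoring is the natural choice for unsegmented CJK text.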

Figures (5)

  • Figure 1: Multilingual ASR robustness under real-world distribution shifts in WildASR. We evaluate seven ASR systems across four languages and aggregate performance over three out-of-distribution (OOD) dimensions. The horizontal line denotes the in-distribution clean-set model-average reference (5.7%), defined as the average error rate on the FLEURS test set across all models and languages. The sharp and uneven degradation across OOD conditions shows that human-parity performance on in-distribution data does not reliably transfer to real-world settings.
  • Figure 2: Error heatmap for seven ASR models on WildASR. Each cell shows the error rate (WER for English, CER for the CJK languages), with lighter colors indicating lower error. The patchy landscape reveals that ASR systems still exhibit large performance degradation and uneven robustness gaps.
  • Figure 3: ASR error dynamics under increasing reverberation for Qwen2-Audio on FLEURS (top: English, bottom: Chinese); a minimal sketch of one way to simulate such a reverberation sweep follows this list.
  • Figure 4: Prompt sensitivity of Gemini 2.5 Pro on demographic subsets across ten paraphrased prompts (EN/ZH); a sketch of this measurement also follows the list.
  • Figure 5: Accent Distribution in WildASR. Left: English. Right: Chinese.
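
The reverberation sweep behind Figure 3 can be approximated offline. The paper's exact augmentation pipeline is not specified here, so the sketch below uses a common stand-in for measured room impulse responses: exponentially decaying noise bursts whose RT60 (the time for the response to decay by 60 dB) increases across the sweep.

```python
import numpy as np
from scipy.signal import fftconvolve

SR = 16000  # sample rate (Hz)

def synthetic_rir(rt60, sr=SR):
    """Exponentially decaying noise burst as a crude room impulse response."""
    t = np.arange(int(rt60 * sr)) / sr
    decay = np.exp(-t * (3 * np.log(10) / rt60))  # amplitude reaches -60 dB at t = rt60
    rir = np.random.randn(t.size) * decay
    return rir / np.max(np.abs(rir))

def add_reverb(speech, rt60, sr=SR):
    """Convolve dry speech with a synthetic RIR and re-normalize to avoid clipping."""
    wet = fftconvolve(speech, synthetic_rir(rt60, sr))[: len(speech)]
    return wet / (np.max(np.abs(wet)) + 1e-9)

dry = np.random.randn(5 * SR)  # placeholder signal; load real speech in practice
for rt60 in (0.2, 0.4, 0.8, 1.6):
    wet = add_reverb(dry, rt60)
    # Transcribe `wet` with the ASR model under test and score WER/CER here.
```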
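
Figure 4's prompt-sensitivity analysis has a simple shape: transcribe the same audio under several semantically equivalent instructions and examine the spread of error rates. The sketch below is hypothetical throughout; `transcribe` stands in for whatever ASR/LLM client is actually used, and the prompts are illustrative rather than the paper's ten paraphrases. It reuses the `wer` function defined earlier.

```python
import statistics

def transcribe(audio_path: str, prompt: str) -> str:
    """Hypothetical stand-in for an instruction-following ASR call
    (e.g., a Gemini 2.5 Pro request); wire up the real client here."""
    raise NotImplementedError

# Illustrative paraphrases; the paper uses ten per language (EN/ZH).
PROMPTS = [
    "Transcribe the audio verbatim.",
    "Write down exactly what the speaker says.",
    "Produce a word-for-word transcript of this recording.",
]

def prompt_sensitivity(samples):
    """Mean and spread of WER across prompts, given (audio_path, reference) pairs."""
    per_prompt = []
    for prompt in PROMPTS:
        errs = [wer(ref, transcribe(audio, prompt)) for audio, ref in samples]
        per_prompt.append(sum(errs) / len(errs))
    return statistics.mean(per_prompt), statistics.pstdev(per_prompt)
```

A large standard deviation across paraphrases signals that reported accuracy depends as much on prompt wording as on the audio itself.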