Decoder-only Architecture for Streaming End-to-end Speech Recognition
Emiru Tsunoo, Hayato Futami, Yosuke Kashiwagi, Siddhant Arora, Shinji Watanabe
TL;DR
This work presents a decoder-only approach to streaming end-to-end ASR: compact prompts are extracted from a blockwise speech subnetwork and fed sequentially to a decoder. It introduces two prompt sources (CTC prompts and block context prompts) and a blockwise prompt-generation scheme with masking, plus a novel prefix-prompt training strategy that bridges the gap between training and streaming inference. On LibriSpeech and Switchboard, the method achieves notable WER gains, including an 8% relative improvement on the LibriSpeech test-other set at competitive latency, outperforming encoder–decoder and RNN-T baselines. By combining prompt-based decoding with selective score fusion and robust training, the approach offers a practical, efficient alternative for online ASR.
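To make the decoding scheme concrete, below is a minimal, hypothetical PyTorch sketch of the blockwise streaming loop: a speech subnetwork encodes each incoming block, non-blank CTC frames are kept as compact CTC prompts, the last frame of the block serves as a block context prompt, and a causal decoder-only LM then emits tokens for that block. The module choices (a GRU subnetwork, Transformer layers with a causal mask), the `BLANK`/`EOB` token ids, and all hyperparameters are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

BLANK, EOB = 0, 1  # assumed CTC blank id and end-of-block token id

class StreamingDecoderOnlyASR(nn.Module):
    def __init__(self, feat_dim=80, d_model=256, vocab=1000):
        super().__init__()
        # Blockwise speech subnetwork (stand-in: a unidirectional GRU).
        self.subnet = nn.GRU(feat_dim, d_model, batch_first=True)
        self.ctc_head = nn.Linear(d_model, vocab)     # frame-level CTC logits
        self.tok_emb = nn.Embedding(vocab, d_model)
        # Decoder-only LM (stand-in: Transformer layers with a causal mask).
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.lm = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(d_model, vocab)

    def lm_logits(self, seq):
        mask = nn.Transformer.generate_square_subsequent_mask(seq.size(1))
        return self.out(self.lm(seq, mask=mask))

    @torch.no_grad()
    def stream(self, feats, block=16, max_tok=8):
        """Greedy blockwise decoding; feats: (1, T, feat_dim)."""
        prompts, hyp, state = [], [], None
        for t in range(0, feats.size(1), block):
            enc, state = self.subnet(feats[:, t:t + block], state)
            # CTC prompt: keep only non-blank frames (compact representation);
            # block context prompt: the last frame summarises the block.
            keep = self.ctc_head(enc).argmax(-1)[0] != BLANK
            prompts.append(torch.cat([enc[:, keep], enc[:, -1:]], dim=1))
            for _ in range(max_tok):  # emit this block's tokens promptly
                toks = torch.tensor([hyp], dtype=torch.long)
                seq = torch.cat(prompts + [self.tok_emb(toks)], dim=1)
                nxt = self.lm_logits(seq)[0, -1].argmax().item()
                if nxt == EOB:        # assumed end-of-block token
                    break
                hyp.append(nxt)
        return hyp

# Example: stream 1.0 s of random 80-dim features through an untrained model.
model = StreamingDecoderOnlyASR().eval()
print(model.stream(torch.randn(1, 100, 80)))
```

Because prompts accumulate block by block, the decoder always conditions on everything heard so far while emitting tokens as soon as each block arrives, which is what keeps latency low.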
Abstract
Decoder-only language models (LMs) have been successfully adopted for speech-processing tasks, including automatic speech recognition (ASR). These LMs have ample expressiveness and run efficiently, a characteristic well suited to streaming ASR. In this work, we propose a decoder-only architecture for blockwise streaming ASR. In our approach, speech features are compressed into CTC outputs and context embeddings by a blockwise speech subnetwork and are sequentially provided as prompts to the decoder, which estimates the output tokens promptly at each block. To this end, we also propose a novel training scheme that uses random-length prefix prompts to make the model robust to the truncated prompts caused by blockwise processing. An experimental comparison shows that our proposed decoder-only streaming ASR achieves an 8% relative word error rate reduction on the LibriSpeech test-other set while running twice as fast as the baseline model.
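The following is a hypothetical sketch of one random-length prefix-prompt training step, reusing the stand-in model above: the full-utterance prompt sequence is truncated at a random point and the targets are clipped to the tokens covered by that prefix, so the decoder learns to predict from the partial prompts it will see during blockwise streaming. The `tgt_frame` alignment input (e.g. from a CTC forced alignment) and the loss wiring are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def prefix_prompt_loss(model, prompt_emb, tgt_ids, tgt_frame):
    """prompt_emb: (1, P, d) prompt embeddings for the full utterance;
    tgt_ids:    (1, U) reference token ids;
    tgt_frame:  (U,)   prompt index each token is aligned to (assumed given).
    """
    P = prompt_emb.size(1)
    cut = torch.randint(1, P + 1, (1,)).item()   # random prefix length
    prefix = prompt_emb[:, :cut]                 # truncated prompt
    tgt = tgt_ids[:, tgt_frame < cut]            # tokens covered by the prefix
    if tgt.size(1) == 0:
        return prefix.new_zeros(())              # nothing to predict this draw
    seq = torch.cat([prefix, model.tok_emb(tgt)], dim=1)
    logits = model.lm_logits(seq)                # causal LM forward
    pred = logits[:, cut - 1:-1]                 # positions predicting tgt
    return F.cross_entropy(pred.reshape(-1, pred.size(-1)), tgt.reshape(-1))
```

The random truncation is the key design choice: without it, the decoder would only ever see complete utterance-level prompts in training and would degrade on the mid-utterance prompt boundaries that blockwise inference produces. In full training this token-level loss would presumably be combined with the CTC loss on the speech subnetwork.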
