Distilling Conversations: Abstract Compression of Conversational Audio Context for LLM-based ASR

Shashi Kumar, Esaú Villatoro-Tello, Sergio Burdisso, Kadri Hacioglu, Thibault Bañeras-Roux, Hasindri Watawana, Dairazalia Sanchez-Cortes, Srikanth Madikeri, Petr Motlicek, Andreas Stolcke

Abstract

Standard LLM-based speech recognition systems typically process utterances in isolation, limiting their ability to leverage conversational context. In this work, we study whether multimodal context from prior turns improves LLM-based ASR and how to represent that context efficiently. We find that, after supervised multi-turn training, conversational context mainly helps with the recognition of contextual entities. However, conditioning on raw context is expensive because the prior-turn audio token sequence grows rapidly with conversation length. To address this, we propose Abstract Compression, which replaces the audio portion of prior turns with a fixed number of learned latent tokens while retaining corresponding transcripts explicitly. On both in-domain and out-of-domain test sets, the compressed model recovers part of the gains of raw-context conditioning with a smaller prior-turn audio footprint. We also provide targeted analyses of the compression setup and its trade-offs.
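
To make the mechanism concrete, here is a minimal, hypothetical PyTorch sketch of the idea described above: each prior turn's audio embeddings are distilled into a fixed number of learned latent tokens (via a Perceiver-style cross-attention pooler, an assumed design), while prior transcripts and the current turn's audio are passed through uncompressed. All module names, dimensions, and the pooling architecture are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch (assumptions, not the paper's implementation): each prior
# turn's audio embeddings are distilled into K learned latent tokens via
# cross-attention; transcripts stay as explicit text embeddings.
import torch
import torch.nn as nn


class AudioContextCompressor(nn.Module):
    """Distill a variable-length audio embedding sequence into K latent tokens."""

    def __init__(self, d_model: int = 1024, num_latents: int = 8, num_heads: int = 8):
        super().__init__()
        # Learned latent queries, shared across turns (hypothetical design choice).
        self.latents = nn.Parameter(torch.randn(num_latents, d_model) * 0.02)
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, audio_embeds: torch.Tensor) -> torch.Tensor:
        # audio_embeds: (batch, T_audio, d_model) -> (batch, K, d_model)
        queries = self.latents.unsqueeze(0).expand(audio_embeds.size(0), -1, -1)
        pooled, _ = self.cross_attn(queries, audio_embeds, audio_embeds)
        return self.norm(pooled)


def build_llm_inputs(prior_turns, current_audio, compressor):
    """Concatenate compressed prior-turn context with the current turn's
    full-resolution audio embeddings.

    prior_turns: list of (transcript_embeds, audio_embeds) tuples with shapes
                 (1, T_text, d) and (1, T_audio, d).
    current_audio: (1, T_cur, d) embeddings of the current utterance.
    """
    pieces = []
    for transcript_embeds, audio_embeds in prior_turns:
        pieces.append(transcript_embeds)          # transcript kept explicitly
        pieces.append(compressor(audio_embeds))   # audio -> K latent tokens
    pieces.append(current_audio)                  # current turn: uncompressed
    return torch.cat(pieces, dim=1)               # (1, total_len, d)


if __name__ == "__main__":
    d = 1024
    compressor = AudioContextCompressor(d_model=d, num_latents=8)
    prior = [(torch.randn(1, 12, d), torch.randn(1, 300, d)),
             (torch.randn(1, 9, d), torch.randn(1, 250, d))]
    current = torch.randn(1, 280, d)
    inputs = build_llm_inputs(prior, current, compressor)
    print(inputs.shape)  # (1, 12 + 8 + 9 + 8 + 280, 1024) = (1, 317, 1024)
```

The key property, under these assumptions, is that each prior turn contributes only a constant number of audio tokens to the LLM context, regardless of how long its audio was, while the current turn keeps its full-resolution audio tokens.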

Figures (2)

  • Figure 1: Overview of Abstract Compression for context-aware ASR. Prior conversational turns are represented by both their transcripts and audio. In our implementation, the audio from each prior turn is distilled into a fixed number of latent tokens, while the transcript is retained explicitly. These compressed context representations are provided to the LLM alongside the current turn’s full-resolution audio tokens. This preserves part of the conversational context while reducing the cost of raw-context conditioning.
  • Figure 2: Compression rates as a function of context size on the DefinedAI test set. Top: audio compression rate $\rho_n^{\text{audio}}$ (see Eq. \ref{eq:rho_aud}). Bottom: overall context compression rate $\rho_n^{\text{context}}$ (see Eq. \ref{eq:rho_hist}). Lines show the median across examples and shaded regions denote the interquartile range.
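
For orientation, one plausible form of the rates plotted in Figure 2 is given below. This is a hedged sketch, not the paper's actual definitions (Eqs. \ref{eq:rho_aud} and \ref{eq:rho_hist} are not reproduced in this excerpt, and the paper may use the reciprocal convention). Assuming each of the $n$ prior turns contributes $T_i$ transcript tokens and $A_i$ raw audio tokens, and Abstract Compression replaces each turn's audio with $K$ latent tokens, the compressed-to-raw ratios would be

$$\rho_n^{\text{audio}} = \frac{nK}{\sum_{i=1}^{n} A_i}, \qquad \rho_n^{\text{context}} = \frac{\sum_{i=1}^{n} T_i + nK}{\sum_{i=1}^{n} \left( T_i + A_i \right)}.$$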