VisG AV-HuBERT: Viseme-Guided AV-HuBERT

Aristeidis Papadopoulos, Rishabh Jain, Naomi Harte

Abstract

Modern Audio-Visual Speech Recognition (AVSR) systems integrate Large Language Model (LLM) decoders with transformer-based encoders, achieving state-of-the-art results. However, the relative contributions of improved language modelling versus enhanced audiovisual encoding remain unclear. We propose Viseme-Guided AV-HuBERT (VisG AV-HuBERT), a multi-task fine-tuning framework that incorporates auxiliary viseme classification to strengthen the model's reliance on visual articulatory features. By extending AV-HuBERT with a lightweight viseme prediction sub-network, this method explicitly guides the encoder to preserve visual speech information. Evaluated on LRS3, VisG AV-HuBERT achieves comparable or improved performance over the baseline AV-HuBERT, with notable gains under heavy noise: at -10 dB Signal-to-Noise Ratio (SNR) with Speech noise, Word Error Rate (WER) is reduced from 13.59% to 6.60%, a 51.4% relative improvement. Deeper analysis reveals substantial reductions in substitution errors across noise types, demonstrating improved speech unit discrimination, and evaluation on LRS2 confirms generalization capability. Our results demonstrate that explicit viseme modelling enhances encoder representations and provides a foundation for noise-robust AVSR through encoder-level improvements.
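
To make the multi-task setup concrete, here is a minimal PyTorch sketch of how an auxiliary viseme classification head could be attached to encoder outputs during fine-tuning. The viseme inventory size (`NUM_VISEMES = 14`), the loss weight `lam`, and the names `VisemeHead` and `multitask_loss` are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_VISEMES = 14    # assumed viseme inventory size (not specified here)
ENCODER_DIM = 1024  # AV-HuBERT Large hidden dimension

class VisemeHead(nn.Module):
    """Lightweight frame-level viseme classifier over encoder states."""
    def __init__(self, dim: int = ENCODER_DIM, num_visemes: int = NUM_VISEMES):
        super().__init__()
        self.proj = nn.Linear(dim, num_visemes)

    def forward(self, enc_out: torch.Tensor) -> torch.Tensor:
        # enc_out: (batch, frames, dim) -> logits: (batch, frames, num_visemes)
        return self.proj(enc_out)

def multitask_loss(asr_loss: torch.Tensor,
                   enc_out: torch.Tensor,
                   viseme_head: VisemeHead,
                   viseme_labels: torch.Tensor,
                   lam: float = 0.1) -> torch.Tensor:
    """Main ASR objective plus a weighted auxiliary viseme loss (lam assumed)."""
    logits = viseme_head(enc_out)                   # (B, T, V)
    aux = F.cross_entropy(logits.transpose(1, 2),   # cross_entropy expects (B, V, T)
                          viseme_labels,            # (B, T) frame-level viseme ids
                          ignore_index=-100)        # -100 marks padded frames
    return asr_loss + lam * aux
```

Because the auxiliary head is a single linear projection, it adds negligible parameters and can be discarded at inference, leaving the encoder itself as the artifact that benefits from the extra supervision.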

Paper Structure

This paper contains 29 sections, 3 equations, 12 figures, and 12 tables.

Figures (12)

  • Figure 1: VisG AV-HuBERT architecture showing the original AV-HuBERT (dotted outline) and the auxiliary viseme prediction sub-network (solid outline) added during fine-tuning.
  • Figure 2: Example of frame-level viseme label assignment with multiple consecutive frames mapped to a single viseme class (a labelling sketch follows this list).
  • Figure 3: LRS3 WER comparison under different noise conditions for the Large model.
  • Figure 4: Error difference (VisG AV-HuBERT WER minus baseline WER) by error type across noise conditions and SNR levels (Large model).
  • Figure 5: CER Difference between VisG AV-HuBERT and AV-HuBERT (Large).
  • ...and 7 more figures
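
Figure 2 shows consecutive frames sharing a single viseme class. The sketch below illustrates one plausible way to derive such frame-level labels from phoneme alignments; the phoneme-to-viseme table, the 25 fps frame rate, and the -100 padding label are assumptions for illustration only.

```python
# A toy phoneme-to-viseme map; real inventories group many more phonemes.
PHONEME_TO_VISEME = {"p": 0, "b": 0, "m": 0, "f": 1, "v": 1, "iy": 2}

def frames_to_visemes(alignment, num_frames, fps=25.0):
    """alignment: (phoneme, start_sec, end_sec) tuples from a forced aligner."""
    labels = [-100] * num_frames                 # -100 = unlabelled / padding
    for phone, start, end in alignment:
        vis = PHONEME_TO_VISEME.get(phone)
        if vis is None:
            continue                             # skip unmapped phonemes
        for t in range(int(start * fps), min(int(end * fps), num_frames)):
            labels[t] = vis                      # consecutive frames share one class
    return labels

# frames_to_visemes([("b", 0.00, 0.08), ("iy", 0.08, 0.20)], 10)
# -> [0, 0, 2, 2, 2, -100, -100, -100, -100, -100]
```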