Unlocking Strong Supervision: A Data-Centric Study of General-Purpose Audio Pre-Training Methods

Xuanru Zhou, Yiwen Shao, Wei-Cheng Tseng, Dong Yu

Abstract

Current audio pre-training seeks to learn unified representations for broad audio understanding tasks, but it remains fragmented and is fundamentally bottlenecked by its reliance on weak, noisy, and scale-limited labels. Drawing lessons from vision's foundational pre-training blueprint, we argue that the audio field must first establish its own large-scale, strong-supervision framework. We introduce a new data-centric pipeline that leverages a high-fidelity captioner to create SOTA-quality captions and the first Unified Tag System (UTS) that bridges speech, music, and environmental sounds. We then conduct a systematic comparative study of different pre-training objectives on this strongly supervised data. Our experiments suggest that data quality and coverage are the primary drivers of performance, while the choice of objective dictates downstream task specialization.

Paper Structure

This paper contains 42 sections, 7 equations, 3 figures, and 8 tables.

Figures (3)

  • Figure 1: An overview of our method. (a) Audio Tagging Pipeline: we generate tags by processing raw audio through Qwen3-Omni-Captioner [Qwen3-Omni] and an LLM parser [qwen2025qwen25technicalreport]. (b) Pre-Training Pipeline: the audio encoder is then pre-trained on these tags with two supervision objectives: a discriminative multi-tag classification (MTC) objective (using a linear classifier) and a generative parallel decoding (PAR) objective (using a text decoder with a bidirectional mask).
  • Figure 2: Analysis of our tag system. (a) Word cloud of the most frequent tags in our parsed tags, illustrating its diverse vocabulary. (b) t-SNE comparison of our UTS-1.5k (blue) and AudioSet (red), demonstrating our system's superior semantic coverage and density. (c) The characteristic long-tail tag frequency distribution of all parsed tags.
  • Figure 3: Impact of Tag System Size
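The discriminative MTC objective described in Figure 1 can be illustrated as a standard multi-label setup: a linear classifier over the pooled audio embedding, scored per tag with a sigmoid and binary cross-entropy. The sketch below is a minimal NumPy illustration of that idea; the function name, shapes, and the mean-reduction choice are our assumptions, not details taken from the paper.

```python
import numpy as np

def multi_tag_classification_loss(pooled_emb, W, b, tag_multi_hot):
    """Hypothetical sketch of an MTC-style loss: per-tag sigmoid + BCE.

    pooled_emb:    (d,)          pooled audio-encoder embedding
    W:             (num_tags, d) linear-classifier weights
    b:             (num_tags,)   biases
    tag_multi_hot: (num_tags,)   0/1 vector marking the active tags
    """
    logits = W @ pooled_emb + b
    probs = 1.0 / (1.0 + np.exp(-logits))  # independent sigmoid per tag
    eps = 1e-9  # numerical floor to avoid log(0)
    bce = -(tag_multi_hot * np.log(probs + eps)
            + (1.0 - tag_multi_hot) * np.log(1.0 - probs + eps))
    return bce.mean()

# Toy example with random embeddings and a 5-tag vocabulary.
rng = np.random.default_rng(0)
d, num_tags = 8, 5
emb = rng.normal(size=d)
W = 0.1 * rng.normal(size=(num_tags, d))
b = np.zeros(num_tags)
labels = np.array([1, 0, 1, 0, 0], dtype=float)
loss = multi_tag_classification_loss(emb, W, b, labels)
```

Unlike softmax classification, each tag is scored independently, so a clip can carry speech, music, and environmental-sound tags simultaneously, which matches the cross-domain coverage UTS is built for.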