A Human-Inspired Decoupled Architecture for Efficient Audio Representation Learning

Harunori Kawano, Takeshi Sasaki

Abstract

While self-supervised learning (SSL) has revolutionized audio representation learning, the excessive parameterization and quadratic computational cost of standard Transformers limit their deployment on resource-constrained devices. To address this bottleneck, we propose HEAR (Human-inspired Efficient Audio Representation), a novel decoupled architecture. Inspired by the human cognitive ability to isolate local acoustic features from global context, HEAR splits the processing pipeline into two dedicated modules: an Acoustic Model for local feature extraction and a Task Model for global semantic integration. Coupled with an Acoustic Tokenizer trained via knowledge distillation, our approach enables robust Masked Audio Modeling (MAM). Extensive experiments demonstrate that HEAR requires only 15M parameters and 9.47 GFLOPs for inference, operating at a fraction of the computational cost of conventional foundation models (which typically require 85M-94M parameters). Despite this high efficiency, HEAR achieves highly competitive performance across diverse audio classification benchmarks. The code and pre-trained models are available at https://github.com/HarunoriKawano/HEAR
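The abstract describes the decoupled design only at a high level. The sketch below illustrates the core idea in PyTorch: the Acoustic Model attends only within short local chunks of patch embeddings, while the lighter Task Model performs global integration over the full sequence. All module names, layer counts, dimensions, and the fixed-chunk locality are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a decoupled HEAR-style pipeline.
# Module names, sizes, and the chunking strategy are illustrative assumptions.
import torch
import torch.nn as nn


class AcousticModel(nn.Module):
    """Local feature extractor: attention is restricted to short chunks of patches."""

    def __init__(self, dim=384, depth=4, chunk=16):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=6, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.chunk = chunk

    def forward(self, x):                                   # x: (B, T, dim) patch embeddings
        B, T, D = x.shape
        pad = (-T) % self.chunk
        x = nn.functional.pad(x, (0, 0, 0, pad))             # pad time axis to a chunk multiple
        x = x.view(B * (T + pad) // self.chunk, self.chunk, D)
        x = self.encoder(x)                                   # attention only within each chunk
        return x.view(B, T + pad, D)[:, :T]                   # restore the full sequence


class TaskModel(nn.Module):
    """Global semantic integration over the locally encoded sequence."""

    def __init__(self, dim=384, depth=2, num_classes=527):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=6, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                                    # x: (B, T, dim)
        x = self.encoder(x)                                   # full-sequence (global) attention
        return self.head(x.mean(dim=1))                       # clip-level prediction


acoustic, task = AcousticModel(), TaskModel()
logits = task(acoustic(torch.randn(2, 100, 384)))             # -> shape (2, 527)
```

Restricting self-attention to fixed-length chunks keeps the expensive local stage linear in sequence length, which is one plausible way a decoupled design could reach the reported 9.47 GFLOPs budget.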

Figures (2)

  • Figure 1: Overview of the pre-training pipeline for the proposed HEAR framework. (a) Knowledge-Distilled Tokenizer Training: An acoustic tokenizer is trained to encode continuous mel-spectrogram patches into discrete semantic tokens, optimized via signal reconstruction, codebook diversity, and knowledge distillation from a pre-trained teacher model. (b) Masked Audio Modeling (MAM): Using the frozen tokenizer, the Acoustic Model is pre-trained to predict the discrete tokens of independently and randomly masked patches, enabling robust local feature extraction.
  • Figure 2: Comprehensive flow of the downstream adaptation, illustrating the feature-gating integration of the Acoustic Model outputs and the raw power spectrum into the Task Model (a minimal sketch of this gating step follows below).
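The excerpt does not spell out the gating equation used in the downstream adaptation of Figure 2. One common formulation is a learned sigmoid gate that blends the Acoustic Model features with a projection of the raw power spectrum before they reach the Task Model; the sketch below uses that formulation, and all dimensions and names are assumptions rather than the paper's exact integration.

```python
# Illustrative feature-gating fusion (Figure 2): a sigmoid gate interpolates
# between Acoustic Model outputs and a projected raw power spectrum.
# The gating formula, names, and dimensions are assumptions.
import torch
import torch.nn as nn


class FeatureGate(nn.Module):
    def __init__(self, acoustic_dim=384, spec_bins=257, dim=384):
        super().__init__()
        self.spec_proj = nn.Linear(spec_bins, dim)       # lift the spectrum to model width
        self.gate = nn.Linear(acoustic_dim + dim, dim)   # per-channel gate from both inputs

    def forward(self, acoustic_feats, power_spec):
        # acoustic_feats: (B, T, acoustic_dim), power_spec: (B, T, spec_bins)
        spec = self.spec_proj(power_spec)
        g = torch.sigmoid(self.gate(torch.cat([acoustic_feats, spec], dim=-1)))
        return g * acoustic_feats + (1.0 - g) * spec      # gated fusion fed to the Task Model


fused = FeatureGate()(torch.randn(2, 100, 384), torch.randn(2, 100, 257))
print(fused.shape)  # torch.Size([2, 100, 384])
```

A gate of this form lets the Task Model fall back on the raw power spectrum when the learned acoustic features are uninformative for a given downstream task, which matches the caption's description of integrating both signals.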