AND: Audio Network Dissection for Interpreting Deep Acoustic Models

Tung-Yu Wu, Yu-Xiang Lin, Tsui-Wei Weng

TL;DR

AND is the first Audio Network Dissection framework: a three-module, LLM-based pipeline that translates the audio most strongly activating each acoustic neuron into a natural-language description of that neuron. By combining closed-set concept identification, summary calibration, and open-set concept extraction, AND delivers interpretable neuron-level insights and enables concept-specific pruning for audio machine unlearning. The work demonstrates that acoustic models rely on combinations of basic features rather than high-level abstractions, and that supervised training narrows neuron attention while self-supervised learning fosters broader, polysemantic representations. The framework advances audio-model interpretability and offers practical tools for model auditing and unlearning, with broad implications for understanding how training regimes shape acoustic representations.

Abstract

Neuron-level interpretations aim to explain network behaviors and properties by investigating neurons responsive to specific perceptual or structural input patterns. Although such work is emerging in the vision and language domains, none has explored acoustic models. To bridge the gap, we introduce $\textit{AND}$, the first $\textbf{A}$udio $\textbf{N}$etwork $\textbf{D}$issection framework, which automatically establishes natural-language explanations of acoustic neurons based on highly-responsive audio. $\textit{AND}$ uses LLMs to summarize the mutual acoustic features and identities among audio. Extensive experiments verify that $\textit{AND}$ produces precise and informative descriptions. In addition, we demonstrate a potential use of $\textit{AND}$ for audio machine unlearning by conducting concept-specific pruning based on the generated descriptions. Finally, we use $\textit{AND}$'s analysis to highlight two acoustic model behaviors: (i) models discriminate audio with a combination of basic acoustic features rather than high-level abstract concepts; (ii) training strategies affect model behaviors and neuron interpretability -- supervised training guides neurons to gradually narrow their attention, while self-supervised learning encourages neurons to be polysemantic in order to explore high-level features.
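
To make the three-module pipeline concrete, below is a minimal Python sketch of how a single neuron could be dissected. It borrows the notation from Figure 1 (concept set $D_{c}$, probing dataset $D_{p}$, target network $F(\cdot)$), but the helper callables (`neuron_act`, `llm`), the caption metadata, and the prompt strings are illustrative assumptions, not the authors' implementation.

```python
from typing import Callable, Dict, List, Tuple

def dissect_neuron(
    neuron_act: Callable[[str], float],  # activation of one neuron of F on a clip id (assumed helper)
    D_p: List[str],                      # probing dataset: audio clip ids
    captions: Dict[str, str],            # per-clip text metadata (assumed available)
    D_c: List[str],                      # closed-ended concept set
    llm: Callable[[str], str],           # any instruction-following LLM endpoint
    k: int = 5,
) -> Tuple[str, str, str]:
    """Return (closed-set concept, calibrated summary, open-set concepts)."""
    # Rank probing audio by this neuron's activation and keep the top-k clips.
    top = sorted(D_p, key=neuron_act, reverse=True)[:k]
    texts = [captions[a] for a in top]

    # (A) Closed-concept identification: match against the fixed set D_c.
    c_closed = llm(f"Pick the single concept from {D_c} that best describes: {texts}")

    # (B) Summary calibration: summarize shared acoustic features, then
    # calibrate the summary so it stays faithful to the top audio.
    summary = llm(f"Summarize the mutual acoustic features of: {texts}")
    s_calibrated = llm(f"Revise this summary so it is consistent with {texts}: {summary}")

    # (C) Open-concept identification: extract free-form concepts.
    c_open = llm(f"List the open-ended acoustic concepts in: {s_calibrated}")
    return c_closed, s_calibrated, c_open
```

The coarse-to-fine ordering is the point of the design: the closed-set match (A) grounds a neuron in a known vocabulary, while the calibrated summary (B) feeds the open-set extraction (C) with text that has been checked against the neuron's highly-responsive audio.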

Paper Structure

This paper contains 34 sections, 2 equations, 14 figures, 8 tables, and 1 algorithm.

Figures (14)

  • Figure 1: The proposed framework of AND. Taking concept set $D_{c}$, probing dataset $D_{p}$, and target network $F(\cdot)$ as inputs, AND employs a coarse-to-fine LLM-based pipeline to analyze each neuron's highly-responsive acoustic concepts by three specialized modules: (A) closed-concept identification, (B) summary calibration, and (C) open-concept identification. Closed-ended concept $C_{\text{closed-set}}$, calibrated summary $S^{c}_{h}$, and open-ended concept $C_{\text{open-set}}$ are generated as outputs of AND.
  • Figure 2: Detailed illustration of AND's three specialized modules: (A) closed-concept identification, (B) summary calibration, and (C) open-concept identification.
  • Figure 3: Count of adjectives across all of AST's linear-layer neurons, generated by the open-concept identification module. The top-10 most-used adjectives are shown.
  • Figure 4: Feature importance analysis of AST on ESC50, measured by module C in AND. The x-axis is the percentage of ablated linear-layer neurons, and the y-axis is the test performance on ESC50 after pruning (a pruning sketch in this spirit follows this list).
  • Figure 5: Average number of adjectives per neuron in different transformer blocks of AST, BEATs-finetuned, and BEATs-frozen.
  • ...and 9 more figures
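
As referenced in the Figure 4 entry above, both the feature-importance analysis and the unlearning experiments prune neurons by concept. Below is a hedged PyTorch sketch of concept-specific pruning, assuming AND's per-neuron descriptions are available as a `(layer_name, unit_index) -> text` mapping; the substring match is an illustrative selection rule, not the paper's exact criterion.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def prune_concept(model: nn.Module,
                  descriptions: dict,  # (layer_name, unit_index) -> AND description (assumed)
                  concept: str) -> int:
    """Zero the outgoing weights of linear-layer neurons whose AND
    description mentions `concept`; returns the number of pruned units."""
    pruned = 0
    for name, module in model.named_modules():
        if not isinstance(module, nn.Linear):
            continue
        for unit in range(module.out_features):
            text = descriptions.get((name, unit), "")
            if concept.lower() in text.lower():
                module.weight[unit].zero_()  # silence this unit's output row
                if module.bias is not None:
                    module.bias[unit] = 0.0
                pruned += 1
    return pruned
```

Sweeping the fraction of pruned neurons and re-evaluating on ESC50 after each step would trace a curve like Figure 4; pruning only the neurons tied to a target concept is the unlearning use case described in the abstract.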