
TRACE: Training-Free Partial Audio Deepfake Detection via Embedding Trajectory Analysis of Speech Foundation Models

Awais Khan, Muhammad Umar Farooq, Kutub Uddin, Khalid Malik

Abstract

Partial audio deepfakes, where synthesized segments are spliced into genuine recordings, are particularly deceptive because most of the audio remains authentic. Existing detectors are supervised: they require frame-level annotations, overfit to specific synthesis pipelines, and must be retrained as new generative models emerge. We argue that this supervision is unnecessary. We hypothesize that speech foundation models implicitly encode a forensic signal: genuine speech forms smooth, slowly varying embedding trajectories, while splice boundaries introduce abrupt disruptions in frame-level transitions. Building on this, we propose TRACE (Training-free Representation-based Audio Countermeasure via Embedding dynamics), a training-free framework that detects partial audio deepfakes by analyzing the first-order dynamics of frozen speech foundation model representations, without any training, labeled data, or architectural modification. We evaluate TRACE on four benchmarks spanning two languages, using six speech foundation models. On PartialSpoof, TRACE achieves 8.08% EER, competitive with fine-tuned supervised baselines. On LlamaPartialSpoof, the most challenging benchmark featuring LLM-driven commercial synthesis, TRACE surpasses a supervised baseline outright (24.12% vs. 24.49% EER) without any target-domain data. These results show that temporal dynamics in speech foundation models provide an effective, generalizable signal for training-free audio forensics.

Paper Structure

This paper contains 25 sections, 10 equations, 7 figures, 7 tables.

Figures (7)

  • Figure 1: Overview of the TRACE pipeline. A raw waveform is passed through a frozen speech foundation model (WavLM-Large, layer 18). Frame embeddings are L2-normalized van2017l2 onto the unit hypersphere, and the chord distance between consecutive projections forms the first-order dynamics sequence $\{\text{F1}_t\}$. Closed-form statistics are extracted and linearly fused into a scalar detection score, which is orientation-calibrated and thresholded to produce the final bonafide or spoof decision. No model parameters are updated at any stage.
  • Figure 2: Score distributions of TRACE across four benchmarks: (a) PartialSpoof, (b) HAD, (c) ADD 2023, (d) LlamaPartialSpoof. The consistent directionality across datasets confirms the language and synthesis-method independence of TRACE.
  • Figure 3: Cross-dataset generalization of TRACE: Transfer EER (hatched) vs Free EER (solid) across three out-of-domain test sets. LlamaPartialSpoof includes the supervised ps-train baseline (red dashed) for reference.
  • Figure 4: Encoder $\times$ statistic EER heatmap on PartialSpoof. F1 consistently outperforms F2 across all encoders. WavLM-Large + F1-std achieves the best EER (16.4%). Kurtosis-based features are unstable due to sensitivity to outliers.
  • Figure 5: Progressive improvement of TRACE on the PartialSpoof dataset. Horizontal dashed lines denote supervised baselines reported in zhang2022partialspoof.
  • ...and 2 more figures
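
The pipeline sketched in Figure 1 can be illustrated with a short NumPy sketch. This is an illustrative reconstruction from the caption, not the authors' released code: `trace_score` is a hypothetical name, and only the F1-std statistic (highlighted in Figure 4) is shown, whereas the paper linearly fuses several closed-form statistics and applies orientation calibration.

```python
import numpy as np

def trace_score(frame_embeddings: np.ndarray) -> float:
    """Sketch of TRACE's first-order dynamics statistic.

    frame_embeddings: (T, D) array of frame-level features from a
    frozen speech foundation model (e.g., WavLM-Large, layer 18).
    Returns the standard deviation of consecutive chord distances
    (the F1-std statistic); higher values suggest abrupt trajectory
    disruptions such as splice boundaries.
    """
    # Project each frame embedding onto the unit hypersphere
    # (L2 normalization, guarding against zero-norm frames).
    norms = np.linalg.norm(frame_embeddings, axis=1, keepdims=True)
    unit = frame_embeddings / np.maximum(norms, 1e-12)

    # First-order dynamics: chord distance between consecutive
    # projections, F1_t = ||u_{t+1} - u_t||_2.
    f1 = np.linalg.norm(unit[1:] - unit[:-1], axis=1)

    # One closed-form statistic over the dynamics sequence; the
    # full method fuses several such statistics linearly.
    return float(f1.std())
```

On a smooth trajectory (genuine speech, per the paper's hypothesis) the chord distances are nearly constant, so the score stays near zero; inserting an abrupt jump mid-sequence, as a splice boundary would, inflates it.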