
Woosh: A Sound Effects Foundation Model

Gaëtan Hadjeres, Marc Ferras, Khaled Koutini, Benno Weck, Alexandre Bittar, Thomas Hummel, Zineb Lahrici, Hakim Missoum, Joan Serrà, Yuki Mitsufuji

Abstract

The audio research community depends on open generative models as foundational tools for building novel approaches and establishing baselines. In this report, we present Woosh, Sony AI's publicly released sound effects foundation model, detailing its architecture, training process, and an evaluation against other popular open models. Optimized for sound effects, the release provides (1) a high-quality audio encoder/decoder model and (2) a text-audio alignment model for conditioning, together with (3) text-to-audio and (4) video-to-audio generative models. Distilled text-to-audio and video-to-audio models are also included, allowing for low-resource operation and fast inference. Our evaluation on both public and private data shows competitive or better performance for each module compared to existing open alternatives such as StableAudio-Open and TangoFlux. Inference code and model weights are available at https://github.com/SonyResearch/Woosh. Demo samples can be found at https://sonyresearch.github.io/Woosh/.

Figures (5)

  • Figure 1: Inference-time layout of the Woosh-Flow (left) and Woosh-VFlow (right) models for text-to-audio and video-to-audio generation, respectively.
  • Figure 2: VOCOS decoder architecture as a cascade of ConvNeXt blocks, used in Woosh-AE (a block sketch follows this list).
  • Figure 3: Woosh-CLAP training block diagram for a positive pair of samples. Only the text encoder is used at generation time to provide the conditioning signal.
  • Figure 4: Multimodal transformer stack in the Woosh-Flow diffusion model, formed by MultiStream (MS) and SingleStream (SS) blocks.
  • Figure 5: MultiStream transformer block diagram (left). Both self-attention and feed-forward network (FFN) outputs are computed independently for each modality sequence. SingleStream transformer block diagram (right). Self-attention is computed on the concatenation of modality sequences along the time dimension. Instead of an FFN block, non-linear features are computed in a parallel path and added after the attention projection layer (both blocks are sketched below).
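
To make the Figure 2 caption concrete, the following is a minimal sketch of one 1-D ConvNeXt block of the kind cascaded in VOCOS-style decoders: a depthwise convolution followed by LayerNorm and a pointwise MLP, wrapped in a residual connection. The class name, channel count, kernel size, and expansion factor are illustrative assumptions, not values from Woosh-AE, and the sketch omits layer scaling, conditioning, and the final head that maps features back to audio.

```python
import torch
import torch.nn as nn


class ConvNeXtBlock1d(nn.Module):
    """Illustrative 1-D ConvNeXt block (depthwise conv -> LayerNorm -> pointwise MLP),
    as cascaded in VOCOS-style decoders. Hyperparameters are assumptions."""

    def __init__(self, dim: int = 512, expansion: int = 3, kernel_size: int = 7):
        super().__init__()
        # Depthwise convolution mixes information along the time (frame) axis only.
        self.dwconv = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2, groups=dim)
        self.norm = nn.LayerNorm(dim)
        # Pointwise MLP mixes channels, expanding and then projecting back.
        self.pwconv1 = nn.Linear(dim, expansion * dim)
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(expansion * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim, frames)
        residual = x
        x = self.dwconv(x)
        x = x.transpose(1, 2)  # (batch, frames, dim) so LayerNorm/Linear act on channels
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.transpose(1, 2)
        return residual + x    # residual connection around the whole block
```

A decoder in this style would simply stack several such blocks and append an output head; the exact depth and head used in Woosh-AE are described in the paper body, not here.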
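
The Figure 4 and Figure 5 captions are enough to sketch the two transformer blocks in rough PyTorch. This is a hedged sketch under assumptions: the class names, hidden sizes, use of nn.MultiheadAttention, and residual wiring are ours, and it omits timestep/text conditioning (e.g. adaptive normalization), positional embeddings, and whatever projection sharing Woosh-Flow actually uses. It only illustrates the stated split: MultiStream blocks keep separate attention and FFN paths per modality, while SingleStream blocks attend over the concatenated sequences and replace the FFN with a parallel non-linear path added alongside the attention output.

```python
import torch
import torch.nn as nn


class MultiStreamBlock(nn.Module):
    """MS block sketch: each modality sequence gets its own attention and FFN weights."""

    def __init__(self, dim: int, n_heads: int, n_streams: int = 2):
        super().__init__()
        self.norms1 = nn.ModuleList(nn.LayerNorm(dim) for _ in range(n_streams))
        self.attns = nn.ModuleList(
            nn.MultiheadAttention(dim, n_heads, batch_first=True) for _ in range(n_streams)
        )
        self.norms2 = nn.ModuleList(nn.LayerNorm(dim) for _ in range(n_streams))
        self.ffns = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_streams)
        )

    def forward(self, streams):  # streams: list of (batch, T_i, dim) tensors, one per modality
        out = []
        for x, n1, attn, n2, ffn in zip(streams, self.norms1, self.attns, self.norms2, self.ffns):
            h = n1(x)
            x = x + attn(h, h, h, need_weights=False)[0]  # per-modality self-attention
            x = x + ffn(n2(x))                            # per-modality FFN
            out.append(x)
        return out


class SingleStreamBlock(nn.Module):
    """SS block sketch: joint self-attention over the concatenated sequences; a parallel
    non-linear (MLP) path is added together with the attention output instead of a
    separate FFN sub-block."""

    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, streams):
        lengths = [x.shape[1] for x in streams]
        x = torch.cat(streams, dim=1)                     # concatenate along the time dimension
        h = self.norm(x)
        attn_out = self.attn(h, h, h, need_weights=False)[0]
        x = x + attn_out + self.mlp(h)                    # attention and MLP paths in parallel
        return list(torch.split(x, lengths, dim=1))       # split back into per-modality sequences
```

A Woosh-Flow-style stack would apply several MultiStream blocks to the audio-latent and conditioning sequences and then several SingleStream blocks to their concatenation; the actual block counts and conditioning mechanism are given in the architecture section of the paper.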