DynFOA: Generating First-Order Ambisonics with Conditional Diffusion for Dynamic and Acoustically Complex 360-Degree Videos

Ziyu Luo, Lin Chen, Qiang Qu, Xiaoming Chen, Yiran Shen

Abstract

Spatial audio is crucial for immersive 360-degree video experiences, yet most 360-degree videos lack it because spatial audio is difficult to capture during recording. Automatically generating spatial audio such as first-order ambisonics (FOA) from video therefore remains an important but challenging problem. In complex scenes, sound perception depends not only on sound source locations but also on scene geometry, materials, and dynamic interactions with the environment. However, existing approaches rely only on visual cues and fail to model dynamic sources and acoustic effects such as occlusion, reflections, and reverberation. To address these challenges, we propose DynFOA, a generative framework that synthesizes FOA from 360-degree videos by integrating dynamic scene reconstruction with conditional diffusion modeling. DynFOA analyzes the input video to detect and localize dynamic sound sources, estimate depth and semantics, and reconstruct scene geometry and materials using 3D Gaussian Splatting (3DGS). The reconstructed scene representation provides physically grounded features that capture acoustic interactions among sources, the environment, and the listener viewpoint. Conditioned on these features, a diffusion model generates spatial audio consistent with the scene dynamics and acoustic context. We introduce M2G-360, a dataset of 600 real-world clips divided into MoveSources, Multi-Source, and Geometry subsets for evaluating robustness under diverse conditions. Experiments show that DynFOA consistently outperforms existing methods in spatial accuracy, acoustic fidelity, distribution matching, and perceived immersion.
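
As background on the target representation: the sketch below shows a standard first-order ambisonics (FuMa B-format) encoding of a single mono source at a given azimuth and elevation into the W, X, Y, Z channels that DynFOA generates. This is generic ambisonics background rather than the paper's method; the function name `encode_foa` and the signal shapes are our own illustration.

```python
import numpy as np

def encode_foa(mono, azimuth, elevation):
    """Encode a mono signal into first-order ambisonics (FuMa B-format W, X, Y, Z).

    mono:      (T,) mono source signal
    azimuth:   (T,) or scalar azimuth in radians (0 = front, positive = left)
    elevation: (T,) or scalar elevation in radians (positive = up)
    Returns a (4, T) array with channels ordered W, X, Y, Z.
    """
    w = mono / np.sqrt(2.0)                         # omnidirectional channel
    x = mono * np.cos(azimuth) * np.cos(elevation)  # front-back component
    y = mono * np.sin(azimuth) * np.cos(elevation)  # left-right component
    z = mono * np.sin(elevation)                    # up-down component
    return np.stack([np.broadcast_to(w, mono.shape),
                     np.broadcast_to(x, mono.shape),
                     np.broadcast_to(y, mono.shape),
                     np.broadcast_to(z, mono.shape)])
```

A moving source is simply a time-varying azimuth/elevation trajectory fed to the same encoding, which is why dynamic source localization matters for FOA generation.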

Paper Structure

This paper contains 35 sections, 7 equations, 2 figures, and 5 tables.

Figures (2)

  • Figure 1: Architecture Overview of the Proposed DynFOA Backbone. (1) The Video Encoder reconstructs 3D scene geometry from the 360-degree video via source detection, depth estimation, and semantic segmentation, extracting explicit physical features such as occlusion, reflections, and reverberation. (2) The FOA Latent Encoder improves the robustness of the spatial audio representation to occlusion, reflections, and reverberation through dynamic sound source processing; this module is used only during training to encode ground-truth FOA into latent targets. (3) The Conditional Diffusion Generator is the core synthesizer: it employs a Multi-Condition Encoder and Cross-Modal Fusion to guide a U-Net denoiser. During inference, DynFOA drops the FOA Latent Encoder and operates purely on video-conditioned diffusion, producing high-fidelity spatial audio from the 360-degree video through a pretrained VAE Decoder (an illustrative conditioning sketch follows this list).
  • Figure 2: Mel-spectrogram comparison of the FOA channels ($W$, $X$, $Y$, $Z$) in a complex indoor piano environment from M2G-360. While state-of-the-art baselines suffer from severe high-frequency attenuation, temporal discontinuities, and loss of inter-channel spatial correlation, DynFOA reconstructs the full harmonic structure and spatial energy distribution, closely matching the ground truth (GT). This visually demonstrates that integrating 3D geometry and material priors into the conditional diffusion process prevents acoustic degradation and preserves physically coherent acoustics in reverberant environments (a sketch of the per-channel Mel-spectrogram computation also follows this list).
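
The caption of Figure 1 describes Cross-Modal Fusion guiding a U-Net denoiser but does not specify the fusion mechanism. A minimal sketch, assuming a cross-attention block in which noisy FOA latent tokens attend to the condition tokens produced by the Multi-Condition Encoder, might look as follows; the class name `CrossModalFusion`, the dimensions, and the residual structure are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Illustrative cross-attention block: audio latent tokens attend to
    visual/geometry condition tokens. A generic sketch, not DynFOA's code."""

    def __init__(self, latent_dim=256, cond_dim=256, num_heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(latent_dim)
        self.attn = nn.MultiheadAttention(latent_dim, num_heads,
                                          kdim=cond_dim, vdim=cond_dim,
                                          batch_first=True)

    def forward(self, audio_latents, cond_tokens):
        # audio_latents: (B, N_audio, latent_dim) noisy FOA latent tokens
        # cond_tokens:   (B, N_cond, cond_dim) fused video/geometry features
        q = self.norm(audio_latents)
        fused, _ = self.attn(q, cond_tokens, cond_tokens)
        return audio_latents + fused  # residual connection keeps the denoiser stable
```

Such a block would typically be interleaved with the U-Net's self-attention and convolutional stages at each resolution, so that the denoiser is steered by scene geometry and material cues at every diffusion step.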
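
The comparison in Figure 2 is based on per-channel Mel-spectrograms of the four FOA channels. The paper does not state the exact analysis settings, so the sketch below uses common defaults (librosa, with hypothetical `sr` and `n_mels` values) purely to illustrate how such a visualization is produced.

```python
import numpy as np
import librosa

def foa_mel_spectrograms(foa, sr=16000, n_mels=128):
    """Compute a log-Mel spectrogram for each FOA channel (W, X, Y, Z).

    foa: (4, T) first-order ambisonics signal.
    Returns a (4, n_mels, frames) array, one spectrogram per channel.
    """
    mels = []
    for ch in foa:
        m = librosa.feature.melspectrogram(y=ch, sr=sr, n_mels=n_mels)
        mels.append(librosa.power_to_db(m, ref=np.max))  # convert to dB scale
    return np.stack(mels)
```

Plotting the four resulting spectrograms side by side for the generated audio and the ground truth makes high-frequency attenuation, temporal discontinuities, and broken inter-channel correlation directly visible, which is the comparison Figure 2 reports.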