SoundVista: Novel-View Ambient Sound Synthesis via Visual-Acoustic Binding

CVPR 2025 (Highlight)

1 University of Washington    2 Codec Avatars Lab, Pittsburgh PA, Meta
3 Reality Labs Research, Meta
(* Work done during an internship at Meta)


SoundVista synthesizes binaural ambient sound for arbitrary scenes at novel viewpoints. It leverages audio recordings and visual data pre-acquired at sparsely distributed reference points to synthesize binaural audio consistent with the target 3D position and pose.

Abstract

We introduce SoundVista, a method to generate the ambient sound of an arbitrary scene at novel viewpoints. Given a pre-acquired recording of the scene from sparsely distributed microphones, SoundVista can synthesize the sound of that scene from an unseen target viewpoint. The method learns the underlying acoustic transfer function that relates the signals acquired at the distributed microphones to the signal at the target viewpoint, using a limited number of known recordings. Unlike existing works, our method does not require constraints on, or prior knowledge of, sound source details. Moreover, our method adapts efficiently to diverse room layouts, reference microphone configurations, and unseen environments. To enable this, we introduce a visual-acoustic binding module that learns visual embeddings linked with local acoustic properties from panoramic RGB and depth data. We first leverage these embeddings to optimize the placement of reference microphones in any given scene. During synthesis, we leverage multiple embeddings extracted from the reference locations to derive adaptive weights for their contributions, conditioned on the target viewpoint. We benchmark the task on both publicly available data and real-world settings, and demonstrate significant improvements over existing methods.
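To make the viewpoint-conditioned weighting concrete, here is a minimal PyTorch sketch of deriving contribution weights for reference-location embeddings from a target pose. All names, dimensions, and the simple multiplicative scoring are illustrative assumptions, not the paper's actual transformer-based module.

```python
import torch
import torch.nn as nn

class ReferenceWeighting(nn.Module):
    """Sketch: derive per-reference contribution weights from VAB
    embeddings, conditioned on the target viewpoint (hypothetical
    names and dimensions)."""

    def __init__(self, embed_dim=128, pose_dim=7):
        super().__init__()
        # Project the target position + orientation into the embedding space.
        self.pose_proj = nn.Linear(pose_dim, embed_dim)
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, ref_embeds, target_pose):
        # ref_embeds:  (n_refs, embed_dim) VAB embeddings at reference locations
        # target_pose: (pose_dim,) target 3D position and pose
        query = self.pose_proj(target_pose)              # (embed_dim,)
        scores = self.score(ref_embeds * query)          # (n_refs, 1)
        return torch.softmax(scores.squeeze(-1), dim=0)  # weights sum to 1

# Toy usage: 8 references, 128-dim embeddings, 7-dim pose.
weights = ReferenceWeighting()(torch.randn(8, 128), torch.randn(7))
print(weights.shape)  # torch.Size([8])
```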

Demo Videos

Please note: unmute the audio and listen with headphones for the best experience.

SoundVista Pipeline


Details of the SoundVista pipeline: (a) The reference location sampler selects optimal reference locations by leveraging embeddings from visual-acoustic binding (VAB). (b) The reference integration transformer uses VAB embeddings to derive contribution weights for each reference. (c) Reweighting by these contribution weights adjusts and integrates the reference recording channels and pose conditioning for precise sound synthesis. (d) The spatial audio renderer converts the reweighted channels and conditioning into binaural sound at the target viewpoint.
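Step (c) reduces to a per-channel scaling once the weights from step (b) are known. A minimal sketch, assuming mono reference recordings stacked as a tensor (the function name and shapes are our assumptions):

```python
import torch

def reweight_references(ref_audio, weights):
    """Step (c) sketch: scale each reference recording channel by its
    contribution weight; the spatial audio renderer (d) then maps the
    reweighted channels, plus pose conditioning, to binaural sound.
    ref_audio: (n_refs, n_samples) reference microphone recordings
    weights:   (n_refs,) weights from the reference integration transformer
    """
    return weights.unsqueeze(-1) * ref_audio  # (n_refs, n_samples)

# Toy usage: 8 references, 1 second of 48 kHz audio each.
reweighted = reweight_references(torch.randn(8, 48000), torch.rand(8))
```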

Visualization Analysis


Visual-Acoustic Binding helps the clustering align better with room partitions and indicates informative reference sampling locations.
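One plausible reading of the reference location sampler is: cluster the VAB embeddings over candidate positions, then place one microphone per cluster. The sketch below follows that reading with scikit-learn's KMeans; the function name, inputs, and centroid-nearest selection rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def sample_reference_locations(vab_embeds, candidate_xyz, n_refs=8):
    """Sketch: cluster VAB embeddings over candidate positions and, per
    cluster, pick the candidate closest to the centroid as a reference site.
    vab_embeds:    (n_candidates, d) embeddings from visual-acoustic binding
    candidate_xyz: (n_candidates, 3) corresponding 3D positions
    """
    km = KMeans(n_clusters=n_refs, n_init=10).fit(vab_embeds)
    picks = []
    for c in range(n_refs):
        idx = np.flatnonzero(km.labels_ == c)
        dists = np.linalg.norm(vab_embeds[idx] - km.cluster_centers_[c], axis=1)
        picks.append(candidate_xyz[idx[np.argmin(dists)]])
    return np.stack(picks)  # (n_refs, 3) selected microphone positions
```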


Qualitative comparison of our results with several competitive sound synthesis baselines: ViGAS, BEE, and Few-shotRIR.

BibTeX

@misc{chen2025soundvistanovelviewambientsound,
      title={SoundVista: Novel-View Ambient Sound Synthesis via Visual-Acoustic Binding}, 
      author={Mingfei Chen and Israel D. Gebru and Ishwarya Ananthabhotla and Christian Richardt and Dejan Markovic and Jake Sandakly and Steven Krenn and Todd Keebler and Eli Shlizerman and Alexander Richard},
      year={2025},
      eprint={2504.05576},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2504.05576}, 
}