VAST stands for Virtual Acoustic Space Traveler and is a new paradigm for learning-based sound source localization and audio scene geometry estimation. Most existing methods that estimate the position of a sound source, or other geometrical properties of an audio scene, are either based on an approximate physical model (physics-driven) or on a specific-purpose calibration set (data-driven). The idea behind VAST is to learn a mapping from audio features to the desired geometrical properties using a massive dataset of simulated room impulse responses. The dataset is designed to be maximally representative of the audio scenes the considered system may encounter, while remaining reasonably compact. The aim is to demonstrate that mappings learned on virtual datasets generalize well to real-world data, and to provide a useful tool for research teams interested in sound source localization.
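To make the idea concrete, here is a minimal sketch of virtually supervised learning. Everything in it is a toy assumption, not the actual VAST pipeline: a random linear map stands in for the binaural features one would extract from simulated room impulse responses, and a plain nearest-neighbor lookup stands in for the learned regression. The point is only the structure: train on simulated (feature, position) pairs, then predict positions for unseen features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the simulator: a fixed linear map from 2-D source
# position to a 3-D feature vector, plus noise. In VAST the features
# would instead be computed from simulated room impulse responses.
A = np.array([[1.0, 0.2, -0.5],
              [0.3, 1.0, 0.8]])

def synth_features(pos):
    """Hypothetical feature extraction for a batch of source positions."""
    return pos @ A + 0.01 * rng.standard_normal((len(pos), 3))

# "Virtual" training set: source positions sampled over a 3 m x 3 m area.
train_pos = rng.uniform(0.0, 3.0, size=(2000, 2))
train_feat = synth_features(train_pos)

def localize(feat):
    """Nearest-neighbor regression: map each feature vector to the
    position of its closest training feature."""
    d = np.linalg.norm(train_feat[None, :, :] - feat[:, None, :], axis=2)
    return train_pos[d.argmin(axis=1)]

# Evaluate on held-out positions the mapping has never seen.
test_pos = rng.uniform(0.0, 3.0, size=(200, 2))
pred = localize(synth_features(test_pos))
err = np.mean(np.linalg.norm(pred - test_pos, axis=1))
print(f"mean localization error: {err:.3f} m")
```

In practice the nearest-neighbor step would be replaced by a proper regression model, and the feature map by acoustic simulation, but the train-on-virtual / test-on-real structure is the same.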
Clément Gaultier, Saurabh Kataria, Diego Di Carlo and I are working on the release of datasets for VAST. Two binaural datasets are already available on the project website. We co-authored two publications demonstrating this paradigm for binaural 3D sound source localization and wall absorption estimation using these datasets.
- Website: http://theVASTproject.inria.fr
- References:
  - VAST: The Virtual Acoustic Space Traveler Dataset, Clément Gaultier, Saurabh Kataria, Antoine Deleforge, International Conference on Latent Variable Analysis and Signal Separation (LVA/ICA), Feb 2017, Grenoble, France.
  - Hearing in a shoe-box: binaural source position and wall absorption estimation using virtually supervised learning, Saurabh Kataria, Clément Gaultier, Antoine Deleforge, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Mar 2017, New Orleans, United States.